Result: success
Tests: 0 failed / 128 succeeded
Started: 2022-09-08 10:03
Elapsed: 7h38m
Revision:
Uploader: crier

No Test Failures!

Error lines from build-log.txt

... skipping 405 lines ...
Looking for address 'kubemark-5000-master-ip'
Looking for address 'kubemark-5000-master-internal-ip'
Using master: kubemark-5000-master (external IP: 35.231.79.197; internal IP: 10.40.0.2)
Waiting up to 300 seconds for cluster initialization.

  This will continually check to see if the API for kubernetes is reachable.
  This may time out if there was some uncaught error during start up.

Kubernetes cluster created.
Cluster "k8s-infra-e2e-boskos-scale-23_kubemark-5000" set.
User "k8s-infra-e2e-boskos-scale-23_kubemark-5000" set.
Context "k8s-infra-e2e-boskos-scale-23_kubemark-5000" created.
Switched to context "k8s-infra-e2e-boskos-scale-23_kubemark-5000".
... skipping 194 lines ...
kubemark-5000-minion-group-z4g1   Ready                         <none>   46s    v1.26.0-alpha.0.439+8287e21228e9b1
kubemark-5000-minion-group-zthx   Ready                         <none>   44s    v1.26.0-alpha.0.439+8287e21228e9b1
kubemark-5000-minion-heapster     Ready                         <none>   65s    v1.26.0-alpha.0.439+8287e21228e9b1
Warning: v1 ComponentStatus is deprecated in v1.19+
Validate output:
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME                 STATUS    MESSAGE                         ERROR
etcd-1               Healthy   {"health":"true","reason":""}   
etcd-0               Healthy   {"health":"true","reason":""}   
controller-manager   Healthy   ok                              
scheduler            Healthy   ok                              
Cluster validation encountered some problems, but cluster should be in working order
...ignoring non-fatal errors in validate-cluster
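The "Cluster validation" verdict above comes from inspecting the componentstatus table. A minimal sketch (names are mine; it assumes `kubectl get componentstatuses` tabular output in the form shown) of how such a table can be checked mechanically:

```python
def unhealthy_components(cs_table: str) -> list[str]:
    """Parse `kubectl get componentstatuses` tabular output and
    return the names of components whose STATUS is not Healthy."""
    bad = []
    for line in cs_table.strip().splitlines()[1:]:  # skip the header row
        fields = line.split()
        name, status = fields[0], fields[1]
        if status != "Healthy":
            bad.append(name)
    return bad
```

With the table logged above, this returns an empty list, matching the "cluster should be in working order" outcome.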
Done, listing cluster services:

Kubernetes control plane is running at https://35.231.79.197
GLBCDefaultBackend is running at https://35.231.79.197/api/v1/namespaces/kube-system/services/default-http-backend:http/proxy
CoreDNS is running at https://35.231.79.197/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
Metrics-server is running at https://35.231.79.197/api/v1/namespaces/kube-system/services/https:metrics-server:/proxy
... skipping 228 lines ...
Looking for address 'kubemark-5000-kubemark-master-ip'
Looking for address 'kubemark-5000-kubemark-master-internal-ip'
Using master: kubemark-5000-kubemark-master (external IP: 34.139.38.14; internal IP: 10.40.3.216)
Waiting up to 300 seconds for cluster initialization.

  This will continually check to see if the API for kubernetes is reachable.
  This may time out if there was some uncaught error during start up.

Kubernetes cluster created.
Cluster "k8s-infra-e2e-boskos-scale-23_kubemark-5000-kubemark" set.
User "k8s-infra-e2e-boskos-scale-23_kubemark-5000-kubemark" set.
Context "k8s-infra-e2e-boskos-scale-23_kubemark-5000-kubemark" created.
Switched to context "k8s-infra-e2e-boskos-scale-23_kubemark-5000-kubemark".
... skipping 20 lines ...
Found 1 node(s).
NAME                            STATUS                     ROLES    AGE   VERSION
kubemark-5000-kubemark-master   Ready,SchedulingDisabled   <none>   15s   v1.26.0-alpha.0.439+8287e21228e9b1
Warning: v1 ComponentStatus is deprecated in v1.19+
Validate output:
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME                 STATUS    MESSAGE                         ERROR
etcd-1               Healthy   {"health":"true","reason":""}   
etcd-0               Healthy   {"health":"true","reason":""}   
scheduler            Healthy   ok                              
controller-manager   Healthy   ok                              
Cluster validation succeeded
Done, listing cluster services:
... skipping 5277 lines ...
I0908 10:25:13.809444  283137 prometheus.go:327] Exposing kube-apiserver metrics in the cluster
I0908 10:25:13.946374  283137 framework.go:275] Applying /home/prow/go/src/k8s.io/perf-tests/clusterloader2/pkg/prometheus/manifests/master-ip/master-endpoints.yaml
I0908 10:25:13.981536  283137 framework.go:275] Applying /home/prow/go/src/k8s.io/perf-tests/clusterloader2/pkg/prometheus/manifests/master-ip/master-service.yaml
I0908 10:25:14.016417  283137 framework.go:275] Applying /home/prow/go/src/k8s.io/perf-tests/clusterloader2/pkg/prometheus/manifests/master-ip/master-serviceMonitor.yaml
W0908 10:25:14.051931  283137 warnings.go:70] unknown field "spec.selector.k8s-app"
I0908 10:25:14.052105  283137 prometheus.go:406] Waiting for Prometheus stack to become healthy...
W0908 10:25:44.088515  283137 util.go:72] error while calling prometheus api: the server is currently unable to handle the request (get services http:prometheus-k8s:9090): no endpoints available for service "http:prometheus-k8s:9090" (ServiceUnavailable)
I0908 10:26:14.100454  283137 util.go:104] All 7 expected targets are ready
I0908 10:26:14.144550  283137 util.go:104] All 1 expected targets are ready
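The lines above show the Prometheus health wait failing once at 10:25:44 and passing by 10:26:14, i.e. a poll-until-healthy loop with a fixed interval. A generic sketch of that pattern (the function and its parameters are hypothetical, not clusterloader2's actual API):

```python
import time

def wait_until_healthy(check, timeout_s=300, interval_s=30):
    """Call `check()` every `interval_s` seconds until it returns True
    or `timeout_s` elapses. Returns True on success, False on timeout.
    `check` is any callable probing the service being waited on."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        if check():
            return True
        time.sleep(interval_s)
    return False
```

Transient failures like the ServiceUnavailable above are absorbed by the loop; only exhausting the timeout is fatal.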
... skipping 1731 lines ...
I0908 10:51:50.807771  283137 wait_for_pods.go:111] WaitForControlledPodsRunning: namespace(test-61p4t5-44), controlledBy(saturation-deployment-0): Pods: 2816 out of 3000 created, 2811 running (2811 updated), 5 pending scheduled, 0 not scheduled, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0908 10:51:51.263185  283137 wait_for_pods.go:111] WaitForControlledPodsRunning: namespace(test-61p4t5-48), controlledBy(saturation-deployment-0): Pods: 3000 out of 3000 created, 3000 running (3000 updated), 0 pending scheduled, 0 not scheduled, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0908 10:51:52.009382  283137 wait_for_pods.go:111] WaitForControlledPodsRunning: namespace(test-61p4t5-21), controlledBy(saturation-deployment-0): Pods: 3000 out of 3000 created, 3000 running (3000 updated), 0 pending scheduled, 0 not scheduled, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0908 10:51:53.348703  283137 wait_for_pods.go:111] WaitForControlledPodsRunning: namespace(test-61p4t5-33), controlledBy(saturation-deployment-0): Pods: 3000 out of 3000 created, 3000 running (3000 updated), 0 pending scheduled, 0 not scheduled, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0908 10:51:55.205838  283137 wait_for_pods.go:111] WaitForControlledPodsRunning: namespace(test-61p4t5-41), controlledBy(saturation-deployment-0): Pods: 3000 out of 3000 created, 3000 running (3000 updated), 0 pending scheduled, 0 not scheduled, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0908 10:51:55.867303  283137 wait_for_pods.go:111] WaitForControlledPodsRunning: namespace(test-61p4t5-44), controlledBy(saturation-deployment-0): Pods: 3000 out of 3000 created, 3000 running (3000 updated), 0 pending scheduled, 0 not scheduled, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
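Each status line above carries ten pod counters in a fixed textual format. A sketch of extracting them with a regex (the dictionary keys are mine, chosen to mirror the log's wording):

```python
import re

PODS_RE = re.compile(
    r'Pods: (\d+) out of (\d+) created, (\d+) running \((\d+) updated\), '
    r'(\d+) pending scheduled, (\d+) not scheduled, (\d+) inactive, '
    r'(\d+) terminating, (\d+) unknown, (\d+) runningButNotReady')

def parse_pods_line(line: str) -> dict:
    """Extract the pod counters from a WaitForControlledPodsRunning
    status line into a dict of ints."""
    m = PODS_RE.search(line)
    keys = ('created', 'expected', 'running', 'updated', 'pending_scheduled',
            'not_scheduled', 'inactive', 'terminating', 'unknown',
            'running_not_ready')
    return dict(zip(keys, map(int, m.groups())))
```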
I0908 10:51:55.867399  283137 wait_for_controlled_pods.go:320] WaitForControlledPodsRunning: running 50, deleted 0, timeout: 0, failed: 0
I0908 10:51:55.867412  283137 wait_for_controlled_pods.go:325] WaitForControlledPodsRunning: maxDuration=25m4.343893777s, operationTimeout=2h8m0s, ratio=0.20
I0908 10:51:55.867426  283137 wait_for_controlled_pods.go:339] WaitForControlledPodsRunning: 50/50 Deployments are running with all pods
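The `ratio` reported above is simply `maxDuration` divided by `operationTimeout`: the fraction of the allowed timeout consumed by the slowest controller. A quick check against the logged values:

```python
from datetime import timedelta

def wait_ratio(max_duration: timedelta, operation_timeout: timedelta) -> float:
    """Fraction of the operation timeout consumed by the slowest
    controller, as reported by WaitForControlledPodsRunning."""
    return max_duration.total_seconds() / operation_timeout.total_seconds()

# maxDuration=25m4.34s, operationTimeout=2h8m, from the log line above
ratio = wait_ratio(timedelta(minutes=25, seconds=4.34),
                   timedelta(hours=2, minutes=8))
```

This reproduces the logged `ratio=0.20` (25m4.3s out of 2h8m).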
I0908 10:51:55.867488  283137 simple_test_executor.go:171] Step "[step: 04] Waiting for saturation pods to be running" ended
I0908 10:51:55.867511  283137 simple_test_executor.go:149] Step "[step: 05] Collecting saturation pod measurements" started
I0908 10:51:55.867614  283137 scheduling_throughput.go:154] SchedulingThroughput: gathering data
I0908 10:51:55.867629  283137 pod_startup_latency.go:226] PodStartupLatency: labelSelector(group = saturation): gathering pod startup latency measurement...
... skipping 63768 lines ...
I0908 15:33:40.362119  283137 wait_for_pods.go:111] WaitForControlledPodsRunning: namespace(test-yner0z-50), controlledBy(latency-deployment-94): Pods: 1 out of 1 created, 1 running (1 updated), 0 pending scheduled, 0 not scheduled, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0908 15:33:40.562804  283137 wait_for_pods.go:111] WaitForControlledPodsRunning: namespace(test-yner0z-50), controlledBy(latency-deployment-95): Pods: 1 out of 1 created, 1 running (1 updated), 0 pending scheduled, 0 not scheduled, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0908 15:33:40.767546  283137 wait_for_pods.go:111] WaitForControlledPodsRunning: namespace(test-yner0z-50), controlledBy(latency-deployment-96): Pods: 1 out of 1 created, 1 running (1 updated), 0 pending scheduled, 0 not scheduled, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0908 15:33:40.997496  283137 wait_for_pods.go:111] WaitForControlledPodsRunning: namespace(test-yner0z-50), controlledBy(latency-deployment-97): Pods: 1 out of 1 created, 1 running (1 updated), 0 pending scheduled, 0 not scheduled, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0908 15:33:41.163351  283137 wait_for_pods.go:111] WaitForControlledPodsRunning: namespace(test-yner0z-50), controlledBy(latency-deployment-98): Pods: 1 out of 1 created, 1 running (1 updated), 0 pending scheduled, 0 not scheduled, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0908 15:33:41.363950  283137 wait_for_pods.go:111] WaitForControlledPodsRunning: namespace(test-yner0z-50), controlledBy(latency-deployment-99): Pods: 1 out of 1 created, 1 running (1 updated), 0 pending scheduled, 0 not scheduled, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0908 15:33:43.753661  283137 wait_for_controlled_pods.go:320] WaitForControlledPodsRunning: running 5000, deleted 0, timeout: 0, failed: 0
I0908 15:33:43.753716  283137 wait_for_controlled_pods.go:325] WaitForControlledPodsRunning: maxDuration=5.001901574s, operationTimeout=15m0s, ratio=0.01
I0908 15:33:43.753735  283137 wait_for_controlled_pods.go:339] WaitForControlledPodsRunning: 5000/5000 Deployments are running with all pods
I0908 15:33:43.753773  283137 simple_test_executor.go:171] Step "[step: 08] Waiting for latency pods to be running" ended
I0908 15:33:43.753795  283137 simple_test_executor.go:149] Step "[step: 09] Deleting latency pods" started
I0908 15:33:43.803801  283137 wait_for_pods.go:61] WaitForControlledPodsRunning: namespace(test-yner0z-1), controlledBy(latency-deployment-0): starting with timeout: 14m59.999991472s
I0908 15:33:44.003405  283137 wait_for_pods.go:61] WaitForControlledPodsRunning: namespace(test-yner0z-1), controlledBy(latency-deployment-1): starting with timeout: 14m59.999992658s
... skipping 4995 lines ...
I0908 15:50:26.307293  283137 wait_for_pods.go:61] WaitForControlledPodsRunning: namespace(test-yner0z-50), controlledBy(latency-deployment-97): starting with timeout: 14m59.999992222s
I0908 15:50:26.507672  283137 wait_for_pods.go:61] WaitForControlledPodsRunning: namespace(test-yner0z-50), controlledBy(latency-deployment-98): starting with timeout: 14m59.999990747s
I0908 15:50:26.724284  283137 wait_for_pods.go:61] WaitForControlledPodsRunning: namespace(test-yner0z-50), controlledBy(latency-deployment-99): starting with timeout: 14m59.99999095s
I0908 15:50:26.885091  283137 simple_test_executor.go:171] Step "[step: 09] Deleting latency pods" ended
I0908 15:50:26.885137  283137 simple_test_executor.go:149] Step "[step: 10] Waiting for latency pods to be deleted" started
I0908 15:50:26.885165  283137 wait_for_controlled_pods.go:247] WaitForControlledPodsRunning: waiting for controlled pods measurement...
I0908 15:50:31.923791  283137 wait_for_controlled_pods.go:320] WaitForControlledPodsRunning: running 0, deleted 5000, timeout: 0, failed: 0
I0908 15:50:31.923839  283137 wait_for_controlled_pods.go:325] WaitForControlledPodsRunning: maxDuration=5.003632903s, operationTimeout=15m0s, ratio=0.01
I0908 15:50:31.923854  283137 wait_for_controlled_pods.go:339] WaitForControlledPodsRunning: 0/0 Deployments are running with all pods
I0908 15:50:31.923884  283137 simple_test_executor.go:171] Step "[step: 10] Waiting for latency pods to be deleted" ended
I0908 15:50:31.923903  283137 simple_test_executor.go:149] Step "[step: 11] Collecting pod startup latency" started
I0908 15:50:31.923981  283137 pod_startup_latency.go:226] PodStartupLatency: labelSelector(group = latency): gathering pod startup latency measurement...
I0908 15:50:37.156892  283137 phase_latency.go:141] PodStartupLatency: 100 worst pod_startup latencies: [{test-yner0z-24/latency-deployment-55-9cc89d97d-9jtxl 2.169212398s} {test-yner0z-41/latency-deployment-95-5df95cd7f8-h794l 2.169395962s} {test-yner0z-17/latency-deployment-22-56768586b-48fs2 2.170060878s} {test-yner0z-47/latency-deployment-3-f455d494d-v5q9w 2.170478824s} {test-yner0z-45/latency-deployment-4-5448795b78-cjf7l 2.171019814s} {test-yner0z-43/latency-deployment-59-7dbc7dfdb-p4rpn 2.17106694s} {test-yner0z-10/latency-deployment-50-98c98c9b6-bdtz6 2.17302843s} {test-yner0z-2/latency-deployment-7-854597b74d-k2v5t 2.174997656s} {test-yner0z-43/latency-deployment-69-857847754-kg5fh 2.175120833s} {test-yner0z-31/latency-deployment-93-5545b78fcd-x9m7s 2.175275438s} {test-yner0z-28/latency-deployment-9-75d56b8f7b-tt8fb 2.175310941s} {test-yner0z-9/latency-deployment-80-7c8446fb46-q26jk 2.176817361s} {test-yner0z-30/latency-deployment-78-64d498b88b-lmstl 2.177376699s} {test-yner0z-31/latency-deployment-63-57bf85cbbb-9r4bf 2.177779129s} {test-yner0z-41/latency-deployment-60-d487c8cfd-f8tvd 2.178198813s} {test-yner0z-26/latency-deployment-99-7b99559cdb-q9vwk 2.178795235s} {test-yner0z-2/latency-deployment-32-df6c6db6-xt2zx 2.17975179s} {test-yner0z-7/latency-deployment-41-bcd5499fd-r5wrk 2.180252892s} {test-yner0z-38/latency-deployment-6-66fdcf4bf4-dsrsg 2.180932336s} {test-yner0z-22/latency-deployment-6-66fdcf4bf4-fqcv9 2.182003397s} {test-yner0z-30/latency-deployment-83-55767bb4f4-nbbqf 2.18339954s} {test-yner0z-47/latency-deployment-93-5545b78fcd-j9llm 2.183907994s} {test-yner0z-18/latency-deployment-17-5d545c949d-2f6n5 2.184316305s} {test-yner0z-3/latency-deployment-82-7b4485f95d-4qv6t 2.187724882s} {test-yner0z-9/latency-deployment-70-9d8d9dc5d-ddfkc 2.1884617s} {test-yner0z-34/latency-deployment-57-67f98bb958-nvmsz 2.18933605s} {test-yner0z-10/latency-deployment-5-6f85ddbf7b-nkv5g 2.189903409s} {test-yner0z-24/latency-deployment-30-84c754fd58-4r5k4 
2.191134649s} {test-yner0z-30/latency-deployment-53-554cc99fc4-6pz2s 2.193278066s} {test-yner0z-24/latency-deployment-45-767cf58fd6-bcn8j 2.194520655s} {test-yner0z-34/latency-deployment-62-5d599684c8-8lndv 2.194919512s} {test-yner0z-23/latency-deployment-35-6c76f759db-cs6m2 2.195325476s} {test-yner0z-30/latency-deployment-63-57bf85cbbb-czds4 2.198539357s} {test-yner0z-47/latency-deployment-32-df6c6db6-l9tvf 2.199881194s} {test-yner0z-16/latency-deployment-78-64d498b88b-hjkgf 2.200428608s} {test-yner0z-5/latency-deployment-81-686d774d98-sgz5m 2.20043143s} {test-yner0z-45/latency-deployment-14-c467b74bb-4rg8z 2.200650675s} {test-yner0z-18/latency-deployment-22-56768586b-l57pv 2.202299549s} {test-yner0z-22/latency-deployment-91-8848bbb-nsq94 2.204301582s} {test-yner0z-28/latency-deployment-99-7b99559cdb-gmrg9 2.205074662s} {test-yner0z-36/latency-deployment-76-58594c695d-96gx2 2.206497891s} {test-yner0z-31/latency-deployment-48-5669fb4654-7tpt6 2.208815161s} {test-yner0z-19/latency-deployment-7-854597b74d-89jth 2.209005107s} {test-yner0z-28/latency-deployment-84-5fbc548b9d-2l9fw 2.209910495s} {test-yner0z-27/latency-deployment-39-7497d5b6-wqvcx 2.210111443s} {test-yner0z-25/latency-deployment-75-7cfbcc5bd6-mbqq6 2.210803839s} {test-yner0z-19/latency-deployment-2-54c4fffc66-zg8t5 2.210897793s} {test-yner0z-41/latency-deployment-25-68ddb4c8f8-gdqbw 2.213082302s} {test-yner0z-37/latency-deployment-36-559865f56b-qb2s6 2.215911576s} {test-yner0z-21/latency-deployment-66-54fc459f98-5v2wk 2.216463103s} {test-yner0z-13/latency-deployment-74-8496c5d658-bkbjb 2.226807483s} {test-yner0z-21/latency-deployment-76-58594c695d-b2xv5 2.227711289s} {test-yner0z-21/latency-deployment-91-8848bbb-k67lm 2.228328915s} {test-yner0z-40/latency-deployment-45-767cf58fd6-thtgc 2.229743246s} {test-yner0z-10/latency-deployment-60-d487c8cfd-xxtxf 2.231480293s} {test-yner0z-41/latency-deployment-65-5d597c5f78-kg54s 2.231768605s} {test-yner0z-50/latency-deployment-82-7b4485f95d-ssjd8 2.233666508s} 
{test-yner0z-41/latency-deployment-55-9cc89d97d-7gp5s 2.235250145s} {test-yner0z-19/latency-deployment-67-6dc5d95cc4-wm4kx 2.238882888s} {test-yner0z-34/latency-deployment-47-6c894fbf7d-stgjp 2.241748138s} {test-yner0z-2/latency-deployment-92-6dfbc65cc4-nnn2p 2.24216037s} {test-yner0z-34/latency-deployment-77-68db9cd49d-v97gr 2.245574153s} {test-yner0z-32/latency-deployment-43-5f6bfc6678-rxgfr 2.249932557s} {test-yner0z-31/latency-deployment-23-f99dcd8bb-8kz8k 2.251523522s} {test-yner0z-35/latency-deployment-32-df6c6db6-nk8df 2.257106783s} {test-yner0z-3/latency-deployment-57-67f98bb958-5qbsb 2.259825323s} {test-yner0z-27/latency-deployment-99-7b99559cdb-vzjk7 2.259951213s} {test-yner0z-47/latency-deployment-78-64d498b88b-bh7gh 2.261982576s} {test-yner0z-25/latency-deployment-95-5df95cd7f8-s4lh8 2.265138267s} {test-yner0z-22/latency-deployment-66-54fc459f98-c6g86 2.267628679s} {test-yner0z-20/latency-deployment-17-5d545c949d-q59q5 2.269350476s} {test-yner0z-34/latency-deployment-37-79f74676f4-dlchk 2.2727669s} {test-yner0z-38/latency-deployment-66-54fc459f98-p4qqq 2.273124117s} {test-yner0z-14/latency-deployment-67-6dc5d95cc4-98mtc 2.274121744s} {test-yner0z-28/latency-deployment-89-65cdbbb946-vwppr 2.279609854s} {test-yner0z-44/latency-deployment-69-857847754-c588v 2.280867781s} {test-yner0z-13/latency-deployment-79-6d958d68d6-fxqj4 2.285574549s} {test-yner0z-7/latency-deployment-36-559865f56b-wgp4c 2.287784399s} {test-yner0z-28/latency-deployment-94-5bcfd47c98-r496z 2.288451844s} {test-yner0z-36/latency-deployment-15-575d7b8566-czgf4 2.291114734s} {test-yner0z-39/latency-deployment-85-5fb5949dc4-b8rhg 2.293069403s} {test-yner0z-28/latency-deployment-74-8496c5d658-cslg4 2.294031206s} {test-yner0z-13/latency-deployment-24-786664884b-4hzvg 2.296408343s} {test-yner0z-3/latency-deployment-72-655775c5d6-tp2j5 2.304692597s} {test-yner0z-35/latency-deployment-27-66dcbb49c4-6wbbk 2.311138059s} {test-yner0z-38/latency-deployment-11-5db58f7798-g27j5 2.314608978s} 
{test-yner0z-18/latency-deployment-81-686d774d98-tptkz 2.315353799s} {test-yner0z-44/latency-deployment-99-7b99559cdb-m2g87 2.319503271s} {test-yner0z-32/latency-deployment-53-554cc99fc4-pvxq4 2.325359364s} {test-yner0z-35/latency-deployment-52-6cc844766-w4st8 2.327431614s} {test-yner0z-38/latency-deployment-71-59565f786b-9tl5k 2.329190492s} {test-yner0z-20/latency-deployment-2-54c4fffc66-96xbx 2.332590148s} {test-yner0z-33/latency-deployment-86-6d459479f4-b9qqr 2.338448893s} {test-yner0z-28/latency-deployment-68-986df55bb-2wwzg 2.36111754s} {test-yner0z-36/latency-deployment-56-68996d4c7d-f959l 2.37798054s} {test-yner0z-47/latency-deployment-72-655775c5d6-j7rm4 2.381748298s} {test-yner0z-41/latency-deployment-94-5bcfd47c98-6c2vd 2.393889711s} {test-yner0z-34/latency-deployment-17-5d545c949d-drqqd 2.423109701s} {test-yner0z-4/latency-deployment-22-56768586b-59jlv 2.466425752s} {test-yner0z-3/latency-deployment-47-6c894fbf7d-5nqbf 2.480033513s}]
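The worst-latency list above is a sequence of `{namespace/pod <duration>}` entries. A sketch of pulling out the slowest pod (it assumes durations are printed as plain seconds, as in this list; durations like `2m3s` would need extra parsing):

```python
import re

def worst_latency(entries: str) -> tuple[str, float]:
    """Parse '{namespace/pod <seconds>s}' entries and return the pod
    with the highest startup latency together with its latency."""
    pairs = re.findall(r'\{(\S+) ([\d.]+)s\}', entries)
    pod, secs = max(pairs, key=lambda p: float(p[1]))
    return pod, float(secs)
```

Applied to the list above, it would pick out `test-yner0z-3/latency-deployment-47-6c894fbf7d-5nqbf` at roughly 2.48s.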
... skipping 487 lines ...
I0908 16:15:19.152668  283137 wait_for_pods.go:111] WaitForControlledPodsRunning: namespace(test-yner0z-50), controlledBy(saturation-deployment-0): Pods: 1924 out of 0 created, 1924 running (1924 updated), 0 pending scheduled, 0 not scheduled, 0 inactive, 52 terminating, 0 unknown, 0 runningButNotReady 
I0908 16:15:24.185434  283137 wait_for_pods.go:111] WaitForControlledPodsRunning: namespace(test-yner0z-50), controlledBy(saturation-deployment-0): Pods: 1420 out of 0 created, 1420 running (1420 updated), 0 pending scheduled, 0 not scheduled, 0 inactive, 57 terminating, 0 unknown, 0 runningButNotReady 
I0908 16:15:29.206456  283137 wait_for_pods.go:111] WaitForControlledPodsRunning: namespace(test-yner0z-50), controlledBy(saturation-deployment-0): Pods: 914 out of 0 created, 914 running (914 updated), 0 pending scheduled, 0 not scheduled, 0 inactive, 61 terminating, 0 unknown, 0 runningButNotReady 
I0908 16:15:34.218354  283137 wait_for_pods.go:111] WaitForControlledPodsRunning: namespace(test-yner0z-50), controlledBy(saturation-deployment-0): Pods: 412 out of 0 created, 412 running (412 updated), 0 pending scheduled, 0 not scheduled, 0 inactive, 53 terminating, 0 unknown, 0 runningButNotReady 
I0908 16:15:39.219395  283137 wait_for_pods.go:111] WaitForControlledPodsRunning: namespace(test-yner0z-50), controlledBy(saturation-deployment-0): Pods: 0 out of 0 created, 0 running (0 updated), 0 pending scheduled, 0 not scheduled, 0 inactive, 3 terminating, 0 unknown, 0 runningButNotReady 
I0908 16:15:44.220528  283137 wait_for_pods.go:111] WaitForControlledPodsRunning: namespace(test-yner0z-50), controlledBy(saturation-deployment-0): Pods: 0 out of 0 created, 0 running (0 updated), 0 pending scheduled, 0 not scheduled, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0908 16:15:44.220717  283137 wait_for_controlled_pods.go:320] WaitForControlledPodsRunning: running 0, deleted 50, timeout: 0, failed: 0
I0908 16:15:44.220739  283137 wait_for_controlled_pods.go:325] WaitForControlledPodsRunning: maxDuration=24m57.055509597s, operationTimeout=2h8m0s, ratio=0.19
I0908 16:15:44.220757  283137 wait_for_controlled_pods.go:339] WaitForControlledPodsRunning: 0/0 Deployments are running with all pods
I0908 16:15:44.220895  283137 simple_test_executor.go:171] Step "[step: 13] Waiting for saturation pods to be deleted" ended
I0908 16:15:44.220981  283137 simple_test_executor.go:149] Step "[step: 14] Collecting measurements" started
I0908 16:15:44.221107  283137 probes.go:115] InClusterNetworkLatency: Probes cannot work in kubemark, skipping the measurement!
I0908 16:15:44.221196  283137 prometheus_measurement.go:91] APIResponsivenessPrometheus gathering results
... skipping 91 lines ...
Specify --start=118393 in the next get-serial-port-output invocation to get only the new output starting from here.
scp: /var/log/cluster-autoscaler.log*: No such file or directory
scp: /var/log/konnectivity-server.log*: No such file or directory
scp: /var/log/fluentd.log*: No such file or directory
scp: /var/log/kubelet.cov*: No such file or directory
scp: /var/log/startupscript.log*: No such file or directory
ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
Dumping logs from nodes to GCS directly at 'gs://k8s-infra-scalability-tests-logs/ci-kubernetes-kubemark-gce-scale-scheduler/1567815505006301184' using logexporter
namespace/logexporter created
secret/google-service-account created
daemonset.apps/logexporter created
Listing marker files (gs://k8s-infra-scalability-tests-logs/ci-kubernetes-kubemark-gce-scale-scheduler/1567815505006301184/logexported-nodes-registry) for successful nodes...
CommandException: One or more URLs matched no objects.
... skipping 195 lines ...
scp: /var/log/node-problem-detector.log*: No such file or directory
scp: /var/log/kubelet.cov*: No such file or directory
scp: /var/log/kubelet-hollow-node-*.log*: No such file or directory
scp: /var/log/kubeproxy-hollow-node-*.log*: No such file or directory
scp: /var/log/npd-hollow-node-*.log*: No such file or directory
scp: /var/log/startupscript.log*: No such file or directory
ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
Detecting nodes in the cluster
INSTANCE_GROUPS=kubemark-5000-minion-group
NODE_NAMES=kubemark-5000-minion-group-0xb0 kubemark-5000-minion-group-1jmz kubemark-5000-minion-group-25r3 kubemark-5000-minion-group-2l9h kubemark-5000-minion-group-2nvk kubemark-5000-minion-group-2nz9 kubemark-5000-minion-group-2rlx kubemark-5000-minion-group-30ph kubemark-5000-minion-group-38j3 kubemark-5000-minion-group-52f7 kubemark-5000-minion-group-5450 kubemark-5000-minion-group-5h9b kubemark-5000-minion-group-63w4 kubemark-5000-minion-group-684b kubemark-5000-minion-group-6crt kubemark-5000-minion-group-7564 kubemark-5000-minion-group-7dgw kubemark-5000-minion-group-7sfz kubemark-5000-minion-group-7tgs kubemark-5000-minion-group-8ddf kubemark-5000-minion-group-8hjn kubemark-5000-minion-group-8xqq kubemark-5000-minion-group-91hk kubemark-5000-minion-group-bg4w kubemark-5000-minion-group-bj8m kubemark-5000-minion-group-btth kubemark-5000-minion-group-bx4x kubemark-5000-minion-group-dfs0 kubemark-5000-minion-group-dsgq kubemark-5000-minion-group-f13d kubemark-5000-minion-group-f2p3 kubemark-5000-minion-group-fdwz kubemark-5000-minion-group-fj8h kubemark-5000-minion-group-g3hx kubemark-5000-minion-group-grqx kubemark-5000-minion-group-h360 kubemark-5000-minion-group-hjcd kubemark-5000-minion-group-hsr9 kubemark-5000-minion-group-k01n kubemark-5000-minion-group-k5k4 kubemark-5000-minion-group-k7gg kubemark-5000-minion-group-kbvx kubemark-5000-minion-group-kdmf kubemark-5000-minion-group-kjx5 kubemark-5000-minion-group-knkn kubemark-5000-minion-group-ljn4 kubemark-5000-minion-group-lt02 kubemark-5000-minion-group-m0b9 kubemark-5000-minion-group-m0bc kubemark-5000-minion-group-m7s8 kubemark-5000-minion-group-mg0g kubemark-5000-minion-group-nf9w kubemark-5000-minion-group-nk6k kubemark-5000-minion-group-npvv kubemark-5000-minion-group-nsb5 kubemark-5000-minion-group-p1j6 kubemark-5000-minion-group-p9dn kubemark-5000-minion-group-pb55 kubemark-5000-minion-group-ptcv kubemark-5000-minion-group-qkhx kubemark-5000-minion-group-r9cl kubemark-5000-minion-group-rw9f 
kubemark-5000-minion-group-s21d kubemark-5000-minion-group-sg72 kubemark-5000-minion-group-sr5z kubemark-5000-minion-group-swhh kubemark-5000-minion-group-t5c2 kubemark-5000-minion-group-tcp0 kubemark-5000-minion-group-tf3m kubemark-5000-minion-group-tmlc kubemark-5000-minion-group-tp49 kubemark-5000-minion-group-tq2x kubemark-5000-minion-group-v6rj kubemark-5000-minion-group-vpwt kubemark-5000-minion-group-vvk2 kubemark-5000-minion-group-w9k4 kubemark-5000-minion-group-wbg0 kubemark-5000-minion-group-wl37 kubemark-5000-minion-group-wsl5 kubemark-5000-minion-group-x0mh kubemark-5000-minion-group-xxrp kubemark-5000-minion-group-z4g1 kubemark-5000-minion-group-zthx kubemark-5000-minion-heapster
WINDOWS_INSTANCE_GROUPS=
WINDOWS_NODE_NAMES=
Uploading '/tmp/tmp.GURZP1QuO1/logs' to 'gs://k8s-infra-scalability-tests-logs/ci-kubernetes-kubemark-gce-scale-scheduler/1567815505006301184'
... skipping 179 lines ...
kubemark-5000-kubemark-master-etcd
kubemark-5000-kubemark-master-https
kubemark-5000-kubemark-minion-all
kubemark-5000-kubemark-minion-http-alt
kubemark-5000-kubemark-minion-nodeports
Deleting custom subnet...
ERROR: (gcloud.compute.networks.subnets.delete) Could not fetch resource:
 - The subnetwork resource 'projects/k8s-infra-e2e-boskos-scale-23/regions/us-east1/subnetworks/kubemark-5000-custom-subnet' is already being used by 'projects/k8s-infra-e2e-boskos-scale-23/zones/us-east1-b/instances/kubemark-5000-kubemark-master'

ERROR: (gcloud.compute.networks.delete) Could not fetch resource:
 - The network resource 'projects/k8s-infra-e2e-boskos-scale-23/global/networks/kubemark-5000' is already being used by 'projects/k8s-infra-e2e-boskos-scale-23/global/firewalls/kubemark-5000-kubemark-default-internal-master'

Failed to delete network 'kubemark-5000'. Listing firewall-rules:
NAME                                            NETWORK        DIRECTION  PRIORITY  ALLOW                                       DENY  DISABLED
kubemark-5000-kubemark-default-internal-master  kubemark-5000  INGRESS    1000      tcp:1-2379,tcp:2382-65535,udp:1-65535,icmp        False
kubemark-5000-kubemark-default-internal-node    kubemark-5000  INGRESS    1000      tcp:1-65535,udp:1-65535,icmp                      False
kubemark-5000-kubemark-master-etcd              kubemark-5000  INGRESS    1000      tcp:2380,tcp:2381                                 False
kubemark-5000-kubemark-master-https             kubemark-5000  INGRESS    1000      tcp:443                                           False
kubemark-5000-kubemark-minion-all               kubemark-5000  INGRESS    1000      tcp,udp,icmp,esp,ah,sctp                          False
... skipping 13 lines ...
scp: /var/log/glbc.log*: No such file or directory
scp: /var/log/cluster-autoscaler.log*: No such file or directory
scp: /var/log/konnectivity-server.log*: No such file or directory
scp: /var/log/fluentd.log*: No such file or directory
scp: /var/log/kubelet.cov*: No such file or directory
scp: /var/log/startupscript.log*: No such file or directory
ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
Skipping dumping of node logs
Detecting nodes in the cluster
WARNING: The following filter keys were not present in any resource : name, zone
WARNING: The following filter keys were not present in any resource : name, zone
INSTANCE_GROUPS=
NODE_NAMES=kubemark-5000-minion-heapster
... skipping 65 lines ...
WARNING: The following filter keys were not present in any resource : name, zone
INSTANCE_GROUPS=
NODE_NAMES=kubemark-5000-kubemark-minion-heapster
Bringing down cluster
Deleted [https://www.googleapis.com/compute/v1/projects/k8s-infra-e2e-boskos-scale-23/global/instanceTemplates/kubemark-5000-kubemark-minion-template].
Deleted [https://www.googleapis.com/compute/v1/projects/k8s-infra-e2e-boskos-scale-23/global/instanceTemplates/kubemark-5000-kubemark-windows-node-template].
Failed to execute 'curl -s --cacert /etc/srv/kubernetes/pki/etcd-apiserver-ca.crt --cert /etc/srv/kubernetes/pki/etcd-apiserver-client.crt --key /etc/srv/kubernetes/pki/etcd-apiserver-client.key https://127.0.0.1:2379/v2/members/$(curl -s --cacert /etc/srv/kubernetes/pki/etcd-apiserver-ca.crt --cert /etc/srv/kubernetes/pki/etcd-apiserver-client.crt --key /etc/srv/kubernetes/pki/etcd-apiserver-client.key https://127.0.0.1:2379/v2/members -XGET | sed 's/{\"id/\n/g' | grep kubemark-5000-kubemark-master\" | cut -f 3 -d \") -XDELETE -L 2>/dev/null' on kubemark-5000-kubemark-master despite 5 attempts
Last attempt failed with: ssh: connect to host 34.139.38.14 port 22: Connection timed out


Recommendation: To check for possible causes of SSH connectivity issues and get
recommendations, rerun the ssh command with the --troubleshoot option.

gcloud compute ssh kubemark-5000-kubemark-master --project=k8s-infra-e2e-boskos-scale-23 --zone=us-east1-b --troubleshoot

Or, to investigate an IAP tunneling issue:

gcloud compute ssh kubemark-5000-kubemark-master --project=k8s-infra-e2e-boskos-scale-23 --zone=us-east1-b --troubleshoot --tunnel-through-iap

ERROR: (gcloud.compute.ssh) [/usr/bin/ssh] exited with return code [255].
Removing etcd replica, name: kubemark-5000-kubemark-master, port: 2379, result: 1
Failed to execute 'curl -s  http://127.0.0.1:4002/v2/members/$(curl -s  http://127.0.0.1:4002/v2/members -XGET | sed 's/{\"id/\n/g' | grep kubemark-5000-kubemark-master\" | cut -f 3 -d \") -XDELETE -L 2>/dev/null' on kubemark-5000-kubemark-master despite 5 attempts
Last attempt failed with: ssh: connect to host 34.139.38.14 port 22: Connection timed out


Recommendation: To check for possible causes of SSH connectivity issues and get
recommendations, rerun the ssh command with the --troubleshoot option.

gcloud compute ssh kubemark-5000-kubemark-master --project=k8s-infra-e2e-boskos-scale-23 --zone=us-east1-b --troubleshoot

Or, to investigate an IAP tunneling issue:

gcloud compute ssh kubemark-5000-kubemark-master --project=k8s-infra-e2e-boskos-scale-23 --zone=us-east1-b --troubleshoot --tunnel-through-iap

ERROR: (gcloud.compute.ssh) [/usr/bin/ssh] exited with return code [255].
Removing etcd replica, name: kubemark-5000-kubemark-master, port: 4002, result: 1
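The failing curl pipelines above fetch the etcd v2 member list, grep out the id of the member named after the master, and issue a DELETE for it. The extraction step can be sketched more robustly against the JSON the v2 members endpoint returns (the response shape here is an assumption based on the etcd v2 API, not taken from this log):

```python
import json

def member_id_to_remove(members_json: str, node_name: str):
    """Given an etcd v2 /v2/members response body, return the id of the
    member whose name matches `node_name` (what the sed/grep/cut
    pipeline in the log extracts), or None if no such member exists."""
    for member in json.loads(members_json).get("members", []):
        if member.get("name") == node_name:
            return member["id"]
    return None
```

In this run the extraction never got that far: both attempts (ports 2379 and 4002) failed earlier, at the SSH connection to the master.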
Updated [https://www.googleapis.com/compute/v1/projects/k8s-infra-e2e-boskos-scale-23/zones/us-east1-b/instances/kubemark-5000-kubemark-master].
Deleted [https://www.googleapis.com/compute/v1/projects/k8s-infra-e2e-boskos-scale-23/zones/us-east1-b/instances/kubemark-5000-kubemark-master].
WARNING: The following filter keys were not present in any resource : name
Deleted [https://www.googleapis.com/compute/v1/projects/k8s-infra-e2e-boskos-scale-23/global/firewalls/kubemark-5000-kubemark-master-https].
Deleted [https://www.googleapis.com/compute/v1/projects/k8s-infra-e2e-boskos-scale-23/global/firewalls/kubemark-5000-kubemark-master-etcd].
... skipping 32 lines ...