Result: success
Tests: 0 failed / 128 succeeded
Started: 2022-09-06 10:01
Elapsed: 7h35m
Revision
Uploader: crier

No Test Failures!



Error lines from build-log.txt

... skipping 359 lines ...
NAME                  ZONE        MACHINE_TYPE   PREEMPTIBLE  INTERNAL_IP  EXTERNAL_IP     STATUS
kubemark-5000-master  us-east1-b  n1-standard-8               10.40.0.3    34.139.139.178  RUNNING
Setting kubemark-5000-master's aliases to 'pods-default:10.64.0.0/24;10.40.0.2/32' (added 10.40.0.2)
Updating network interface [nic0] of instance [kubemark-5000-master]...
..........done.
Updated [https://www.googleapis.com/compute/v1/projects/k8s-infra-e2e-boskos-scale-14/zones/us-east1-b/instances/kubemark-5000-master].
Failed to execute 'sudo /bin/bash /home/kubernetes/bin/kube-master-internal-route.sh' on kubemark-5000-master despite 5 attempts
Last attempt failed with: /bin/bash: /home/kubernetes/bin/kube-master-internal-route.sh: No such file or directory
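The missing route script is reported here but setup continues past it; if it ever needs confirming by hand, a rough check over SSH could look like the following (project and zone come from the gcloud URLs above, everything else is an assumption):

  gcloud compute ssh kubemark-5000-master \
    --project=k8s-infra-e2e-boskos-scale-14 --zone=us-east1-b \
    --command='ls -l /home/kubernetes/bin/'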
Creating firewall...
..Created [https://www.googleapis.com/compute/v1/projects/k8s-infra-e2e-boskos-scale-14/global/firewalls/kubemark-5000-minion-all].
NAME                      NETWORK        DIRECTION  PRIORITY  ALLOW                     DENY  DISABLED
kubemark-5000-minion-all  kubemark-5000  INGRESS    1000      tcp,udp,icmp,esp,ah,sctp        False
done.
Creating nodes.
... skipping 33 lines ...
Looking for address 'kubemark-5000-master-ip'
Looking for address 'kubemark-5000-master-internal-ip'
Using master: kubemark-5000-master (external IP: 34.139.139.178; internal IP: 10.40.0.2)
Waiting up to 300 seconds for cluster initialization.

  This will continually check to see if the API for kubernetes is reachable.
  This may time out if there was some uncaught error during start up.

Kubernetes cluster created.
Cluster "k8s-infra-e2e-boskos-scale-14_kubemark-5000" set.
User "k8s-infra-e2e-boskos-scale-14_kubemark-5000" set.
Context "k8s-infra-e2e-boskos-scale-14_kubemark-5000" created.
Switched to context "k8s-infra-e2e-boskos-scale-14_kubemark-5000".
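The four messages above are the standard output of kubectl's kubeconfig subcommands; a minimal sketch of the equivalent manual sequence (the server address is taken from the master IP above, and the credential flags are omitted and would differ from what kube-up.sh actually sets):

  CTX=k8s-infra-e2e-boskos-scale-14_kubemark-5000
  kubectl config set-cluster "$CTX" --server=https://34.139.139.178   # -> Cluster "..." set.
  kubectl config set-credentials "$CTX"                               # -> User "..." set.
  kubectl config set-context "$CTX" --cluster="$CTX" --user="$CTX"    # -> Context "..." created.
  kubectl config use-context "$CTX"                                   # -> Switched to context "...".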
... skipping 195 lines ...
kubemark-5000-minion-group-xjcl   Ready                         <none>   59s    v1.26.0-alpha.0.380+67bde9a1023d18
kubemark-5000-minion-group-z6xs   Ready                         <none>   59s    v1.26.0-alpha.0.380+67bde9a1023d18
kubemark-5000-minion-heapster     Ready                         <none>   73s    v1.26.0-alpha.0.380+67bde9a1023d18
Warning: v1 ComponentStatus is deprecated in v1.19+
Validate output:
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME                 STATUS    MESSAGE                         ERROR
etcd-1               Healthy   {"health":"true","reason":""}   
etcd-0               Healthy   {"health":"true","reason":""}   
controller-manager   Healthy   ok                              
scheduler            Healthy   ok                              
Cluster validation encountered some problems, but cluster should be in working order
...ignoring non-fatal errors in validate-cluster
Done, listing cluster services:

Kubernetes control plane is running at https://34.139.139.178
GLBCDefaultBackend is running at https://34.139.139.178/api/v1/namespaces/kube-system/services/default-http-backend:http/proxy
CoreDNS is running at https://34.139.139.178/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
Metrics-server is running at https://34.139.139.178/api/v1/namespaces/kube-system/services/https:metrics-server:/proxy
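The endpoint listing above is the output format of kubectl cluster-info; assuming the context created earlier, the same listing can be pulled again with:

  kubectl cluster-info --context=k8s-infra-e2e-boskos-scale-14_kubemark-5000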
... skipping 228 lines ...
Looking for address 'kubemark-5000-kubemark-master-ip'
Looking for address 'kubemark-5000-kubemark-master-internal-ip'
Using master: kubemark-5000-kubemark-master (external IP: 34.148.195.21; internal IP: 10.40.3.216)
Waiting up to 300 seconds for cluster initialization.

  This will continually check to see if the API for kubernetes is reachable.
  This may time out if there was some uncaught error during start up.

Kubernetes cluster created.
Cluster "k8s-infra-e2e-boskos-scale-14_kubemark-5000-kubemark" set.
User "k8s-infra-e2e-boskos-scale-14_kubemark-5000-kubemark" set.
Context "k8s-infra-e2e-boskos-scale-14_kubemark-5000-kubemark" created.
Switched to context "k8s-infra-e2e-boskos-scale-14_kubemark-5000-kubemark".
... skipping 20 lines ...
Found 1 node(s).
NAME                            STATUS                     ROLES    AGE   VERSION
kubemark-5000-kubemark-master   Ready,SchedulingDisabled   <none>   23s   v1.26.0-alpha.0.380+67bde9a1023d18
Warning: v1 ComponentStatus is deprecated in v1.19+
Validate output:
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME                 STATUS    MESSAGE                         ERROR
etcd-1               Healthy   {"health":"true","reason":""}   
etcd-0               Healthy   {"health":"true","reason":""}   
controller-manager   Healthy   ok                              
scheduler            Healthy   ok                              
Cluster validation succeeded
Done, listing cluster services:
... skipping 6968 lines ...
I0906 10:51:09.341655  283114 wait_for_pods.go:111] WaitForControlledPodsRunning: namespace(test-04wb2e-16), controlledBy(saturation-deployment-0): Pods: 3000 out of 3000 created, 3000 running (3000 updated), 0 pending scheduled, 0 not scheduled, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0906 10:51:09.683582  283114 wait_for_pods.go:111] WaitForControlledPodsRunning: namespace(test-04wb2e-48), controlledBy(saturation-deployment-0): Pods: 3000 out of 3000 created, 2999 running (2999 updated), 1 pending scheduled, 0 not scheduled, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0906 10:51:10.218212  283114 wait_for_pods.go:111] WaitForControlledPodsRunning: namespace(test-04wb2e-26), controlledBy(saturation-deployment-0): Pods: 3000 out of 3000 created, 2995 running (2995 updated), 5 pending scheduled, 0 not scheduled, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0906 10:51:13.156194  283114 wait_for_pods.go:111] WaitForControlledPodsRunning: namespace(test-04wb2e-40), controlledBy(saturation-deployment-0): Pods: 3000 out of 3000 created, 3000 running (3000 updated), 0 pending scheduled, 0 not scheduled, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0906 10:51:14.734462  283114 wait_for_pods.go:111] WaitForControlledPodsRunning: namespace(test-04wb2e-48), controlledBy(saturation-deployment-0): Pods: 3000 out of 3000 created, 3000 running (3000 updated), 0 pending scheduled, 0 not scheduled, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0906 10:51:15.269459  283114 wait_for_pods.go:111] WaitForControlledPodsRunning: namespace(test-04wb2e-26), controlledBy(saturation-deployment-0): Pods: 3000 out of 3000 created, 3000 running (3000 updated), 0 pending scheduled, 0 not scheduled, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0906 10:51:15.269540  283114 wait_for_controlled_pods.go:320] WaitForControlledPodsRunning: running 50, deleted 0, timeout: 0, failed: 0
I0906 10:51:15.269597  283114 wait_for_controlled_pods.go:325] WaitForControlledPodsRunning: maxDuration=25m7.585392906s, operationTimeout=2h8m0s, ratio=0.20
I0906 10:51:15.269640  283114 wait_for_controlled_pods.go:339] WaitForControlledPodsRunning: 50/50 Deployments are running with all pods
I0906 10:51:15.269678  283114 simple_test_executor.go:171] Step "[step: 04] Waiting for saturation pods to be running" ended
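The ratio logged by WaitForControlledPodsRunning appears to be maxDuration divided by operationTimeout, rounded to two decimals (an inference from the numbers here and in later steps, not something this log states); for step 04:

  awk 'BEGIN { printf "%.2f\n", (25*60 + 7.585) / (2*3600 + 8*60) }'   # 1507.6s / 7680s -> 0.20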
I0906 10:51:15.269742  283114 simple_test_executor.go:149] Step "[step: 05] Collecting saturation pod measurements" started
I0906 10:51:15.269775  283114 scheduling_throughput.go:154] SchedulingThroughput: gathering data
I0906 10:51:15.269924  283114 pod_startup_latency.go:226] PodStartupLatency: labelSelector(group = saturation): gathering pod startup latency measurement...
... skipping 63825 lines ...
I0906 15:32:16.424681  283114 wait_for_pods.go:111] WaitForControlledPodsRunning: namespace(test-fik7qv-50), controlledBy(latency-deployment-94): Pods: 1 out of 1 created, 1 running (1 updated), 0 pending scheduled, 0 not scheduled, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0906 15:32:16.627475  283114 wait_for_pods.go:111] WaitForControlledPodsRunning: namespace(test-fik7qv-50), controlledBy(latency-deployment-95): Pods: 1 out of 1 created, 1 running (1 updated), 0 pending scheduled, 0 not scheduled, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0906 15:32:16.826278  283114 wait_for_pods.go:111] WaitForControlledPodsRunning: namespace(test-fik7qv-50), controlledBy(latency-deployment-96): Pods: 1 out of 1 created, 1 running (1 updated), 0 pending scheduled, 0 not scheduled, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0906 15:32:17.026897  283114 wait_for_pods.go:111] WaitForControlledPodsRunning: namespace(test-fik7qv-50), controlledBy(latency-deployment-97): Pods: 1 out of 1 created, 1 running (1 updated), 0 pending scheduled, 0 not scheduled, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0906 15:32:17.227301  283114 wait_for_pods.go:111] WaitForControlledPodsRunning: namespace(test-fik7qv-50), controlledBy(latency-deployment-98): Pods: 1 out of 1 created, 1 running (1 updated), 0 pending scheduled, 0 not scheduled, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0906 15:32:17.429089  283114 wait_for_pods.go:111] WaitForControlledPodsRunning: namespace(test-fik7qv-50), controlledBy(latency-deployment-99): Pods: 1 out of 1 created, 1 running (1 updated), 0 pending scheduled, 0 not scheduled, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0906 15:32:19.649171  283114 wait_for_controlled_pods.go:320] WaitForControlledPodsRunning: running 5000, deleted 0, timeout: 0, failed: 0
I0906 15:32:19.649212  283114 wait_for_controlled_pods.go:325] WaitForControlledPodsRunning: maxDuration=5.003298971s, operationTimeout=15m0s, ratio=0.01
I0906 15:32:19.649230  283114 wait_for_controlled_pods.go:339] WaitForControlledPodsRunning: 5000/5000 Deployments are running with all pods
I0906 15:32:19.649255  283114 simple_test_executor.go:171] Step "[step: 08] Waiting for latency pods to be running" ended
I0906 15:32:19.649274  283114 simple_test_executor.go:149] Step "[step: 09] Deleting latency pods" started
I0906 15:32:19.697593  283114 wait_for_pods.go:61] WaitForControlledPodsRunning: namespace(test-fik7qv-1), controlledBy(latency-deployment-0): starting with timeout: 14m59.999994945s
I0906 15:32:19.897222  283114 wait_for_pods.go:61] WaitForControlledPodsRunning: namespace(test-fik7qv-1), controlledBy(latency-deployment-1): starting with timeout: 14m59.999989972s
... skipping 4995 lines ...
I0906 15:49:02.025804  283114 wait_for_pods.go:61] WaitForControlledPodsRunning: namespace(test-fik7qv-50), controlledBy(latency-deployment-97): starting with timeout: 14m59.999991665s
I0906 15:49:02.223983  283114 wait_for_pods.go:61] WaitForControlledPodsRunning: namespace(test-fik7qv-50), controlledBy(latency-deployment-98): starting with timeout: 14m59.999994256s
I0906 15:49:02.427815  283114 wait_for_pods.go:61] WaitForControlledPodsRunning: namespace(test-fik7qv-50), controlledBy(latency-deployment-99): starting with timeout: 14m59.99999337s
I0906 15:49:02.600597  283114 simple_test_executor.go:171] Step "[step: 09] Deleting latency pods" ended
I0906 15:49:02.600693  283114 simple_test_executor.go:149] Step "[step: 10] Waiting for latency pods to be deleted" started
I0906 15:49:02.600774  283114 wait_for_controlled_pods.go:247] WaitForControlledPodsRunning: waiting for controlled pods measurement...
I0906 15:49:07.638667  283114 wait_for_controlled_pods.go:320] WaitForControlledPodsRunning: running 0, deleted 5000, timeout: 0, failed: 0
I0906 15:49:07.638714  283114 wait_for_controlled_pods.go:325] WaitForControlledPodsRunning: maxDuration=5.002165335s, operationTimeout=15m0s, ratio=0.01
I0906 15:49:07.638731  283114 wait_for_controlled_pods.go:339] WaitForControlledPodsRunning: 0/0 Deployments are running with all pods
I0906 15:49:07.638767  283114 simple_test_executor.go:171] Step "[step: 10] Waiting for latency pods to be deleted" ended
I0906 15:49:07.638796  283114 simple_test_executor.go:149] Step "[step: 11] Collecting pod startup latency" started
I0906 15:49:07.638863  283114 pod_startup_latency.go:226] PodStartupLatency: labelSelector(group = latency): gathering pod startup latency measurement...
I0906 15:49:12.569745  283114 phase_latency.go:141] PodStartupLatency: 100 worst run_to_watch latencies: [{test-fik7qv-49/latency-deployment-40-7bd65699c4-qjnbd 1.832359173s} {test-fik7qv-5/latency-deployment-23-f99dcd8bb-jjh74 1.833184164s} {test-fik7qv-11/latency-deployment-37-79f74676f4-xpcvg 1.833257284s} {test-fik7qv-43/latency-deployment-27-66dcbb49c4-88r44 1.834570847s} {test-fik7qv-29/latency-deployment-81-686d774d98-znzfx 1.836671369s} {test-fik7qv-36/latency-deployment-89-65cdbbb946-kkxwd 1.837637597s} {test-fik7qv-2/latency-deployment-94-5bcfd47c98-gq4kn 1.83987917s} {test-fik7qv-48/latency-deployment-71-59565f786b-bhwws 1.83998531s} {test-fik7qv-1/latency-deployment-99-7b99559cdb-4gbg5 1.840360519s} {test-fik7qv-47/latency-deployment-56-68996d4c7d-rrs2q 1.840472374s} {test-fik7qv-23/latency-deployment-98-5f8496cdfd-f9q4t 1.841113109s} {test-fik7qv-32/latency-deployment-85-5fb5949dc4-pwt8s 1.842098057s} {test-fik7qv-23/latency-deployment-28-68887ffbdb-tlm5r 1.842254468s} {test-fik7qv-25/latency-deployment-67-6dc5d95cc4-z5lxb 1.843101054s} {test-fik7qv-3/latency-deployment-24-786664884b-tm98j 1.843270272s} {test-fik7qv-47/latency-deployment-66-54fc459f98-45hxf 1.843550361s} {test-fik7qv-13/latency-deployment-16-7bdc748758-cfk4h 1.846977823s} {test-fik7qv-22/latency-deployment-88-56bc9594c8-pcs8f 1.847415479s} {test-fik7qv-26/latency-deployment-67-6dc5d95cc4-jtdkf 1.847612475s} {test-fik7qv-41/latency-deployment-28-68887ffbdb-wbdwz 1.849211563s} {test-fik7qv-14/latency-deployment-61-57dfb8fddb-ft58z 1.849577481s} {test-fik7qv-29/latency-deployment-16-7bdc748758-xggsp 1.849834703s} {test-fik7qv-39/latency-deployment-63-57bf85cbbb-zhpw6 1.850090458s} {test-fik7qv-10/latency-deployment-72-655775c5d6-msr59 1.85103064s} {test-fik7qv-33/latency-deployment-30-84c754fd58-snz8f 1.852574685s} {test-fik7qv-23/latency-deployment-23-f99dcd8bb-925wt 1.85310507s} {test-fik7qv-29/latency-deployment-91-8848bbb-752lb 1.856035443s} {test-fik7qv-22/latency-deployment-43-5f6bfc6678-h8nj8 1.859171229s} {test-fik7qv-33/latency-deployment-20-566f5ccdf8-frgdd 1.859667387s} {test-fik7qv-41/latency-deployment-68-986df55bb-hlhps 1.859671602s} {test-fik7qv-30/latency-deployment-16-7bdc748758-qfgxt 1.860184322s} {test-fik7qv-29/latency-deployment-31-779d89cd6b-mpvdk 1.861223657s} {test-fik7qv-43/latency-deployment-87-7868966fd-lqnqj 1.861588147s} {test-fik7qv-40/latency-deployment-48-5669fb4654-4x58w 1.863077711s} {test-fik7qv-36/latency-deployment-44-56dbdcc57d-g9d9x 1.864170358s} {test-fik7qv-46/latency-deployment-61-57dfb8fddb-t2mzh 1.864175934s} {test-fik7qv-39/latency-deployment-68-986df55bb-btt4g 1.866071093s} {test-fik7qv-50/latency-deployment-75-7cfbcc5bd6-spp8p 1.867599542s} {test-fik7qv-5/latency-deployment-58-98c85d7cd-7qxf8 1.867899892s} {test-fik7qv-3/latency-deployment-39-7497d5b6-wj4p6 1.868094861s} {test-fik7qv-6/latency-deployment-23-f99dcd8bb-mnkqq 1.868104588s} {test-fik7qv-19/latency-deployment-64-584d5bbff4-fhm52 1.86859122s} {test-fik7qv-16/latency-deployment-95-5df95cd7f8-xsd25 1.869217597s} {test-fik7qv-40/latency-deployment-68-986df55bb-4g92h 1.86921923s} {test-fik7qv-14/latency-deployment-26-8bcd76646-sj2cx 1.872892146s} {test-fik7qv-16/latency-deployment-5-6f85ddbf7b-kztpk 1.872959606s} {test-fik7qv-23/latency-deployment-73-cb8fd5d78-n5cch 1.874918805s} {test-fik7qv-11/latency-deployment-2-54c4fffc66-j285r 1.877274298s} {test-fik7qv-2/latency-deployment-24-786664884b-nlvpd 1.87751701s} {test-fik7qv-45/latency-deployment-37-79f74676f4-cnnxr 1.877894574s} 
{test-fik7qv-40/latency-deployment-23-f99dcd8bb-5df82 1.879923762s} {test-fik7qv-10/latency-deployment-87-7868966fd-xbnrg 1.881326857s} {test-fik7qv-44/latency-deployment-2-54c4fffc66-b8rdz 1.886357421s} {test-fik7qv-34/latency-deployment-75-7cfbcc5bd6-d7mcj 1.888468727s} {test-fik7qv-44/latency-deployment-7-854597b74d-7wtdj 1.888755394s} {test-fik7qv-4/latency-deployment-24-786664884b-m6p9z 1.891417058s} {test-fik7qv-40/latency-deployment-3-f455d494d-ksgxb 1.893528126s} {test-fik7qv-27/latency-deployment-42-684b68cd-cv5fx 1.897634011s} {test-fik7qv-48/latency-deployment-51-675d9cb57d-9jv28 1.898855052s} {test-fik7qv-17/latency-deployment-85-5fb5949dc4-xttj7 1.902372144s} {test-fik7qv-6/latency-deployment-63-57bf85cbbb-l2974 1.902407152s} {test-fik7qv-13/latency-deployment-11-5db58f7798-xtz8n 1.905045501s} {test-fik7qv-16/latency-deployment-80-7c8446fb46-749jq 1.906682387s} {test-fik7qv-36/latency-deployment-79-6d958d68d6-529qr 1.90822335s} {test-fik7qv-23/latency-deployment-53-554cc99fc4-792zl 1.908550692s} {test-fik7qv-50/latency-deployment-10-f9b7f9d94-7d4qb 1.908753719s} {test-fik7qv-6/latency-deployment-88-56bc9594c8-2d2nk 1.909880879s} {test-fik7qv-16/latency-deployment-55-9cc89d97d-wvg42 1.910215982s} {test-fik7qv-41/latency-deployment-43-5f6bfc6678-wlcvd 1.91200154s} {test-fik7qv-33/latency-deployment-35-6c76f759db-cshzc 1.912458754s} {test-fik7qv-47/latency-deployment-31-779d89cd6b-5hw69 1.918280632s} {test-fik7qv-3/latency-deployment-9-75d56b8f7b-fdw9w 1.918568546s} {test-fik7qv-13/latency-deployment-81-686d774d98-b9k8k 1.919407754s} {test-fik7qv-14/latency-deployment-86-6d459479f4-l7v2m 1.920350121s} {test-fik7qv-17/latency-deployment-90-55bc878b-244kb 1.921045528s} {test-fik7qv-4/latency-deployment-4-5448795b78-spln6 1.924294582s} {test-fik7qv-24/latency-deployment-28-68887ffbdb-qztjs 1.929948907s} {test-fik7qv-37/latency-deployment-19-7666f48cb6-xfrg4 1.930081886s} {test-fik7qv-7/latency-deployment-78-64d498b88b-bsd7d 1.93023472s} {test-fik7qv-17/latency-deployment-20-566f5ccdf8-g4522 1.932109328s} {test-fik7qv-16/latency-deployment-90-55bc878b-gx9gp 1.932519622s} {test-fik7qv-48/latency-deployment-76-58594c695d-79tpc 1.934323294s} {test-fik7qv-19/latency-deployment-94-5bcfd47c98-mtlnr 1.934604307s} {test-fik7qv-41/latency-deployment-38-58cc8dd95d-n27qg 1.936655113s} {test-fik7qv-14/latency-deployment-96-78f767bfc4-v7vgf 1.939855869s} {test-fik7qv-45/latency-deployment-22-56768586b-wk8hh 1.940864152s} {test-fik7qv-20/latency-deployment-79-6d958d68d6-pf5zd 1.941742015s} {test-fik7qv-50/latency-deployment-65-5d597c5f78-l6vrr 1.942427354s} {test-fik7qv-27/latency-deployment-32-df6c6db6-khq8p 1.943633164s} {test-fik7qv-10/latency-deployment-22-56768586b-7xt6w 1.944925036s} {test-fik7qv-14/latency-deployment-91-8848bbb-4j7cv 1.946118339s} {test-fik7qv-38/latency-deployment-9-75d56b8f7b-rvz5d 1.950009279s} {test-fik7qv-40/latency-deployment-98-5f8496cdfd-cwnts 1.953696241s} {test-fik7qv-21/latency-deployment-34-6b4b55fd7d-vbzpq 1.954684936s} {test-fik7qv-34/latency-deployment-5-6f85ddbf7b-crgfb 1.962177582s} {test-fik7qv-26/latency-deployment-97-55fb844d6-tmsdd 1.968624703s} {test-fik7qv-37/latency-deployment-89-65cdbbb946-cffwq 1.968668856s} {test-fik7qv-31/latency-deployment-31-779d89cd6b-wb4cj 1.969477372s} {test-fik7qv-7/latency-deployment-63-57bf85cbbb-nh55v 1.977809176s} {test-fik7qv-5/latency-deployment-88-56bc9594c8-5lmqf 2.807919889s}]
... skipping 489 lines ...
I0906 16:13:50.645319  283114 wait_for_pods.go:111] WaitForControlledPodsRunning: namespace(test-fik7qv-50), controlledBy(saturation-deployment-0): Pods: 2234 out of 0 created, 2234 running (2234 updated), 0 pending scheduled, 0 not scheduled, 0 inactive, 62 terminating, 0 unknown, 0 runningButNotReady 
I0906 16:13:55.679926  283114 wait_for_pods.go:111] WaitForControlledPodsRunning: namespace(test-fik7qv-50), controlledBy(saturation-deployment-0): Pods: 1731 out of 0 created, 1731 running (1731 updated), 0 pending scheduled, 0 not scheduled, 0 inactive, 56 terminating, 0 unknown, 0 runningButNotReady 
I0906 16:14:00.704805  283114 wait_for_pods.go:111] WaitForControlledPodsRunning: namespace(test-fik7qv-50), controlledBy(saturation-deployment-0): Pods: 1229 out of 0 created, 1229 running (1229 updated), 0 pending scheduled, 0 not scheduled, 0 inactive, 58 terminating, 0 unknown, 0 runningButNotReady 
I0906 16:14:05.718895  283114 wait_for_pods.go:111] WaitForControlledPodsRunning: namespace(test-fik7qv-50), controlledBy(saturation-deployment-0): Pods: 726 out of 0 created, 726 running (726 updated), 0 pending scheduled, 0 not scheduled, 0 inactive, 60 terminating, 0 unknown, 0 runningButNotReady 
I0906 16:14:10.723902  283114 wait_for_pods.go:111] WaitForControlledPodsRunning: namespace(test-fik7qv-50), controlledBy(saturation-deployment-0): Pods: 226 out of 0 created, 226 running (226 updated), 0 pending scheduled, 0 not scheduled, 0 inactive, 59 terminating, 0 unknown, 0 runningButNotReady 
I0906 16:14:15.724421  283114 wait_for_pods.go:111] WaitForControlledPodsRunning: namespace(test-fik7qv-50), controlledBy(saturation-deployment-0): Pods: 0 out of 0 created, 0 running (0 updated), 0 pending scheduled, 0 not scheduled, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0906 16:14:15.724518  283114 wait_for_controlled_pods.go:320] WaitForControlledPodsRunning: running 0, deleted 50, timeout: 0, failed: 0
I0906 16:14:15.724556  283114 wait_for_controlled_pods.go:325] WaitForControlledPodsRunning: maxDuration=24m53.240847423s, operationTimeout=2h8m0s, ratio=0.19
I0906 16:14:15.724589  283114 wait_for_controlled_pods.go:339] WaitForControlledPodsRunning: 0/0 Deployments are running with all pods
I0906 16:14:15.724643  283114 simple_test_executor.go:171] Step "[step: 13] Waiting for saturation pods to be deleted" ended
I0906 16:14:15.724727  283114 simple_test_executor.go:149] Step "[step: 14] Collecting measurements" started
I0906 16:14:15.724892  283114 prometheus_measurement.go:91] APIResponsivenessPrometheusSimple gathering results
I0906 16:14:15.724973  283114 probes.go:115] InClusterNetworkLatency: Probes cannot work in kubemark, skipping the measurement!
... skipping 91 lines ...
Specify --start=76646 in the next get-serial-port-output invocation to get only the new output starting from here.
scp: /var/log/cluster-autoscaler.log*: No such file or directory
scp: /var/log/konnectivity-server.log*: No such file or directory
scp: /var/log/fluentd.log*: No such file or directory
scp: /var/log/kubelet.cov*: No such file or directory
scp: /var/log/startupscript.log*: No such file or directory
ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
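The "No such file or directory" lines come from optional log globs that are simply absent on this master, and scp's non-zero exit is what gcloud surfaces as return code [1]; a rough reproduction of the kind of call the log-dump step issues (the destination directory and the single glob here are assumptions):

  gcloud compute scp --project=k8s-infra-e2e-boskos-scale-14 --zone=us-east1-b \
    'kubemark-5000-master:/var/log/cluster-autoscaler.log*' /tmp/master-logs/ || true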
Dumping logs from nodes to GCS directly at 'gs://k8s-infra-scalability-tests-logs/ci-kubernetes-kubemark-gce-scale-scheduler/1567090697473888256' using logexporter
namespace/logexporter created
secret/google-service-account created
daemonset.apps/logexporter created
Listing marker files (gs://k8s-infra-scalability-tests-logs/ci-kubernetes-kubemark-gce-scale-scheduler/1567090697473888256/logexported-nodes-registry) for successful nodes...
CommandException: One or more URLs matched no objects.
... skipping 195 lines ...
scp: /var/log/node-problem-detector.log*: No such file or directory
scp: /var/log/kubelet.cov*: No such file or directory
scp: /var/log/kubelet-hollow-node-*.log*: No such file or directory
scp: /var/log/kubeproxy-hollow-node-*.log*: No such file or directory
scp: /var/log/npd-hollow-node-*.log*: No such file or directory
scp: /var/log/startupscript.log*: No such file or directory
ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
Detecting nodes in the cluster
INSTANCE_GROUPS=kubemark-5000-minion-group
NODE_NAMES=kubemark-5000-minion-group-03ht kubemark-5000-minion-group-176k kubemark-5000-minion-group-233j kubemark-5000-minion-group-2993 kubemark-5000-minion-group-2d5p kubemark-5000-minion-group-2lp0 kubemark-5000-minion-group-33d3 kubemark-5000-minion-group-36dv kubemark-5000-minion-group-41w8 kubemark-5000-minion-group-48qc kubemark-5000-minion-group-4c8l kubemark-5000-minion-group-4mgx kubemark-5000-minion-group-5m45 kubemark-5000-minion-group-5msn kubemark-5000-minion-group-5t3n kubemark-5000-minion-group-6174 kubemark-5000-minion-group-662j kubemark-5000-minion-group-67dn kubemark-5000-minion-group-6j8c kubemark-5000-minion-group-74rx kubemark-5000-minion-group-8235 kubemark-5000-minion-group-8ssb kubemark-5000-minion-group-8ztq kubemark-5000-minion-group-906b kubemark-5000-minion-group-b3sd kubemark-5000-minion-group-b7kw kubemark-5000-minion-group-bhlm kubemark-5000-minion-group-bl1z kubemark-5000-minion-group-c699 kubemark-5000-minion-group-cqz7 kubemark-5000-minion-group-cz66 kubemark-5000-minion-group-dng4 kubemark-5000-minion-group-dsjh kubemark-5000-minion-group-dv50 kubemark-5000-minion-group-f3sf kubemark-5000-minion-group-fql2 kubemark-5000-minion-group-gt7c kubemark-5000-minion-group-gvr2 kubemark-5000-minion-group-gxbz kubemark-5000-minion-group-hlpt kubemark-5000-minion-group-hm1q kubemark-5000-minion-group-hr43 kubemark-5000-minion-group-jhcs kubemark-5000-minion-group-jknm kubemark-5000-minion-group-jqkl kubemark-5000-minion-group-jqs6 kubemark-5000-minion-group-k3lt kubemark-5000-minion-group-kh55 kubemark-5000-minion-group-kxdp kubemark-5000-minion-group-kxv3 kubemark-5000-minion-group-kzbb kubemark-5000-minion-group-l18j kubemark-5000-minion-group-lg0s kubemark-5000-minion-group-llz1 kubemark-5000-minion-group-lv00 kubemark-5000-minion-group-lxcz kubemark-5000-minion-group-lzwx kubemark-5000-minion-group-m29g kubemark-5000-minion-group-mb5s kubemark-5000-minion-group-mf1q kubemark-5000-minion-group-n1f6 kubemark-5000-minion-group-n320 kubemark-5000-minion-group-pjv9 kubemark-5000-minion-group-pm31 kubemark-5000-minion-group-pp66 kubemark-5000-minion-group-qc64 kubemark-5000-minion-group-qmbn kubemark-5000-minion-group-r11m kubemark-5000-minion-group-r6lx kubemark-5000-minion-group-rqlg kubemark-5000-minion-group-rsrf kubemark-5000-minion-group-rvfm kubemark-5000-minion-group-rz19 kubemark-5000-minion-group-t1g8 kubemark-5000-minion-group-t8w1 kubemark-5000-minion-group-tf68 kubemark-5000-minion-group-v17j kubemark-5000-minion-group-v4q7 kubemark-5000-minion-group-vfrv kubemark-5000-minion-group-w0k0 kubemark-5000-minion-group-w7s4 kubemark-5000-minion-group-xjcl kubemark-5000-minion-group-z6xs kubemark-5000-minion-heapster
WINDOWS_INSTANCE_GROUPS=
WINDOWS_NODE_NAMES=
Uploading '/tmp/tmp.ze9JtUnfQp/logs' to 'gs://k8s-infra-scalability-tests-logs/ci-kubernetes-kubemark-gce-scale-scheduler/1567090697473888256'
... skipping 179 lines ...
kubemark-5000-kubemark-master-etcd
kubemark-5000-kubemark-master-https
kubemark-5000-kubemark-minion-all
kubemark-5000-kubemark-minion-http-alt
kubemark-5000-kubemark-minion-nodeports
Deleting custom subnet...
ERROR: (gcloud.compute.networks.subnets.delete) Could not fetch resource:
 - The subnetwork resource 'projects/k8s-infra-e2e-boskos-scale-14/regions/us-east1/subnetworks/kubemark-5000-custom-subnet' is already being used by 'projects/k8s-infra-e2e-boskos-scale-14/zones/us-east1-b/instances/kubemark-5000-kubemark-master'

ERROR: (gcloud.compute.networks.delete) Could not fetch resource:
 - The network resource 'projects/k8s-infra-e2e-boskos-scale-14/global/networks/kubemark-5000' is already being used by 'projects/k8s-infra-e2e-boskos-scale-14/global/firewalls/kubemark-5000-kubemark-minion-nodeports'

Failed to delete network 'kubemark-5000'. Listing firewall-rules:
NAME                                            NETWORK        DIRECTION  PRIORITY  ALLOW                                       DENY  DISABLED
kubemark-5000-kubemark-default-internal-master  kubemark-5000  INGRESS    1000      tcp:1-2379,tcp:2382-65535,udp:1-65535,icmp        False
kubemark-5000-kubemark-default-internal-node    kubemark-5000  INGRESS    1000      tcp:1-65535,udp:1-65535,icmp                      False
kubemark-5000-kubemark-master-etcd              kubemark-5000  INGRESS    1000      tcp:2380,tcp:2381                                 False
kubemark-5000-kubemark-master-https             kubemark-5000  INGRESS    1000      tcp:443                                           False
kubemark-5000-kubemark-minion-all               kubemark-5000  INGRESS    1000      tcp,udp,icmp,esp,ah,sctp                          False
... skipping 13 lines ...
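The subnet and network deletions above fail because the kubemark master instance and the leftover firewall rules still reference them; once the instance is gone, a hedged manual-cleanup sketch (resource names taken from the errors above, flags are standard gcloud) would be:

  P=k8s-infra-e2e-boskos-scale-14
  gcloud compute firewall-rules list --project="$P" --filter='network:kubemark-5000' \
    --format='value(name)' | xargs -r gcloud compute firewall-rules delete --project="$P" --quiet
  gcloud compute networks subnets delete kubemark-5000-custom-subnet --project="$P" --region=us-east1 --quiet
  gcloud compute networks delete kubemark-5000 --project="$P" --quiet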
scp: /var/log/glbc.log*: No such file or directory
scp: /var/log/cluster-autoscaler.log*: No such file or directory
scp: /var/log/konnectivity-server.log*: No such file or directory
scp: /var/log/fluentd.log*: No such file or directory
scp: /var/log/kubelet.cov*: No such file or directory
scp: /var/log/startupscript.log*: No such file or directory
ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
Skipping dumping of node logs
Detecting nodes in the cluster
WARNING: The following filter keys were not present in any resource : name, zone
WARNING: The following filter keys were not present in any resource : name, zone
INSTANCE_GROUPS=
NODE_NAMES=kubemark-5000-minion-heapster
... skipping 65 lines ...
WARNING: The following filter keys were not present in any resource : name, zone
INSTANCE_GROUPS=
NODE_NAMES=kubemark-5000-kubemark-minion-heapster
Bringing down cluster
Deleted [https://www.googleapis.com/compute/v1/projects/k8s-infra-e2e-boskos-scale-14/global/instanceTemplates/kubemark-5000-kubemark-minion-template].
Deleted [https://www.googleapis.com/compute/v1/projects/k8s-infra-e2e-boskos-scale-14/global/instanceTemplates/kubemark-5000-kubemark-windows-node-template].
Failed to execute 'curl -s --cacert /etc/srv/kubernetes/pki/etcd-apiserver-ca.crt --cert /etc/srv/kubernetes/pki/etcd-apiserver-client.crt --key /etc/srv/kubernetes/pki/etcd-apiserver-client.key https://127.0.0.1:2379/v2/members/$(curl -s --cacert /etc/srv/kubernetes/pki/etcd-apiserver-ca.crt --cert /etc/srv/kubernetes/pki/etcd-apiserver-client.crt --key /etc/srv/kubernetes/pki/etcd-apiserver-client.key https://127.0.0.1:2379/v2/members -XGET | sed 's/{\"id/\n/g' | grep kubemark-5000-kubemark-master\" | cut -f 3 -d \") -XDELETE -L 2>/dev/null' on kubemark-5000-kubemark-master despite 5 attempts
Last attempt failed with: ssh: connect to host 34.148.195.21 port 22: Connection timed out


Recommendation: To check for possible causes of SSH connectivity issues and get
recommendations, rerun the ssh command with the --troubleshoot option.

gcloud compute ssh kubemark-5000-kubemark-master --project=k8s-infra-e2e-boskos-scale-14 --zone=us-east1-b --troubleshoot

Or, to investigate an IAP tunneling issue:

gcloud compute ssh kubemark-5000-kubemark-master --project=k8s-infra-e2e-boskos-scale-14 --zone=us-east1-b --troubleshoot --tunnel-through-iap

ERROR: (gcloud.compute.ssh) [/usr/bin/ssh] exited with return code [255].
Removing etcd replica, name: kubemark-5000-kubemark-master, port: 2379, result: 1
Failed to execute 'curl -s  http://127.0.0.1:4002/v2/members/$(curl -s  http://127.0.0.1:4002/v2/members -XGET | sed 's/{\"id/\n/g' | grep kubemark-5000-kubemark-master\" | cut -f 3 -d \") -XDELETE -L 2>/dev/null' on kubemark-5000-kubemark-master despite 5 attempts
Last attempt failed with: ssh: connect to host 34.148.195.21 port 22: Connection timed out


Recommendation: To check for possible causes of SSH connectivity issues and get
recommendations, rerun the ssh command with the --troubleshoot option.

gcloud compute ssh kubemark-5000-kubemark-master --project=k8s-infra-e2e-boskos-scale-14 --zone=us-east1-b --troubleshoot

Or, to investigate an IAP tunneling issue:

gcloud compute ssh kubemark-5000-kubemark-master --project=k8s-infra-e2e-boskos-scale-14 --zone=us-east1-b --troubleshoot --tunnel-through-iap

ERROR: (gcloud.compute.ssh) [/usr/bin/ssh] exited with return code [255].
Removing etcd replica, name: kubemark-5000-kubemark-master, port: 4002, result: 1
Updated [https://www.googleapis.com/compute/v1/projects/k8s-infra-e2e-boskos-scale-14/zones/us-east1-b/instances/kubemark-5000-kubemark-master].
Deleted [https://www.googleapis.com/compute/v1/projects/k8s-infra-e2e-boskos-scale-14/zones/us-east1-b/instances/kubemark-5000-kubemark-master].
WARNING: The following filter keys were not present in any resource : name
Deleted [https://www.googleapis.com/compute/v1/projects/k8s-infra-e2e-boskos-scale-14/global/firewalls/kubemark-5000-kubemark-master-https].
Deleted [https://www.googleapis.com/compute/v1/projects/k8s-infra-e2e-boskos-scale-14/global/firewalls/kubemark-5000-kubemark-minion-all].
... skipping 32 lines ...