PR: mborsz: [WIP] no statefulsets
Result: FAILURE
Tests: 1 failed / 12 succeeded
Started:
Elapsed: 32m40s
Revision: 2b4d4641889eae277d1abfc70eed1a9fcf0a78fc
Refs: 2208
job-version: v1.27.0-alpha.0.46+8f2371bcceff79
kubetest-version: v20221116-7c85504268
revision: v1.27.0-alpha.0.46+8f2371bcceff79
error during /home/prow/go/src/k8s.io/perf-tests/run-e2e.sh cluster-loader2 --nodes=100 --prometheus-scrape-node-exporter --provider=gce --report-dir=/logs/artifacts --testconfig=testing/load/config.yaml --testconfig=testing/huge-service/config.yaml --testconfig=testing/access-tokens/config.yaml --testoverrides=./testing/experiments/enable_restart_count_check.yaml --testoverrides=./testing/experiments/use_simple_latency_query.yaml --testoverrides=./testing/overrides/load_throughput.yaml: exit status 2
from junit_runner.xml
kubetest Check APIReachability
kubetest Deferred TearDown
kubetest DumpClusterLogs
kubetest Extract
kubetest GetDeployer
kubetest Prepare
kubetest TearDown
kubetest TearDown Previous
kubetest Timeout
kubetest Up
kubetest list nodes
kubetest test setup
... skipping 410 lines ...
Looking for address 'e2e-2208-62db2-master-ip'
Looking for address 'e2e-2208-62db2-master-internal-ip'
Using master: e2e-2208-62db2-master (external IP: 34.23.70.14; internal IP: 10.40.0.2)
Waiting up to 300 seconds for cluster initialization.
This will continually check to see if the API for kubernetes is reachable.
This may time out if there was some uncaught error during start up.
Kubernetes cluster created.
Cluster "k8s-presubmit-scale-33_e2e-2208-62db2" set.
User "k8s-presubmit-scale-33_e2e-2208-62db2" set.
Context "k8s-presubmit-scale-33_e2e-2208-62db2" created.
Switched to context "k8s-presubmit-scale-33_e2e-2208-62db2".
... skipping 236 lines ...
e2e-2208-62db2-minion-group-xxbl   Ready    <none>   47s   v1.27.0-alpha.0.46+8f2371bcceff79
e2e-2208-62db2-minion-group-zws7   Ready    <none>   51s   v1.27.0-alpha.0.46+8f2371bcceff79
e2e-2208-62db2-minion-heapster     Ready    <none>   71s   v1.27.0-alpha.0.46+8f2371bcceff79
Warning: v1 ComponentStatus is deprecated in v1.19+
Validate output:
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME                 STATUS    MESSAGE                         ERROR
etcd-1               Healthy   {"health":"true","reason":""}
etcd-0               Healthy   {"health":"true","reason":""}
controller-manager   Healthy   ok
scheduler            Healthy   ok
Cluster validation encountered some problems, but cluster should be in working order
...ignoring non-fatal errors in validate-cluster
Done, listing cluster services:
Kubernetes control plane is running at https://34.23.70.14
GLBCDefaultBackend is running at https://34.23.70.14/api/v1/namespaces/kube-system/services/default-http-backend:http/proxy
CoreDNS is running at https://34.23.70.14/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
Metrics-server is running at https://34.23.70.14/api/v1/namespaces/kube-system/services/https:metrics-server:/proxy
... skipping 1099 lines ...
I1124 11:18:45.914878 16020 wait_for_pods.go:111] WaitForControlledPodsRunning: namespace(test-0vgsu3-1), controlledBy(small-deployment-40): Pods: 5 out of 5 created, 5 running (5 updated), 0 pending scheduled, 0 not scheduled, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1124 11:18:46.501802 16020 wait_for_pods.go:111] WaitForControlledPodsRunning: namespace(test-0vgsu3-1), controlledBy(small-deployment-58): Pods: 5 out of 5 created, 5 running (5 updated), 0 pending scheduled, 0 not scheduled, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1124 11:18:46.816780 16020 wait_for_pods.go:111] WaitForControlledPodsRunning: namespace(test-0vgsu3-1), controlledBy(daemonset-0): Pods: 100 out of 100 created, 100 running (100 updated), 0 pending scheduled, 0 not scheduled, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1124 11:18:47.788240 16020 wait_for_pods.go:111] WaitForControlledPodsRunning: namespace(test-0vgsu3-1), controlledBy(small-deployment-228): Pods: 5 out of 5 created, 5 running (5 updated), 0 pending scheduled, 0 not scheduled, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1124 11:18:48.488703 16020 wait_for_pods.go:111] WaitForControlledPodsRunning: namespace(test-0vgsu3-1), controlledBy(medium-deployment-8): Pods: 30 out of 30 created, 30 running (30 updated), 0 pending scheduled, 0 not scheduled, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1124 11:18:48.516377 16020 wait_for_pods.go:111] WaitForControlledPodsRunning: namespace(test-0vgsu3-1), controlledBy(small-deployment-45): Pods: 5 out of 5 created, 5 running (5 updated), 0 pending scheduled, 0 not scheduled, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1124 11:18:48.552398 16020 wait_for_controlled_pods.go:319] WaitForControlledPodsRunning: running 3, deleted 0, timeout: 0, failed: 0 I1124 11:18:48.552628 16020 wait_for_controlled_pods.go:324] WaitForControlledPodsRunning: maxDuration=5.006056658s, 
operationTimeout=15m0s, ratio=0.01 I1124 11:18:48.552755 16020 wait_for_controlled_pods.go:338] WaitForControlledPodsRunning: 3/3 Jobs are running with all pods I1124 11:18:48.557670 16020 wait_for_controlled_pods.go:319] WaitForControlledPodsRunning: running 1, deleted 0, timeout: 0, failed: 0 I1124 11:18:48.557778 16020 wait_for_controlled_pods.go:324] WaitForControlledPodsRunning: maxDuration=5.00320272s, operationTimeout=15m0s, ratio=0.01 I1124 11:18:48.557816 16020 wait_for_controlled_pods.go:338] WaitForControlledPodsRunning: 1/1 DaemonSets are running with all pods I1124 11:18:48.557741 16020 wait_for_controlled_pods.go:319] WaitForControlledPodsRunning: running 0, deleted 0, timeout: 0, failed: 0 I1124 11:18:48.557871 16020 wait_for_controlled_pods.go:324] WaitForControlledPodsRunning: maxDuration=0s, operationTimeout=15m0s, ratio=0.00 I1124 11:18:48.557882 16020 wait_for_controlled_pods.go:338] WaitForControlledPodsRunning: 0/0 StatefulSets are running with all pods I1124 11:18:48.844681 16020 wait_for_controlled_pods.go:319] WaitForControlledPodsRunning: running 325, deleted 0, timeout: 0, failed: 0 I1124 11:18:48.844728 16020 wait_for_controlled_pods.go:324] WaitForControlledPodsRunning: maxDuration=10.004058114s, operationTimeout=15m0s, ratio=0.01 I1124 11:18:48.844748 16020 wait_for_controlled_pods.go:338] WaitForControlledPodsRunning: 325/325 Deployments are running with all pods I1124 11:18:48.844781 16020 simple_test_executor.go:171] Step "[step: 07] Waiting for 'create objects' to be completed" ended I1124 11:18:48.844845 16020 simple_test_executor.go:149] Step "[step: 08] Creating scheduler throughput measurements" started I1124 11:18:48.845007 16020 pod_startup_latency.go:132] PodStartupLatency: labelSelector(group = scheduler-throughput): starting pod startup latency measurement... I1124 11:18:48.845058 16020 wait_for_controlled_pods.go:215] WaitForControlledPodsRunning: starting wait for controlled pods measurement... ... skipping 4 lines ... 
I1124 11:18:48.988007 16020 simple_test_executor.go:171] Step "[step: 09] create scheduler throughput pods" ended I1124 11:18:48.988040 16020 simple_test_executor.go:149] Step "[step: 10] Waiting for scheduler throughput pods to be created" started I1124 11:18:48.988083 16020 wait_for_controlled_pods.go:246] WaitForControlledPodsRunning: waiting for controlled pods measurement... I1124 11:18:53.997922 16020 wait_for_pods.go:111] WaitForControlledPodsRunning: namespace(test-0vgsu3-2), controlledBy(scheduler-throughput-deployment-0): Pods: 573 out of 1000 created, 430 running (430 updated), 141 pending scheduled, 2 not scheduled, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1124 11:18:59.023089 16020 wait_for_pods.go:111] WaitForControlledPodsRunning: namespace(test-0vgsu3-2), controlledBy(scheduler-throughput-deployment-0): Pods: 1000 out of 1000 created, 915 running (915 updated), 85 pending scheduled, 0 not scheduled, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1124 11:19:04.044730 16020 wait_for_pods.go:111] WaitForControlledPodsRunning: namespace(test-0vgsu3-2), controlledBy(scheduler-throughput-deployment-0): Pods: 1000 out of 1000 created, 1000 running (1000 updated), 0 pending scheduled, 0 not scheduled, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1124 11:19:04.044850 16020 wait_for_controlled_pods.go:319] WaitForControlledPodsRunning: running 1, deleted 0, timeout: 0, failed: 0 I1124 11:19:04.044872 16020 wait_for_controlled_pods.go:324] WaitForControlledPodsRunning: maxDuration=15.056953335s, operationTimeout=20m0s, ratio=0.01 I1124 11:19:04.044906 16020 wait_for_controlled_pods.go:338] WaitForControlledPodsRunning: 1/1 Deployments are running with all pods I1124 11:19:04.044931 16020 simple_test_executor.go:171] Step "[step: 10] Waiting for scheduler throughput pods to be created" ended I1124 11:19:04.044956 16020 simple_test_executor.go:149] Step "[step: 11] Collecting scheduler throughput measurements" 
started I1124 11:19:04.045019 16020 pod_startup_latency.go:226] PodStartupLatency: labelSelector(group = scheduler-throughput): gathering pod startup latency measurement... I1124 11:19:04.045442 16020 scheduling_throughput.go:154] SchedulingThroughput: gathering data ... skipping 33 lines ... I1124 11:19:04.488630 16020 simple_test_executor.go:149] Step "[step: 13] Waiting for scheduler throughput pods to be deleted" started I1124 11:19:04.488674 16020 wait_for_controlled_pods.go:246] WaitForControlledPodsRunning: waiting for controlled pods measurement... I1124 11:19:04.488699 16020 wait_for_pods.go:61] WaitForControlledPodsRunning: namespace(test-0vgsu3-2), controlledBy(scheduler-throughput-deployment-0): starting with timeout: 19m59.99997493s I1124 11:19:09.500542 16020 wait_for_pods.go:111] WaitForControlledPodsRunning: namespace(test-0vgsu3-2), controlledBy(scheduler-throughput-deployment-0): Pods: 458 out of 0 created, 458 running (458 updated), 0 pending scheduled, 0 not scheduled, 0 inactive, 85 terminating, 0 unknown, 0 runningButNotReady I1124 11:19:14.501423 16020 wait_for_pods.go:111] WaitForControlledPodsRunning: namespace(test-0vgsu3-2), controlledBy(scheduler-throughput-deployment-0): Pods: 0 out of 0 created, 0 running (0 updated), 0 pending scheduled, 0 not scheduled, 0 inactive, 42 terminating, 0 unknown, 0 runningButNotReady I1124 11:19:19.501667 16020 wait_for_pods.go:111] WaitForControlledPodsRunning: namespace(test-0vgsu3-2), controlledBy(scheduler-throughput-deployment-0): Pods: 0 out of 0 created, 0 running (0 updated), 0 pending scheduled, 0 not scheduled, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1124 11:19:19.501740 16020 wait_for_controlled_pods.go:319] WaitForControlledPodsRunning: running 0, deleted 1, timeout: 0, failed: 0 I1124 11:19:19.501813 16020 wait_for_controlled_pods.go:324] WaitForControlledPodsRunning: maxDuration=15.013010552s, operationTimeout=20m0s, ratio=0.01 I1124 11:19:19.501835 16020 
wait_for_controlled_pods.go:338] WaitForControlledPodsRunning: 0/0 Deployments are running with all pods I1124 11:19:19.501874 16020 simple_test_executor.go:171] Step "[step: 13] Waiting for scheduler throughput pods to be deleted" ended I1124 11:19:19.501899 16020 simple_test_executor.go:149] Step "[step: 14] Starting latency pod measurements" started I1124 11:19:19.501955 16020 wait_for_controlled_pods.go:215] WaitForControlledPodsRunning: starting wait for controlled pods measurement... I1124 11:19:19.502047 16020 pod_startup_latency.go:132] PodStartupLatency: labelSelector(group = latency): starting pod startup latency measurement... ... skipping 179 lines ... I1124 11:19:39.888252 16020 wait_for_pods.go:111] WaitForControlledPodsRunning: namespace(test-0vgsu3-1), controlledBy(latency-deployment-76): Pods: 1 out of 1 created, 1 running (1 updated), 0 pending scheduled, 0 not scheduled, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1124 11:19:39.900749 16020 wait_for_pods.go:61] WaitForControlledPodsRunning: namespace(test-0vgsu3-1), controlledBy(latency-deployment-101): starting with timeout: 14m59.999975033s I1124 11:19:40.089235 16020 wait_for_pods.go:111] WaitForControlledPodsRunning: namespace(test-0vgsu3-1), controlledBy(latency-deployment-77): Pods: 1 out of 1 created, 1 running (1 updated), 0 pending scheduled, 0 not scheduled, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1124 11:19:40.101974 16020 wait_for_pods.go:61] WaitForControlledPodsRunning: namespace(test-0vgsu3-1), controlledBy(latency-deployment-102): starting with timeout: 14m59.999970136s I1124 11:19:40.287893 16020 wait_for_pods.go:111] WaitForControlledPodsRunning: namespace(test-0vgsu3-1), controlledBy(latency-deployment-78): Pods: 1 out of 1 created, 1 running (1 updated), 0 pending scheduled, 0 not scheduled, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1124 11:19:40.302757 16020 wait_for_pods.go:61] WaitForControlledPodsRunning: 
namespace(test-0vgsu3-1), controlledBy(latency-deployment-103): starting with timeout: 14m59.99996692s error dialing prow@34.23.70.14:22: 'ssh: handshake failed: read tcp 10.60.202.33:33422->34.23.70.14:22: read: connection reset by peer', retrying error dialing prow@34.23.70.14:22: 'ssh: handshake failed: read tcp 10.60.202.33:33430->34.23.70.14:22: read: connection reset by peer', retrying I1124 11:19:40.489782 16020 wait_for_pods.go:111] WaitForControlledPodsRunning: namespace(test-0vgsu3-1), controlledBy(latency-deployment-79): Pods: 1 out of 1 created, 1 running (1 updated), 0 pending scheduled, 0 not scheduled, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1124 11:19:40.504648 16020 wait_for_pods.go:61] WaitForControlledPodsRunning: namespace(test-0vgsu3-1), controlledBy(latency-deployment-104): starting with timeout: 14m59.999975415s I1124 11:19:40.688811 16020 wait_for_pods.go:111] WaitForControlledPodsRunning: namespace(test-0vgsu3-1), controlledBy(latency-deployment-80): Pods: 1 out of 1 created, 1 running (1 updated), 0 pending scheduled, 0 not scheduled, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1124 11:19:40.706668 16020 wait_for_pods.go:61] WaitForControlledPodsRunning: namespace(test-0vgsu3-1), controlledBy(latency-deployment-105): starting with timeout: 14m59.999974965s I1124 11:19:40.889310 16020 wait_for_pods.go:111] WaitForControlledPodsRunning: namespace(test-0vgsu3-1), controlledBy(latency-deployment-81): Pods: 1 out of 1 created, 1 running (1 updated), 0 pending scheduled, 0 not scheduled, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1124 11:19:40.906369 16020 wait_for_pods.go:61] WaitForControlledPodsRunning: namespace(test-0vgsu3-1), controlledBy(latency-deployment-106): starting with timeout: 14m59.999976649s ... skipping 808 lines ... 
I1124 11:21:03.734580 16020 wait_for_pods.go:111] WaitForControlledPodsRunning: namespace(test-0vgsu3-1), controlledBy(latency-deployment-494): Pods: 1 out of 1 created, 1 running (1 updated), 0 pending scheduled, 0 not scheduled, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1124 11:21:03.937479 16020 wait_for_pods.go:111] WaitForControlledPodsRunning: namespace(test-0vgsu3-1), controlledBy(latency-deployment-495): Pods: 1 out of 1 created, 1 running (1 updated), 0 pending scheduled, 0 not scheduled, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1124 11:21:04.136812 16020 wait_for_pods.go:111] WaitForControlledPodsRunning: namespace(test-0vgsu3-1), controlledBy(latency-deployment-496): Pods: 1 out of 1 created, 1 running (1 updated), 0 pending scheduled, 0 not scheduled, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1124 11:21:04.337327 16020 wait_for_pods.go:111] WaitForControlledPodsRunning: namespace(test-0vgsu3-1), controlledBy(latency-deployment-497): Pods: 1 out of 1 created, 1 running (1 updated), 0 pending scheduled, 0 not scheduled, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1124 11:21:04.547754 16020 wait_for_pods.go:111] WaitForControlledPodsRunning: namespace(test-0vgsu3-1), controlledBy(latency-deployment-498): Pods: 1 out of 1 created, 1 running (1 updated), 0 pending scheduled, 0 not scheduled, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1124 11:21:04.738795 16020 wait_for_pods.go:111] WaitForControlledPodsRunning: namespace(test-0vgsu3-1), controlledBy(latency-deployment-499): Pods: 1 out of 1 created, 1 running (1 updated), 0 pending scheduled, 0 not scheduled, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1124 11:21:05.133625 16020 wait_for_controlled_pods.go:319] WaitForControlledPodsRunning: running 500, deleted 0, timeout: 0, failed: 0 I1124 11:21:05.133694 16020 wait_for_controlled_pods.go:324] WaitForControlledPodsRunning: maxDuration=5.002590247s, 
operationTimeout=15m0s, ratio=0.01 I1124 11:21:05.133766 16020 wait_for_controlled_pods.go:338] WaitForControlledPodsRunning: 500/500 Deployments are running with all pods I1124 11:21:05.133862 16020 simple_test_executor.go:171] Step "[step: 16] Waiting for latency pods to be running" ended I1124 11:21:05.133907 16020 simple_test_executor.go:149] Step "[step: 17] Deleting latency pods" started I1124 11:21:05.172174 16020 wait_for_pods.go:61] WaitForControlledPodsRunning: namespace(test-0vgsu3-1), controlledBy(latency-deployment-0): starting with timeout: 14m59.999974673s I1124 11:21:05.272525 16020 wait_for_pods.go:61] WaitForControlledPodsRunning: namespace(test-0vgsu3-1), controlledBy(latency-deployment-1): starting with timeout: 14m59.999975955s ... skipping 495 lines ... I1124 11:21:54.872279 16020 wait_for_pods.go:61] WaitForControlledPodsRunning: namespace(test-0vgsu3-1), controlledBy(latency-deployment-497): starting with timeout: 14m59.999974648s I1124 11:21:54.977577 16020 wait_for_pods.go:61] WaitForControlledPodsRunning: namespace(test-0vgsu3-1), controlledBy(latency-deployment-498): starting with timeout: 14m59.999975884s I1124 11:21:55.072687 16020 wait_for_pods.go:61] WaitForControlledPodsRunning: namespace(test-0vgsu3-1), controlledBy(latency-deployment-499): starting with timeout: 14m59.999977208s I1124 11:21:55.072808 16020 simple_test_executor.go:171] Step "[step: 17] Deleting latency pods" ended I1124 11:21:55.072922 16020 simple_test_executor.go:149] Step "[step: 18] Waiting for latency pods to be deleted" started I1124 11:21:55.072978 16020 wait_for_controlled_pods.go:246] WaitForControlledPodsRunning: waiting for controlled pods measurement... 
I1124 11:22:00.121502 16020 wait_for_controlled_pods.go:319] WaitForControlledPodsRunning: running 0, deleted 500, timeout: 0, failed: 0 I1124 11:22:00.121548 16020 wait_for_controlled_pods.go:324] WaitForControlledPodsRunning: maxDuration=5.00297544s, operationTimeout=15m0s, ratio=0.01 I1124 11:22:00.121566 16020 wait_for_controlled_pods.go:338] WaitForControlledPodsRunning: 0/0 Deployments are running with all pods I1124 11:22:00.121592 16020 simple_test_executor.go:171] Step "[step: 18] Waiting for latency pods to be deleted" ended I1124 11:22:00.121614 16020 simple_test_executor.go:149] Step "[step: 19] Collecting pod startup latency" started I1124 11:22:00.121653 16020 pod_startup_latency.go:226] PodStartupLatency: labelSelector(group = latency): gathering pod startup latency measurement... I1124 11:22:00.525076 16020 phase_latency.go:141] PodStartupLatency: 100 worst run_to_watch latencies: [{test-0vgsu3-1/latency-deployment-99-58c55f9-khvtv 1.406963037s} {test-0vgsu3-1/latency-deployment-280-69f75dc9cf-nfllr 1.407677142s} {test-0vgsu3-1/latency-deployment-238-7fb5c5d7cc-zqp9k 1.415904369s} {test-0vgsu3-1/latency-deployment-399-6594b9cfdf-tqmgw 1.418583791s} {test-0vgsu3-1/latency-deployment-170-6b5d45fb47-krlql 1.418776262s} {test-0vgsu3-1/latency-deployment-445-5799bcdd7c-lrgzx 1.419779745s} {test-0vgsu3-1/latency-deployment-223-66ff7578b5-k5p2k 1.425172203s} {test-0vgsu3-1/latency-deployment-453-7d644dcf8c-ztlpc 1.425365421s} {test-0vgsu3-1/latency-deployment-435-7448d885dc-7fb58 1.426024972s} {test-0vgsu3-1/latency-deployment-404-57fff9dd87-2t7wf 1.426884046s} {test-0vgsu3-1/latency-deployment-409-7b59577585-vzqb5 1.431560112s} {test-0vgsu3-1/latency-deployment-310-6874655447-wwzrv 1.432544755s} {test-0vgsu3-1/latency-deployment-258-75cf4c56bf-zsjjp 1.434691203s} {test-0vgsu3-1/latency-deployment-360-54bf874f97-lgg7m 1.442532792s} {test-0vgsu3-1/latency-deployment-289-7578bf5445-hwxs6 1.442844709s} {test-0vgsu3-1/latency-deployment-41-864fcfc9d7-6h6zb 
1.44353747s} {test-0vgsu3-1/latency-deployment-499-7c8bc8bd8c-vgwr2 1.445326871s} {test-0vgsu3-1/latency-deployment-350-7848bf455-ptsbc 1.447103579s} {test-0vgsu3-1/latency-deployment-255-684fccdc4c-cxvnq 1.447565752s} {test-0vgsu3-1/latency-deployment-396-54f5647f47-ckxpl 1.448998922s} {test-0vgsu3-1/latency-deployment-293-64789fdbf9-88ddx 1.449135145s} {test-0vgsu3-1/latency-deployment-250-6bc8986cdf-6nd6z 1.455455766s} {test-0vgsu3-1/latency-deployment-318-6858fc6679-8k2s5 1.457108217s} {test-0vgsu3-1/latency-deployment-220-75fdbbccf9-vcs4g 1.458725424s} {test-0vgsu3-1/latency-deployment-111-787cdb5497-qq59n 1.459101142s} {test-0vgsu3-1/latency-deployment-353-67c7d8dbf-k7vgx 1.468106843s} {test-0vgsu3-1/latency-deployment-470-7b57cd8d9c-b25c8 1.471515535s} {test-0vgsu3-1/latency-deployment-14-8577448d87-dc5s2 1.472916637s} {test-0vgsu3-1/latency-deployment-184-76b5d7586f-ldtsk 1.482424244s} {test-0vgsu3-1/latency-deployment-226-db7fcddc9-4jz5f 1.48668512s} {test-0vgsu3-1/latency-deployment-235-546789fb59-pb9l8 1.486757302s} {test-0vgsu3-1/latency-deployment-495-5788dc8b99-7zlcl 1.487730192s} {test-0vgsu3-1/latency-deployment-10-cddb4d6d7-k4m8m 1.495434816s} {test-0vgsu3-1/latency-deployment-488-6b4856dfcf-7hwrs 1.50103464s} {test-0vgsu3-1/latency-deployment-351-9dbb78cdf-b89pn 1.502173453s} {test-0vgsu3-1/latency-deployment-380-79f465484f-9lfcp 1.504167855s} {test-0vgsu3-1/latency-deployment-395-5f7dcc8497-2g6v6 1.512527793s} {test-0vgsu3-1/latency-deployment-62-8474966d55-42cnw 1.512972166s} {test-0vgsu3-1/latency-deployment-324-6dd8c6ddb9-dqm2q 1.51478345s} {test-0vgsu3-1/latency-deployment-449-ccb887857-7hn9l 1.518957937s} {test-0vgsu3-1/latency-deployment-151-6775f4b54f-6vp2l 1.526994644s} {test-0vgsu3-1/latency-deployment-446-5d54877575-wj8wv 1.529421613s} {test-0vgsu3-1/latency-deployment-224-6f664957f9-5zzsk 1.530918702s} {test-0vgsu3-1/latency-deployment-419-d4dd89659-t85hd 1.544962273s} {test-0vgsu3-1/latency-deployment-363-6f4cc7445f-t2hwm 
1.552682939s} {test-0vgsu3-1/latency-deployment-493-5498db6f59-j4mdf 1.559264921s} {test-0vgsu3-1/latency-deployment-126-8959455b5-8k6jh 1.560416424s} {test-0vgsu3-1/latency-deployment-260-76589b78c9-cwv6g 1.56319463s} {test-0vgsu3-1/latency-deployment-319-6c78bcb66f-8hqt5 1.566981927s} {test-0vgsu3-1/latency-deployment-485-76c54579c7-7r8wk 1.568829925s} {test-0vgsu3-1/latency-deployment-494-7dbdd7bbf-zncxm 1.572410643s} {test-0vgsu3-1/latency-deployment-219-6f8d7b69f9-g2xcn 1.575927959s} {test-0vgsu3-1/latency-deployment-101-7bfd77dc59-m94sv 1.578610944s} {test-0vgsu3-1/latency-deployment-425-7f8cf5d98f-7f5w2 1.586247382s} {test-0vgsu3-1/latency-deployment-245-5456dd46c7-p8rmx 1.589405292s} {test-0vgsu3-1/latency-deployment-405-c84f8d55c-mbxdz 1.594717437s} {test-0vgsu3-1/latency-deployment-369-84f6946b55-qfl7t 1.622716321s} {test-0vgsu3-1/latency-deployment-169-967f4dfb7-gfcgm 1.62327075s} {test-0vgsu3-1/latency-deployment-455-688bfbfbcf-nsgwq 1.626041756s} {test-0vgsu3-1/latency-deployment-155-74f8cc674f-cc4hw 1.649011106s} {test-0vgsu3-1/latency-deployment-210-6659cc6f77-7bsqv 1.655702788s} {test-0vgsu3-1/latency-deployment-30-cdfb9895f-5mclc 1.655749709s} {test-0vgsu3-1/latency-deployment-61-658fd58d75-plgpp 1.656988471s} {test-0vgsu3-1/latency-deployment-275-79d4fd5f55-xhwvt 1.665436931s} {test-0vgsu3-1/latency-deployment-96-5d858df67-2x5hz 1.673684054s} {test-0vgsu3-1/latency-deployment-370-5b6c96c947-hvvjc 1.684875849s} {test-0vgsu3-1/latency-deployment-464-766864479-rssnb 1.690056252s} {test-0vgsu3-1/latency-deployment-384-846b98d5c-pqlcd 1.690199093s} {test-0vgsu3-1/latency-deployment-315-6dd9dcd8d9-qrtf5 1.70960387s} {test-0vgsu3-1/latency-deployment-334-6c5897cf6c-vznvs 1.714843945s} {test-0vgsu3-1/latency-deployment-486-f46bb465f-gpb7j 1.7318124s} {test-0vgsu3-1/latency-deployment-305-765dddc9c5-ggs8c 1.736079543s} {test-0vgsu3-1/latency-deployment-70-6775487947-99s5q 1.766182537s} {test-0vgsu3-1/latency-deployment-375-786d4ffccc-t8hr8 1.771489043s} 
{test-0vgsu3-1/latency-deployment-251-7b887dcd6c-d9qpn 1.79635645s} {test-0vgsu3-1/latency-deployment-365-66f54cc447-kjdsl 1.809351567s} {test-0vgsu3-1/latency-deployment-21-58cd6786f9-lmdsr 1.817995557s} {test-0vgsu3-1/latency-deployment-246-6fb8b9fd79-6g5qr 1.81803473s} {test-0vgsu3-1/latency-deployment-410-6f6ccd4769-km999 1.818783269s} {test-0vgsu3-1/latency-deployment-1-6779775c85-84n5n 1.821624162s} {test-0vgsu3-1/latency-deployment-474-598bc58d5c-4bjfz 1.823086505s} {test-0vgsu3-1/latency-deployment-475-6b6c44874c-2s8bg 1.823428419s} {test-0vgsu3-1/latency-deployment-460-687df8c967-bj8df 1.823581425s} {test-0vgsu3-1/latency-deployment-91-df97f97c5-s5tjl 1.829427443s} {test-0vgsu3-1/latency-deployment-131-6f54bb85-2524b 1.83990479s} {test-0vgsu3-1/latency-deployment-156-55f7c9b7c5-qk2gw 1.843530207s} {test-0vgsu3-1/latency-deployment-121-597846c575-n25jg 1.849202119s} {test-0vgsu3-1/latency-deployment-116-b85f7f6ff-5s6lb 1.881406891s} {test-0vgsu3-1/latency-deployment-42-7d7d5f7bb7-8l4dd 1.887934776s} {test-0vgsu3-1/latency-deployment-228-689b7d54c9-f8xd2 1.902137789s} {test-0vgsu3-1/latency-deployment-161-648f7f57f-9g9xg 1.909022375s} {test-0vgsu3-1/latency-deployment-386-fc6f7f585-fzjqv 1.909048896s} {test-0vgsu3-1/latency-deployment-51-6bc4b99b7-2ml42 1.910058673s} {test-0vgsu3-1/latency-deployment-361-5f7495cb5f-z5l2j 1.928783591s} {test-0vgsu3-1/latency-deployment-176-786f7bd4c5-shvt9 1.934463303s} {test-0vgsu3-1/latency-deployment-81-76d6c97475-4rlhm 1.941223576s} {test-0vgsu3-1/latency-deployment-11-5d46758d7-qnmkc 1.945616213s} {test-0vgsu3-1/latency-deployment-256-598ff5d545-t7mps 1.959740077s} {test-0vgsu3-1/latency-deployment-106-5968dc8bdf-df62w 1.962770805s} {test-0vgsu3-1/latency-deployment-276-686564d685-4sb6q 1.98893693s}] ... skipping 582 lines ... 
I1124 11:22:42.007185 16020 wait_for_pods.go:111] WaitForControlledPodsRunning: namespace(test-0vgsu3-1), controlledBy(small-deployment-235): Pods: 6 out of 6 created, 6 running (6 updated), 0 pending scheduled, 0 not scheduled, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1124 11:22:42.031353 16020 wait_for_pods.go:111] WaitForControlledPodsRunning: namespace(test-0vgsu3-1), controlledBy(small-deployment-153): Pods: 3 out of 3 created, 3 running (3 updated), 0 pending scheduled, 0 not scheduled, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1124 11:22:42.094818 16020 wait_for_pods.go:111] WaitForControlledPodsRunning: namespace(test-0vgsu3-1), controlledBy(big-job-0): Pods: 249 out of 249 created, 249 running (249 updated), 0 pending scheduled, 0 not scheduled, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1124 11:22:42.221987 16020 wait_for_pods.go:111] WaitForControlledPodsRunning: namespace(test-0vgsu3-1), controlledBy(small-deployment-179): Pods: 3 out of 3 created, 3 running (3 updated), 0 pending scheduled, 0 not scheduled, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1124 11:22:42.445324 16020 wait_for_pods.go:111] WaitForControlledPodsRunning: namespace(test-0vgsu3-1), controlledBy(small-deployment-9): Pods: 2 out of 2 created, 2 running (2 updated), 0 pending scheduled, 0 not scheduled, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1124 11:22:42.445650 16020 wait_for_pods.go:111] WaitForControlledPodsRunning: namespace(test-0vgsu3-1), controlledBy(small-deployment-256): Pods: 4 out of 4 created, 4 running (4 updated), 0 pending scheduled, 0 not scheduled, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1124 11:22:42.487302 16020 wait_for_controlled_pods.go:319] WaitForControlledPodsRunning: running 0, deleted 0, timeout: 0, failed: 0 I1124 11:22:42.487367 16020 wait_for_controlled_pods.go:324] WaitForControlledPodsRunning: maxDuration=0s, operationTimeout=15m0s, ratio=0.00 
I1124 11:22:42.487384 16020 wait_for_controlled_pods.go:338] WaitForControlledPodsRunning: 0/0 StatefulSets are running with all pods I1124 11:22:42.487393 16020 wait_for_controlled_pods.go:319] WaitForControlledPodsRunning: running 3, deleted 0, timeout: 0, failed: 0 I1124 11:22:42.487420 16020 wait_for_controlled_pods.go:324] WaitForControlledPodsRunning: maxDuration=5.006786109s, operationTimeout=15m0s, ratio=0.01 I1124 11:22:42.487438 16020 wait_for_controlled_pods.go:338] WaitForControlledPodsRunning: 3/3 Jobs are running with all pods I1124 11:22:42.675484 16020 wait_for_controlled_pods.go:319] WaitForControlledPodsRunning: running 325, deleted 0, timeout: 0, failed: 0 I1124 11:22:42.675538 16020 wait_for_controlled_pods.go:324] WaitForControlledPodsRunning: maxDuration=5.010741218s, operationTimeout=15m0s, ratio=0.01 I1124 11:22:42.675554 16020 wait_for_controlled_pods.go:338] WaitForControlledPodsRunning: 325/325 Deployments are running with all pods I1124 11:22:46.809309 16020 wait_for_pods.go:111] WaitForControlledPodsRunning: namespace(test-0vgsu3-1), controlledBy(daemonset-0): Pods: 98 out of 100 created, 91 running (91 updated), 7 pending scheduled, 0 not scheduled, 0 inactive, 2 terminating, 0 unknown, 0 runningButNotReady I1124 11:22:51.812004 16020 wait_for_pods.go:111] WaitForControlledPodsRunning: namespace(test-0vgsu3-1), controlledBy(daemonset-0): Pods: 100 out of 100 created, 100 running (100 updated), 0 pending scheduled, 0 not scheduled, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1124 11:22:51.812096 16020 wait_for_controlled_pods.go:319] WaitForControlledPodsRunning: running 1, deleted 0, timeout: 0, failed: 0 I1124 11:22:51.812109 16020 wait_for_controlled_pods.go:324] WaitForControlledPodsRunning: maxDuration=30.120616218s, operationTimeout=15m0s, ratio=0.03 I1124 11:22:51.812125 16020 wait_for_controlled_pods.go:338] WaitForControlledPodsRunning: 1/1 DaemonSets are running with all pods I1124 11:22:51.812141 16020 
simple_test_executor.go:171] Step "[step: 22] Waiting for 'scale and update objects' to be completed" ended
I1124 11:22:51.812162 16020 simple_test_executor.go:149] Step "[step: 23] Starting measurement for 'delete objects'" started
I1124 11:22:51.812206 16020 wait_for_controlled_pods.go:212] WaitForControlledPodsRunning: wait for controlled pods measurement already running
I1124 11:22:51.812216 16020 wait_for_controlled_pods.go:212] WaitForControlledPodsRunning: wait for controlled pods measurement already running
I1124 11:22:51.812225 16020 wait_for_controlled_pods.go:212] WaitForControlledPodsRunning: wait for controlled pods measurement already running
I1124 11:22:51.812232 16020 wait_for_controlled_pods.go:212] WaitForControlledPodsRunning: wait for controlled pods measurement already running
I1124 11:22:51.812237 16020 simple_test_executor.go:171] Step "[step: 23] Starting measurement for 'delete objects'" ended
I1124 11:22:51.812244 16020 simple_test_executor.go:149] Step "[step: 24] delete objects" started

panic: runtime error: index out of range [0] with length 0

goroutine 9636 [running]:
k8s.io/perf-tests/clusterloader2/pkg/test.(*simpleExecutor).ExecutePhase(0x2fa6af0, {0x20d1080, 0xc00003e180}, 0xc0009dde00)
	/home/prow/go/src/k8s.io/perf-tests/clusterloader2/pkg/test/simple_test_executor.go:225 +0x1105
k8s.io/perf-tests/clusterloader2/pkg/test.(*simpleExecutor).ExecuteStep.func2()
	/home/prow/go/src/k8s.io/perf-tests/clusterloader2/pkg/test/simple_test_executor.go:166 +0x32
... skipping 21 lines ...
Specify --start=76820 in the next get-serial-port-output invocation to get only the new output starting from here.
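The panic above is the actual test failure: ExecutePhase at simple_test_executor.go:225 indexes element [0] of a slice that is empty at runtime. Given this PR's title ("no statefulsets"), a plausible reading is that removing the StatefulSet objects leaves a phase with zero object templates, and an unguarded `[0]` access then panics. Below is a minimal, self-contained sketch of that failure mode and its guard; `phase`, `objects`, and the function names are hypothetical illustrations, not the real clusterloader2 code.

```go
package main

import "fmt"

// phase stands in for a clusterloader2 test phase holding object templates
// (hypothetical struct for illustration only).
type phase struct {
	objects []string
}

// firstObjectUnsafe reproduces the crash pattern: indexing [0] on a
// possibly-empty slice panics with "index out of range [0] with length 0".
func firstObjectUnsafe(p phase) string {
	return p.objects[0]
}

// firstObjectSafe checks the length first and reports whether an object exists.
func firstObjectSafe(p phase) (string, bool) {
	if len(p.objects) == 0 {
		return "", false
	}
	return p.objects[0], true
}

func main() {
	empty := phase{} // e.g. a phase whose only objects were removed by config

	if _, ok := firstObjectSafe(empty); !ok {
		fmt.Println("no objects in phase; skipping")
	}

	// Demonstrate the unguarded access producing the same runtime error
	// as the log above, recovered so the program exits cleanly.
	defer func() {
		if r := recover(); r != nil {
			fmt.Println("recovered:", r)
		}
	}()
	_ = firstObjectUnsafe(empty)
}
```

The fix pattern is the length guard in `firstObjectSafe`: skip (or explicitly error on) phases that declare no objects rather than assuming at least one exists.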
scp: /var/log/cluster-autoscaler.log*: No such file or directory
scp: /var/log/fluentd.log*: No such file or directory
scp: /var/log/kubelet.cov*: No such file or directory
scp: /var/log/cl2-**: No such file or directory
scp: /var/log/startupscript.log*: No such file or directory
ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
Dumping logs from nodes to GCS directly at 'gs://sig-scalability-logs/pull-perf-tests-clusterloader2/1595734616138846208' using logexporter
namespace/logexporter created
secret/google-service-account created
daemonset.apps/logexporter created
Listing marker files (gs://sig-scalability-logs/pull-perf-tests-clusterloader2/1595734616138846208/logexported-nodes-registry) for successful nodes...
CommandException: One or more URLs matched no objects.
... skipping 411 lines ...
Property "users.k8s-presubmit-scale-33_e2e-2208-62db2-basic-auth" unset.
Property "contexts.k8s-presubmit-scale-33_e2e-2208-62db2" unset.
Cleared config for k8s-presubmit-scale-33_e2e-2208-62db2 from /workspace/.kube/config
Done
2022/11/24 11:35:24 process.go:155: Step './hack/e2e-internal/e2e-down.sh' finished in 9m24.028562537s
2022/11/24 11:35:24 process.go:96: Saved XML output to /logs/artifacts/junit_runner.xml.
2022/11/24 11:35:24 main.go:328: Something went wrong: encountered 1 errors: [error during /home/prow/go/src/k8s.io/perf-tests/run-e2e.sh cluster-loader2 --nodes=100 --prometheus-scrape-node-exporter --provider=gce --report-dir=/logs/artifacts --testconfig=testing/load/config.yaml --testconfig=testing/huge-service/config.yaml --testconfig=testing/access-tokens/config.yaml --testoverrides=./testing/experiments/enable_restart_count_check.yaml --testoverrides=./testing/experiments/use_simple_latency_query.yaml --testoverrides=./testing/overrides/load_throughput.yaml: exit status 2]
Traceback (most recent call last):
  File "/workspace/scenarios/kubernetes_e2e.py", line 723, in <module>
    main(parse_args())
  File "/workspace/scenarios/kubernetes_e2e.py", line 569, in main
    mode.start(runner_args)
  File "/workspace/scenarios/kubernetes_e2e.py", line 228, in start
... skipping 9 lines ...