PR | pacoxu: default memoryThrottlingFactor to 0.9 and optimize the memory.high formulas
Result | ABORTED
Tests | 0 failed / 71 succeeded
Started |
Elapsed | 52m49s
Revision | e34bacce0c3fc243db4d6c2ec0a68a39480551d3
Refs | 115371
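The PR under test changes the Memory QoS defaults from KEP-2570 (memoryThrottlingFactor, cgroup v2 `memory.high`). A minimal sketch of the interpolated `memory.high` formula that KEP describes, assuming `memory.high = floor[(requests + factor * (limit_or_allocatable - requests)) / pageSize] * pageSize`; the function name and page size are illustrative, not the kubelet's actual code:

```python
PAGE_SIZE = 4096  # bytes; illustrative page size used for alignment

def memory_high(requests_bytes, limit_or_allocatable_bytes, throttling_factor=0.9):
    """Sketch of the KEP-2570 memory.high interpolation: start throttling
    somewhere between the container's request and its limit (or the node's
    allocatable memory when no limit is set), aligned down to a page."""
    raw = requests_bytes + throttling_factor * (limit_or_allocatable_bytes - requests_bytes)
    return int(raw // PAGE_SIZE) * PAGE_SIZE
```

With `throttling_factor=0.9` (the new default in this PR) and no request, throttling begins at 90% of the limit; when request equals limit, `memory.high` collapses to the limit itself.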
ClusterLoaderV2 huge-service overall (testing/huge-service/config.yaml)
ClusterLoaderV2 huge-service: [step: 01] starting measurements [00] - APIResponsivenessPrometheus
ClusterLoaderV2 huge-service: [step: 01] starting measurements [01] - APIResponsivenessPrometheusSimple
ClusterLoaderV2 huge-service: [step: 01] starting measurements [02] - TestMetrics
ClusterLoaderV2 huge-service: [step: 01] starting measurements [03] - InClusterNetworkLatency
ClusterLoaderV2 huge-service: [step: 02] Create huge-service
ClusterLoaderV2 huge-service: [step: 03] Creating huge-service measurements [00] - WaitForHugeServiceDeployments
ClusterLoaderV2 huge-service: [step: 04] Creating huge-service pods
ClusterLoaderV2 huge-service: [step: 05] Waiting for huge-service pods to be created [00] - WaitForHugeServiceDeployments
ClusterLoaderV2 huge-service: [step: 06] Updating huge-service pods
ClusterLoaderV2 huge-service: [step: 07] Waiting for huge-service pods to be updated [00] - WaitForHugeServiceDeployments
ClusterLoaderV2 huge-service: [step: 08] Deleting huge-service pods
ClusterLoaderV2 huge-service: [step: 09] Waiting for huge-service pods to be deleted [00] - WaitForHugeServiceDeployments
ClusterLoaderV2 huge-service: [step: 10] Delete huge-service
ClusterLoaderV2 huge-service: [step: 11] gathering measurements [00] - APIResponsivenessPrometheus
ClusterLoaderV2 huge-service: [step: 11] gathering measurements [01] - APIResponsivenessPrometheusSimple
ClusterLoaderV2 huge-service: [step: 11] gathering measurements [02] - TestMetrics
ClusterLoaderV2 huge-service: [step: 11] gathering measurements [03] - InClusterNetworkLatency
ClusterLoaderV2 load overall (testing/load/config.yaml)
ClusterLoaderV2 load: [step: 01] starting measurements [00] - APIResponsivenessPrometheus
ClusterLoaderV2 load: [step: 01] starting measurements [01] - APIResponsivenessPrometheusSimple
ClusterLoaderV2 load: [step: 01] starting measurements [02] - CreatePhasePodStartupLatency
ClusterLoaderV2 load: [step: 01] starting measurements [03] - InClusterNetworkLatency
ClusterLoaderV2 load: [step: 01] starting measurements [04] - SLOMeasurement
ClusterLoaderV2 load: [step: 01] starting measurements [05] - NetworkProgrammingLatency
ClusterLoaderV2 load: [step: 01] starting measurements [06] - Kube-proxy partial iptables restore failures
ClusterLoaderV2 load: [step: 01] starting measurements [07] - APIAvailability
ClusterLoaderV2 load: [step: 01] starting measurements [08] - Quotas total usage
ClusterLoaderV2 load: [step: 01] starting measurements [09] - TestMetrics
ClusterLoaderV2 load: [step: 02] Creating k8s services
ClusterLoaderV2 load: [step: 03] Creating PriorityClass for DaemonSets
ClusterLoaderV2 load: [step: 04] create objects configmaps and secrets
ClusterLoaderV2 load: [step: 05] Starting measurement for 'create objects' [00] -
ClusterLoaderV2 load: [step: 06] create objects
ClusterLoaderV2 load: [step: 07] Waiting for 'create objects' to be completed [00] -
ClusterLoaderV2 load: [step: 08] Creating scheduler throughput measurements [00] - HighThroughputPodStartupLatency
ClusterLoaderV2 load: [step: 08] Creating scheduler throughput measurements [01] - WaitForSchedulerThroughputDeployments
ClusterLoaderV2 load: [step: 08] Creating scheduler throughput measurements [02] - SchedulingThroughput
ClusterLoaderV2 load: [step: 09] create scheduler throughput pods
ClusterLoaderV2 load: [step: 10] Waiting for scheduler throughput pods to be created [00] - WaitForSchedulerThroughputDeployments
ClusterLoaderV2 load: [step: 11] Collecting scheduler throughput measurements [00] - HighThroughputPodStartupLatency
ClusterLoaderV2 load: [step: 11] Collecting scheduler throughput measurements [01] - SchedulingThroughput
ClusterLoaderV2 load: [step: 12] delete scheduler throughput pods
ClusterLoaderV2 load: [step: 13] Waiting for scheduler throughput pods to be deleted [00] - WaitForSchedulerThroughputDeployments
ClusterLoaderV2 load: [step: 14] Starting latency pod measurements [00] - PodStartupLatency
ClusterLoaderV2 load: [step: 14] Starting latency pod measurements [01] - WaitForRunningLatencyDeployments
ClusterLoaderV2 load: [step: 15] Creating latency pods
ClusterLoaderV2 load: [step: 16] Waiting for latency pods to be running [00] - WaitForRunningLatencyDeployments
ClusterLoaderV2 load: [step: 17] Deleting latency pods
ClusterLoaderV2 load: [step: 18] Waiting for latency pods to be deleted [00] - WaitForRunningLatencyDeployments
ClusterLoaderV2 load: [step: 19] Collecting pod startup latency [00] - PodStartupLatency
ClusterLoaderV2 load: [step: 20] Starting measurement for 'scale and update objects' [00] -
ClusterLoaderV2 load: [step: 21] scale and update objects
ClusterLoaderV2 load: [step: 22] Waiting for 'scale and update objects' to be completed [00] -
ClusterLoaderV2 load: [step: 23] Starting measurement for 'delete objects' [00] -
ClusterLoaderV2 load: [step: 24] delete objects
ClusterLoaderV2 load: [step: 25] Waiting for 'delete objects' to be completed [00] -
ClusterLoaderV2 load: [step: 25] Waiting for 'delete objects' to be completed [01] - WaitForPVCsToBeDeleted
ClusterLoaderV2 load: [step: 26] delete objects configmaps and secrets
ClusterLoaderV2 load: [step: 27] Deleting PriorityClass for DaemonSets
ClusterLoaderV2 load: [step: 28] Deleting k8s services
ClusterLoaderV2 load: [step: 29] gathering measurements [00] - APIResponsivenessPrometheus
ClusterLoaderV2 load: [step: 29] gathering measurements [01] - APIResponsivenessPrometheusSimple
ClusterLoaderV2 load: [step: 29] gathering measurements [02] - CreatePhasePodStartupLatency
ClusterLoaderV2 load: [step: 29] gathering measurements [03] - InClusterNetworkLatency
ClusterLoaderV2 load: [step: 29] gathering measurements [04] - SLOMeasurement
ClusterLoaderV2 load: [step: 29] gathering measurements [05] - NetworkProgrammingLatency
ClusterLoaderV2 load: [step: 29] gathering measurements [06] - Kube-proxy partial iptables restore failures
ClusterLoaderV2 load: [step: 29] gathering measurements [07] - APIAvailability
ClusterLoaderV2 load: [step: 29] gathering measurements [08] - Quotas total usage
ClusterLoaderV2 load: [step: 29] gathering measurements [09] - TestMetrics
... skipping 694 lines ...
Looking for address 'e2e-115371-95a39-master-ip'
Looking for address 'e2e-115371-95a39-master-internal-ip'
Using master: e2e-115371-95a39-master (external IP: 34.148.87.105; internal IP: 10.40.0.2)
Waiting up to 300 seconds for cluster initialization.
This will continually check to see if the API for kubernetes is reachable.
This may time out if there was some uncaught error during start up.
Kubernetes cluster created.
Cluster "k8s-infra-e2e-boskos-scale-14_e2e-115371-95a39" set.
User "k8s-infra-e2e-boskos-scale-14_e2e-115371-95a39" set.
Context "k8s-infra-e2e-boskos-scale-14_e2e-115371-95a39" created.
Switched to context "k8s-infra-e2e-boskos-scale-14_e2e-115371-95a39".
... skipping 238 lines ...
e2e-115371-95a39-minion-group-zc32   Ready    <none>   52s   v1.27.0-alpha.1.71+a8a34de0d577be
e2e-115371-95a39-minion-group-zfxt   Ready    <none>   52s   v1.27.0-alpha.1.71+a8a34de0d577be
e2e-115371-95a39-minion-heapster     Ready    <none>   66s   v1.27.0-alpha.1.71+a8a34de0d577be
Warning: v1 ComponentStatus is deprecated in v1.19+
Validate output:
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME                 STATUS    MESSAGE                         ERROR
etcd-1               Healthy   {"health":"true","reason":""}
etcd-0               Healthy   {"health":"true","reason":""}
scheduler            Healthy   ok
controller-manager   Healthy   ok
Cluster validation encountered some problems, but cluster should be in working order
...ignoring non-fatal errors in validate-cluster
Done, listing cluster services:
Kubernetes control plane is running at https://34.148.87.105
GLBCDefaultBackend is running at https://34.148.87.105/api/v1/namespaces/kube-system/services/default-http-backend:http/proxy
CoreDNS is running at https://34.148.87.105/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
Metrics-server is running at https://34.148.87.105/api/v1/namespaces/kube-system/services/https:metrics-server:/proxy
... skipping 1119 lines ...
I0128 10:14:40.523277 90281 wait_for_pods.go:122] WaitForControlledPodsRunning: namespace(test-dmn73d-1), controlledBy(small-deployment-241): Pods: 5 out of 5 created, 5 running (5 updated), 0 pending scheduled, 0 not scheduled, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0128 10:14:40.577472 90281 wait_for_pods.go:122] WaitForControlledPodsRunning: namespace(test-dmn73d-1), controlledBy(small-deployment-174): Pods: 5 out of 5 created, 5 running (5 updated), 0 pending scheduled, 0 not scheduled, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0128 10:14:41.004311 90281 wait_for_pods.go:122] WaitForControlledPodsRunning: namespace(test-dmn73d-1), controlledBy(small-deployment-97): Pods: 5 out of 5 created, 5 running (5 updated), 0 pending scheduled, 0 not scheduled, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0128 10:14:41.658181 90281 wait_for_pods.go:122] WaitForControlledPodsRunning: namespace(test-dmn73d-1), controlledBy(small-deployment-155): Pods: 5 out of 5 created, 5 running (5 updated), 0 pending scheduled, 0 not scheduled, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0128 10:14:41.916660 90281 wait_for_pods.go:122] WaitForControlledPodsRunning: namespace(test-dmn73d-1), controlledBy(small-deployment-242): Pods: 5 out of 5 created, 5 running (5 updated), 0 pending scheduled, 0 not scheduled, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0128 10:14:42.333460 90281 wait_for_pods.go:122] WaitForControlledPodsRunning: namespace(test-dmn73d-1), controlledBy(small-deployment-25): Pods: 5 out of 5 created, 5 running (5 updated), 0 pending scheduled, 0 not scheduled, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0128 10:14:42.371137 90281 wait_for_controlled_pods.go:365] WaitForControlledPodsRunning: running 2, deleted 0, timeout: 0, failed: 0
I0128 10:14:42.371168 90281 wait_for_controlled_pods.go:370] WaitForControlledPodsRunning: maxDuration=45.008795578s, operationTimeout=15m0s, ratio=0.05
I0128 10:14:42.371184 90281 wait_for_controlled_pods.go:384] WaitForControlledPodsRunning: 2/2 StatefulSets are running with all pods
I0128 10:14:42.371330 90281 wait_for_controlled_pods.go:365] WaitForControlledPodsRunning: running 1, deleted 0, timeout: 0, failed: 0
I0128 10:14:42.371362 90281 wait_for_controlled_pods.go:370] WaitForControlledPodsRunning: maxDuration=5.002952101s, operationTimeout=15m0s, ratio=0.01
I0128 10:14:42.371378 90281 wait_for_controlled_pods.go:384] WaitForControlledPodsRunning: 1/1 DaemonSets are running with all pods
I0128 10:14:42.372579 90281 wait_for_controlled_pods.go:365] WaitForControlledPodsRunning: running 3, deleted 0, timeout: 0, failed: 0
I0128 10:14:42.372599 90281 wait_for_controlled_pods.go:370] WaitForControlledPodsRunning: maxDuration=5.005256005s, operationTimeout=15m0s, ratio=0.01
I0128 10:14:42.372611 90281 wait_for_controlled_pods.go:384] WaitForControlledPodsRunning: 3/3 Jobs are running with all pods
I0128 10:14:42.547497 90281 wait_for_controlled_pods.go:365] WaitForControlledPodsRunning: running 323, deleted 0, timeout: 0, failed: 0
I0128 10:14:42.547525 90281 wait_for_controlled_pods.go:370] WaitForControlledPodsRunning: maxDuration=10.001700742s, operationTimeout=15m0s, ratio=0.01
I0128 10:14:42.547540 90281 wait_for_controlled_pods.go:384] WaitForControlledPodsRunning: 323/323 Deployments are running with all pods
I0128 10:14:42.547560 90281 simple_test_executor.go:171] Step "[step: 07] Waiting for 'create objects' to be completed" ended
I0128 10:14:42.547578 90281 simple_test_executor.go:149] Step "[step: 08] Creating scheduler throughput measurements" started
I0128 10:14:42.547678 90281 wait_for_controlled_pods.go:257] WaitForControlledPodsRunning: starting wait for controlled pods measurement...
I0128 10:14:42.547738 90281 pod_startup_latency.go:132] PodStartupLatency: labelSelector(group = scheduler-throughput): starting pod startup latency measurement...
... skipping 4 lines ...
I0128 10:14:42.689864 90281 simple_test_executor.go:171] Step "[step: 09] create scheduler throughput pods" ended
I0128 10:14:42.689894 90281 simple_test_executor.go:149] Step "[step: 10] Waiting for scheduler throughput pods to be created" started
I0128 10:14:42.689936 90281 wait_for_controlled_pods.go:288] WaitForControlledPodsRunning: waiting for controlled pods measurement...
I0128 10:14:47.698994 90281 wait_for_pods.go:122] WaitForControlledPodsRunning: namespace(test-dmn73d-2), controlledBy(scheduler-throughput-deployment-0): Pods: 572 out of 1000 created, 420 running (420 updated), 151 pending scheduled, 1 not scheduled, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0128 10:14:52.717509 90281 wait_for_pods.go:122] WaitForControlledPodsRunning: namespace(test-dmn73d-2), controlledBy(scheduler-throughput-deployment-0): Pods: 1000 out of 1000 created, 926 running (926 updated), 74 pending scheduled, 0 not scheduled, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0128 10:14:57.758942 90281 wait_for_pods.go:122] WaitForControlledPodsRunning: namespace(test-dmn73d-2), controlledBy(scheduler-throughput-deployment-0): Pods: 1000 out of 1000 created, 1000 running (1000 updated), 0 pending scheduled, 0 not scheduled, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0128 10:14:57.758998 90281 wait_for_controlled_pods.go:365] WaitForControlledPodsRunning: running 1, deleted 0, timeout: 0, failed: 0
I0128 10:14:57.759010 90281 wait_for_controlled_pods.go:370] WaitForControlledPodsRunning: maxDuration=15.069524048s, operationTimeout=20m0s, ratio=0.01
I0128 10:14:57.759025 90281 wait_for_controlled_pods.go:384] WaitForControlledPodsRunning: 1/1 Deployments are running with all pods
I0128 10:14:57.759047 90281 simple_test_executor.go:171] Step "[step: 10] Waiting for scheduler throughput pods to be created" ended
I0128 10:14:57.759067 90281 simple_test_executor.go:149] Step "[step: 11] Collecting scheduler throughput measurements" started
I0128 10:14:57.759110 90281 scheduling_throughput.go:154] SchedulingThroughput: gathering data
I0128 10:14:57.759141 90281 pod_startup_latency.go:226] PodStartupLatency: labelSelector(group = scheduler-throughput): gathering pod startup latency measurement...
... skipping 33 lines ...
I0128 10:14:58.161771 90281 simple_test_executor.go:149] Step "[step: 13] Waiting for scheduler throughput pods to be deleted" started
I0128 10:14:58.161795 90281 wait_for_controlled_pods.go:288] WaitForControlledPodsRunning: waiting for controlled pods measurement...
I0128 10:14:58.161998 90281 wait_for_pods.go:64] WaitForControlledPodsRunning: namespace(test-dmn73d-2), controlledBy(scheduler-throughput-deployment-0): starting with timeout: 19m59.999963267s
I0128 10:15:03.173432 90281 wait_for_pods.go:122] WaitForControlledPodsRunning: namespace(test-dmn73d-2), controlledBy(scheduler-throughput-deployment-0): Pods: 458 out of 0 created, 458 running (458 updated), 0 pending scheduled, 0 not scheduled, 0 inactive, 85 terminating, 0 unknown, 0 runningButNotReady
I0128 10:15:08.174895 90281 wait_for_pods.go:122] WaitForControlledPodsRunning: namespace(test-dmn73d-2), controlledBy(scheduler-throughput-deployment-0): Pods: 0 out of 0 created, 0 running (0 updated), 0 pending scheduled, 0 not scheduled, 0 inactive, 40 terminating, 0 unknown, 0 runningButNotReady
I0128 10:15:13.175955 90281 wait_for_pods.go:122] WaitForControlledPodsRunning: namespace(test-dmn73d-2), controlledBy(scheduler-throughput-deployment-0): Pods: 0 out of 0 created, 0 running (0 updated), 0 pending scheduled, 0 not scheduled, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0128 10:15:13.176003 90281 wait_for_controlled_pods.go:365] WaitForControlledPodsRunning: running 0, deleted 1, timeout: 0, failed: 0
I0128 10:15:13.176021 90281 wait_for_controlled_pods.go:370] WaitForControlledPodsRunning: maxDuration=15.013986058s, operationTimeout=20m0s, ratio=0.01
I0128 10:15:13.176041 90281 wait_for_controlled_pods.go:384] WaitForControlledPodsRunning: 0/0 Deployments are running with all pods
I0128 10:15:13.176067 90281 simple_test_executor.go:171] Step "[step: 13] Waiting for scheduler throughput pods to be deleted" ended
I0128 10:15:13.176095 90281 simple_test_executor.go:149] Step "[step: 14] Starting latency pod measurements" started
I0128 10:15:13.176162 90281 wait_for_controlled_pods.go:257] WaitForControlledPodsRunning: starting wait for controlled pods measurement...
I0128 10:15:13.176265 90281 pod_startup_latency.go:132] PodStartupLatency: labelSelector(group = latency): starting pod startup latency measurement...
... skipping 999 lines ...
I0128 10:16:57.396294 90281 wait_for_pods.go:122] WaitForControlledPodsRunning: namespace(test-dmn73d-1), controlledBy(latency-deployment-494): Pods: 1 out of 1 created, 1 running (1 updated), 0 pending scheduled, 0 not scheduled, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0128 10:16:57.598683 90281 wait_for_pods.go:122] WaitForControlledPodsRunning: namespace(test-dmn73d-1), controlledBy(latency-deployment-495): Pods: 1 out of 1 created, 1 running (1 updated), 0 pending scheduled, 0 not scheduled, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0128 10:16:57.799951 90281 wait_for_pods.go:122] WaitForControlledPodsRunning: namespace(test-dmn73d-1), controlledBy(latency-deployment-496): Pods: 1 out of 1 created, 1 running (1 updated), 0 pending scheduled, 0 not scheduled, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0128 10:16:57.999490 90281 wait_for_pods.go:122] WaitForControlledPodsRunning: namespace(test-dmn73d-1), controlledBy(latency-deployment-497): Pods: 1 out of 1 created, 1 running (1 updated), 0 pending scheduled, 0 not scheduled, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0128 10:16:58.202017 90281 wait_for_pods.go:122] WaitForControlledPodsRunning: namespace(test-dmn73d-1), controlledBy(latency-deployment-498): Pods: 1 out of 1 created, 1 running (1 updated), 0 pending scheduled, 0 not scheduled, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0128 10:16:58.401140 90281 wait_for_pods.go:122] WaitForControlledPodsRunning: namespace(test-dmn73d-1), controlledBy(latency-deployment-499): Pods: 1 out of 1 created, 1 running (1 updated), 0 pending scheduled, 0 not scheduled, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0128 10:16:58.780076 90281 wait_for_controlled_pods.go:365] WaitForControlledPodsRunning: running 500, deleted 0, timeout: 0, failed: 0
I0128 10:16:58.780109 90281 wait_for_controlled_pods.go:370] WaitForControlledPodsRunning: maxDuration=5.001516276s, operationTimeout=15m0s, ratio=0.01
I0128 10:16:58.780127 90281 wait_for_controlled_pods.go:384] WaitForControlledPodsRunning: 500/500 Deployments are running with all pods
I0128 10:16:58.780142 90281 simple_test_executor.go:171] Step "[step: 16] Waiting for latency pods to be running" ended
I0128 10:16:58.780160 90281 simple_test_executor.go:149] Step "[step: 17] Deleting latency pods" started
I0128 10:16:58.827427 90281 wait_for_pods.go:64] WaitForControlledPodsRunning: namespace(test-dmn73d-1), controlledBy(latency-deployment-0): starting with timeout: 14m59.999978064s
I0128 10:16:58.919189 90281 wait_for_pods.go:64] WaitForControlledPodsRunning: namespace(test-dmn73d-1), controlledBy(latency-deployment-1): starting with timeout: 14m59.99997991s
... skipping 495 lines ...
I0128 10:17:48.516755 90281 wait_for_pods.go:64] WaitForControlledPodsRunning: namespace(test-dmn73d-1), controlledBy(latency-deployment-497): starting with timeout: 14m59.999979002s
I0128 10:17:48.617609 90281 wait_for_pods.go:64] WaitForControlledPodsRunning: namespace(test-dmn73d-1), controlledBy(latency-deployment-498): starting with timeout: 14m59.999985333s
I0128 10:17:48.717308 90281 simple_test_executor.go:171] Step "[step: 17] Deleting latency pods" ended
I0128 10:17:48.717343 90281 simple_test_executor.go:149] Step "[step: 18] Waiting for latency pods to be deleted" started
I0128 10:17:48.717376 90281 wait_for_controlled_pods.go:288] WaitForControlledPodsRunning: waiting for controlled pods measurement...
I0128 10:17:48.717472 90281 wait_for_pods.go:64] WaitForControlledPodsRunning: namespace(test-dmn73d-1), controlledBy(latency-deployment-499): starting with timeout: 14m59.999978254s
I0128 10:17:53.763387 90281 wait_for_controlled_pods.go:365] WaitForControlledPodsRunning: running 0, deleted 500, timeout: 0, failed: 0
I0128 10:17:53.763424 90281 wait_for_controlled_pods.go:370] WaitForControlledPodsRunning: maxDuration=5.001170112s, operationTimeout=15m0s, ratio=0.01
I0128 10:17:53.763440 90281 wait_for_controlled_pods.go:384] WaitForControlledPodsRunning: 0/0 Deployments are running with all pods
I0128 10:17:53.763465 90281 simple_test_executor.go:171] Step "[step: 18] Waiting for latency pods to be deleted" ended
I0128 10:17:53.763483 90281 simple_test_executor.go:149] Step "[step: 19] Collecting pod startup latency" started
I0128 10:17:53.763511 90281 pod_startup_latency.go:226] PodStartupLatency: labelSelector(group = latency): gathering pod startup latency measurement...
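Each `wait_for_controlled_pods.go:370` entry above reports a `ratio` alongside `maxDuration` and `operationTimeout`. Judging from the logged values (e.g. 45.008795578s against a 15m timeout giving 0.05), the ratio appears to be the slowest observed wait divided by the allowed timeout, printed to two decimals; the sketch below reproduces that arithmetic and is an assumption from the log, not clusterloader2's actual code:

```python
def timeout_ratio(max_duration_s: float, operation_timeout_s: float) -> float:
    # Fraction of the operation timeout consumed by the slowest wait,
    # rounded to two decimals to match the log's "ratio=" field.
    return round(max_duration_s / operation_timeout_s, 2)

# Values from the log: 45.008795578s / 15m0s -> 0.05,
# 15.069524048s / 20m0s -> 0.01
```

A low ratio (here at most 0.05) means every controlled-pods wait finished well inside its timeout.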
I0128 10:17:54.249911 90281 phase_latency.go:141] PodStartupLatency: 100 worst schedule_to_watch latencies: [{test-dmn73d-1/latency-deployment-211-5c848b5c59-4zffl 2.350008749s} {test-dmn73d-1/latency-deployment-357-b8cb7f69-wsxjk 2.353569193s} {test-dmn73d-1/latency-deployment-106-5968dc8bdf-5jprt 2.354565792s} {test-dmn73d-1/latency-deployment-399-6594b9cfdf-g2wb4 2.356721885s} {test-dmn73d-1/latency-deployment-387-758bcb6d79-4lgkg 2.356910831s} {test-dmn73d-1/latency-deployment-6-58ff4cdb9f-7z4zr 2.375107759s} {test-dmn73d-1/latency-deployment-472-58d99d6cf7-s29xn 2.378965096s} {test-dmn73d-1/latency-deployment-301-5c75dbfc4c-hmt5l 2.381095576s} {test-dmn73d-1/latency-deployment-447-54d959f84f-vnsl2 2.38221125s} {test-dmn73d-1/latency-deployment-67-75f744f959-m9dg2 2.382240581s} {test-dmn73d-1/latency-deployment-401-7d4f474f5c-t6rj8 2.38562989s} {test-dmn73d-1/latency-deployment-121-597846c575-4mdb2 2.389615759s} {test-dmn73d-1/latency-deployment-215-765f6b6d5f-95swd 2.391041475s} {test-dmn73d-1/latency-deployment-7-547655c6df-swcgf 2.39579885s} {test-dmn73d-1/latency-deployment-232-6c796dbb85-f6fft 2.397807504s} {test-dmn73d-1/latency-deployment-157-596cf84d6f-dzf8n 2.399755754s} {test-dmn73d-1/latency-deployment-172-69cc877955-wkkwl 2.404574684s} {test-dmn73d-1/latency-deployment-68-7496dcd4b7-55pnc 2.411600389s} {test-dmn73d-1/latency-deployment-285-9b75b8cbf-fgvhm 2.411841417s} {test-dmn73d-1/latency-deployment-390-8874dc7dc-tdz9j 2.415459385s} {test-dmn73d-1/latency-deployment-402-5c7b8b5b7-mzhn6 2.416623902s} {test-dmn73d-1/latency-deployment-430-75776cc7d9-nlwvj 2.428745892s} {test-dmn73d-1/latency-deployment-490-6cd947dbf7-ld2f8 2.430168386s} {test-dmn73d-1/latency-deployment-331-678885d6b9-bnxbw 2.431972518s} {test-dmn73d-1/latency-deployment-335-59dc6855cc-6tnwp 2.432508676s} {test-dmn73d-1/latency-deployment-321-b7dbb577f-4cmsd 2.435118345s} {test-dmn73d-1/latency-deployment-46-7d744868dc-r96j7 2.438237114s} 
{test-dmn73d-1/latency-deployment-36-7588ddd7c9-g2dcl 2.439514673s} {test-dmn73d-1/latency-deployment-161-648f7f57f-cpcch 2.442471768s} {test-dmn73d-1/latency-deployment-446-5d54877575-zkpkl 2.445804216s} {test-dmn73d-1/latency-deployment-8-6f5594d46c-w7wvx 2.446267935s} {test-dmn73d-1/latency-deployment-406-988f8fbc9-7v9p7 2.446780507s} {test-dmn73d-1/latency-deployment-117-746947766f-wqmb8 2.448396495s} {test-dmn73d-1/latency-deployment-212-648556c585-645df 2.455046148s} {test-dmn73d-1/latency-deployment-98-59c486dbd7-9n7d4 2.460598194s} {test-dmn73d-1/latency-deployment-126-8959455b5-2hnvg 2.462394337s} {test-dmn73d-1/latency-deployment-51-6bc4b99b7-ndrrw 2.465737903s} {test-dmn73d-1/latency-deployment-167-684456d6ff-gpchb 2.47487018s} {test-dmn73d-1/latency-deployment-276-686564d685-v8459 2.48111974s} {test-dmn73d-1/latency-deployment-261-d95f45c89-6gf7g 2.481746244s} {test-dmn73d-1/latency-deployment-137-9b5ddc789-hxmrr 2.484313787s} {test-dmn73d-1/latency-deployment-367-564cbcc98c-rxn6w 2.485064277s} {test-dmn73d-1/latency-deployment-491-5d69b44c79-6zjk6 2.485950049s} {test-dmn73d-1/latency-deployment-102-5f59bb5b69-6d4vm 2.494334706s} {test-dmn73d-1/latency-deployment-88-78bfbb578c-gb7gp 2.507744662s} {test-dmn73d-1/latency-deployment-111-787cdb5497-r4z6z 2.510758611s} {test-dmn73d-1/latency-deployment-451-7668f97995-8hvrf 2.514821237s} {test-dmn73d-1/latency-deployment-372-6cb4dd6f59-qdf78 2.518070332s} {test-dmn73d-1/latency-deployment-342-d747b7969-rdnn8 2.518861095s} {test-dmn73d-1/latency-deployment-287-7974bfd769-d2fss 2.519291256s} {test-dmn73d-1/latency-deployment-435-7448d885dc-vxpjw 2.523266988s} {test-dmn73d-1/latency-deployment-171-6c5d4df88f-sbs29 2.527088108s} {test-dmn73d-1/latency-deployment-247-6dc56454f-g278t 2.539572198s} {test-dmn73d-1/latency-deployment-18-548ddc9d95-jjn8p 2.545056459s} {test-dmn73d-1/latency-deployment-476-7d648bd7d9-v82m2 2.545933149s} {test-dmn73d-1/latency-deployment-222-767c47f75-ktvl4 2.546732559s} 
{test-dmn73d-1/latency-deployment-242-d89584495-tngwv 2.555399153s} {test-dmn73d-1/latency-deployment-202-796d4b5b9f-pp754 2.568629875s} {test-dmn73d-1/latency-deployment-267-658c49b79c-fsk5l 2.572064783s} {test-dmn73d-1/latency-deployment-296-5696f9c5fc-7bx62 2.577720591s} {test-dmn73d-1/latency-deployment-441-689c74b6f7-j5glg 2.58358501s} {test-dmn73d-1/latency-deployment-33-fbc646dc9-m7fdn 2.585829361s} {test-dmn73d-1/latency-deployment-58-7787f774c-pbzf8 2.593217488s} {test-dmn73d-1/latency-deployment-332-5bdcffc54c-w4wqt 2.607096672s} {test-dmn73d-1/latency-deployment-417-7b67bf6fdc-kcj8v 2.608867726s} {test-dmn73d-1/latency-deployment-336-7878f555c5-tv8wd 2.610058652s} {test-dmn73d-1/latency-deployment-322-847cb64bcc-qlcrc 2.610218368s} {test-dmn73d-1/latency-deployment-411-6c4f8674c5-bqqx2 2.610991534s} {test-dmn73d-1/latency-deployment-307-78d67b4c99-sjxgk 2.619843691s} {test-dmn73d-1/latency-deployment-377-74574dff7c-8fsn4 2.62251005s} {test-dmn73d-1/latency-deployment-42-7d7d5f7bb7-2wsl8 2.643180497s} {test-dmn73d-1/latency-deployment-52-5d97b5bb55-nc475 2.648703916s} {test-dmn73d-1/latency-deployment-306-5f865b4689-7zvl6 2.649030119s} {test-dmn73d-1/latency-deployment-257-5fdb84c779-bmqr6 2.653621748s} {test-dmn73d-1/latency-deployment-122-98b696fcc-r6b7j 2.653856013s} {test-dmn73d-1/latency-deployment-416-76f8b4ffc7-rdjwz 2.658414299s} {test-dmn73d-1/latency-deployment-341-bbd9cd767-c7cfv 2.660507876s} {test-dmn73d-1/latency-deployment-182-5bdd56f88f-wzcjh 2.682685651s} {test-dmn73d-1/latency-deployment-112-84f8b5fb47-dnmg4 2.686949892s} {test-dmn73d-1/latency-deployment-142-7fd897fb5c-2tj4g 2.702566288s} {test-dmn73d-1/latency-deployment-427-75ff7b6477-7xqgm 2.708746418s} {test-dmn73d-1/latency-deployment-436-b8977748f-8dbxd 2.723996474s} {test-dmn73d-1/latency-deployment-437-78fbcdd699-trlbp 2.742633643s} {test-dmn73d-1/latency-deployment-83-75f4c648c9-9mvj8 2.752559602s} {test-dmn73d-1/latency-deployment-471-9cd6784f5-zbl5p 2.758295926s} 
{test-dmn73d-1/latency-deployment-43-7dfc7b65f-nv552 2.778280102s} {test-dmn73d-1/latency-deployment-63-77bc4dc967-hsstd 2.793659013s} {test-dmn73d-1/latency-deployment-162-6fd598d69-xcsm5 2.794254904s} {test-dmn73d-1/latency-deployment-282-566cc9d647-qpmrm 2.799548336s} {test-dmn73d-1/latency-deployment-53-6694d96cc5-ttz4h 2.8286614s} {test-dmn73d-1/latency-deployment-352-59fffddd99-dnbpd 2.828742754s} {test-dmn73d-1/latency-deployment-397-599d7c898f-c7hwz 2.831634998s} {test-dmn73d-1/latency-deployment-78-5cc65ff47f-972fj 2.835663608s} {test-dmn73d-1/latency-deployment-407-b649cb997-vbqjp 2.876003149s} {test-dmn73d-1/latency-deployment-118-8565958587-h2dtz 2.880949991s} {test-dmn73d-1/latency-deployment-128-847db8f58f-l4zs8 2.908539339s} {test-dmn73d-1/latency-deployment-103-64d96c87cf-xfjrf 2.919344708s} {test-dmn73d-1/latency-deployment-2-66bd84bb95-p6f4r 2.928398061s} {test-dmn73d-1/latency-deployment-442-574d85ff4c-n5s9x 2.935971472s} {test-dmn73d-1/latency-deployment-113-549874bf7-nvtbk 2.9413728s}] ... skipping 330 lines ...
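The `phase_latency.go:141` record above prints the 100 worst schedule_to_watch latencies in ascending order. A sketch of how such a top-k selection might be assembled from per-pod samples (function name and tuple shape are hypothetical; the real measurement code tracks per-phase transition times):

```python
import heapq

def worst_latencies(samples, k=100):
    """Return the k largest (pod_name, latency_seconds) samples,
    sorted ascending by latency as in the log's 'worst' list."""
    top_k = heapq.nlargest(k, samples, key=lambda s: s[1])
    return sorted(top_k, key=lambda s: s[1])
```

Using a heap keeps the selection O(n log k) rather than sorting all n samples, which matters when a load test produces tens of thousands of pod records.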