Recent runs | View in Spyglass
PR | mborsz: Add missing /metrics rbac for metrics-server
Result | ABORTED
Tests | 0 failed / 71 succeeded
Started |
Elapsed | 51m16s
Revision | 6063f3ffe12836e81fdb21711d89fab490a08601
Refs | 118122
ClusterLoaderV2 huge-service overall (testing/huge-service/config.yaml)
ClusterLoaderV2 huge-service: [step: 01] starting measurements [00] - APIResponsivenessPrometheus
ClusterLoaderV2 huge-service: [step: 01] starting measurements [01] - APIResponsivenessPrometheusSimple
ClusterLoaderV2 huge-service: [step: 01] starting measurements [02] - TestMetrics
ClusterLoaderV2 huge-service: [step: 01] starting measurements [03] - InClusterNetworkLatency
ClusterLoaderV2 huge-service: [step: 02] Create huge-service
ClusterLoaderV2 huge-service: [step: 03] Creating huge-service measurements [00] - WaitForHugeServiceDeployments
ClusterLoaderV2 huge-service: [step: 04] Creating huge-service pods
ClusterLoaderV2 huge-service: [step: 05] Waiting for huge-service pods to be created [00] - WaitForHugeServiceDeployments
ClusterLoaderV2 huge-service: [step: 06] Updating huge-service pods
ClusterLoaderV2 huge-service: [step: 07] Waiting for huge-service pods to be updated [00] - WaitForHugeServiceDeployments
ClusterLoaderV2 huge-service: [step: 08] Deleting huge-service pods
ClusterLoaderV2 huge-service: [step: 09] Waiting for huge-service pods to be deleted [00] - WaitForHugeServiceDeployments
ClusterLoaderV2 huge-service: [step: 10] Delete huge-service
ClusterLoaderV2 huge-service: [step: 11] gathering measurements [00] - APIResponsivenessPrometheus
ClusterLoaderV2 huge-service: [step: 11] gathering measurements [01] - APIResponsivenessPrometheusSimple
ClusterLoaderV2 huge-service: [step: 11] gathering measurements [02] - TestMetrics
ClusterLoaderV2 huge-service: [step: 11] gathering measurements [03] - InClusterNetworkLatency
ClusterLoaderV2 load overall (testing/load/config.yaml)
ClusterLoaderV2 load: [step: 01] starting measurements [00] - APIResponsivenessPrometheus
ClusterLoaderV2 load: [step: 01] starting measurements [01] - APIResponsivenessPrometheusSimple
ClusterLoaderV2 load: [step: 01] starting measurements [02] - CreatePhasePodStartupLatency
ClusterLoaderV2 load: [step: 01] starting measurements [03] - InClusterNetworkLatency
ClusterLoaderV2 load: [step: 01] starting measurements [04] - SLOMeasurement
ClusterLoaderV2 load: [step: 01] starting measurements [05] - NetworkProgrammingLatency
ClusterLoaderV2 load: [step: 01] starting measurements [06] - Kube-proxy partial iptables restore failures
ClusterLoaderV2 load: [step: 01] starting measurements [07] - APIAvailability
ClusterLoaderV2 load: [step: 01] starting measurements [08] - Quotas total usage
ClusterLoaderV2 load: [step: 01] starting measurements [09] - TestMetrics
ClusterLoaderV2 load: [step: 02] Creating k8s services
ClusterLoaderV2 load: [step: 03] Creating PriorityClass for DaemonSets
ClusterLoaderV2 load: [step: 04] create objects configmaps and secrets
ClusterLoaderV2 load: [step: 05] Starting measurement for 'create objects' [00] -
ClusterLoaderV2 load: [step: 06] create objects
ClusterLoaderV2 load: [step: 07] Waiting for 'create objects' to be completed [00] -
ClusterLoaderV2 load: [step: 08] Creating scheduler throughput measurements [00] - HighThroughputPodStartupLatency
ClusterLoaderV2 load: [step: 08] Creating scheduler throughput measurements [01] - WaitForSchedulerThroughputDeployments
ClusterLoaderV2 load: [step: 08] Creating scheduler throughput measurements [02] - SchedulingThroughput
ClusterLoaderV2 load: [step: 09] create scheduler throughput pods
ClusterLoaderV2 load: [step: 10] Waiting for scheduler throughput pods to be created [00] - WaitForSchedulerThroughputDeployments
ClusterLoaderV2 load: [step: 11] Collecting scheduler throughput measurements [00] - HighThroughputPodStartupLatency
ClusterLoaderV2 load: [step: 11] Collecting scheduler throughput measurements [01] - SchedulingThroughput
ClusterLoaderV2 load: [step: 12] delete scheduler throughput pods
ClusterLoaderV2 load: [step: 13] Waiting for scheduler throughput pods to be deleted [00] - WaitForSchedulerThroughputDeployments
ClusterLoaderV2 load: [step: 14] Starting latency pod measurements [00] - PodStartupLatency
ClusterLoaderV2 load: [step: 14] Starting latency pod measurements [01] - WaitForRunningLatencyDeployments
ClusterLoaderV2 load: [step: 15] Creating latency pods
ClusterLoaderV2 load: [step: 16] Waiting for latency pods to be running [00] - WaitForRunningLatencyDeployments
ClusterLoaderV2 load: [step: 17] Deleting latency pods
ClusterLoaderV2 load: [step: 18] Waiting for latency pods to be deleted [00] - WaitForRunningLatencyDeployments
ClusterLoaderV2 load: [step: 19] Collecting pod startup latency [00] - PodStartupLatency
ClusterLoaderV2 load: [step: 20] Starting measurement for 'scale and update objects' [00] -
ClusterLoaderV2 load: [step: 21] scale and update objects
ClusterLoaderV2 load: [step: 22] Waiting for 'scale and update objects' to be completed [00] -
ClusterLoaderV2 load: [step: 23] Starting measurement for 'delete objects' [00] -
ClusterLoaderV2 load: [step: 24] delete objects
ClusterLoaderV2 load: [step: 25] Waiting for 'delete objects' to be completed [00] -
ClusterLoaderV2 load: [step: 25] Waiting for 'delete objects' to be completed [01] - WaitForPVCsToBeDeleted
ClusterLoaderV2 load: [step: 26] delete objects configmaps and secrets
ClusterLoaderV2 load: [step: 27] Deleting PriorityClass for DaemonSets
ClusterLoaderV2 load: [step: 28] Deleting k8s services
ClusterLoaderV2 load: [step: 29] gathering measurements [00] - APIResponsivenessPrometheus
ClusterLoaderV2 load: [step: 29] gathering measurements [01] - APIResponsivenessPrometheusSimple
ClusterLoaderV2 load: [step: 29] gathering measurements [02] - CreatePhasePodStartupLatency
ClusterLoaderV2 load: [step: 29] gathering measurements [03] - InClusterNetworkLatency
ClusterLoaderV2 load: [step: 29] gathering measurements [04] - SLOMeasurement
ClusterLoaderV2 load: [step: 29] gathering measurements [05] - NetworkProgrammingLatency
ClusterLoaderV2 load: [step: 29] gathering measurements [06] - Kube-proxy partial iptables restore failures
ClusterLoaderV2 load: [step: 29] gathering measurements [07] - APIAvailability
ClusterLoaderV2 load: [step: 29] gathering measurements [08] - Quotas total usage
ClusterLoaderV2 load: [step: 29] gathering measurements [09] - TestMetrics
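Each numbered step above corresponds to an entry in the scenario's config.yaml (testing/load/config.yaml); a measurement step is typically declared along these lines. This is an illustrative fragment of the ClusterLoaderV2 step shape, not copied from the actual config:

```yaml
# Illustrative ClusterLoaderV2 step, not the real testing/load/config.yaml:
steps:
- name: Starting measurements
  measurements:
  - Identifier: APIResponsivenessPrometheus
    Method: APIResponsivenessPrometheus
    Params:
      action: start
```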
... skipping 720 lines ...
Looking for address 'e2e-118122-95a39-master-ip'
Looking for address 'e2e-118122-95a39-master-internal-ip'
Using master: e2e-118122-95a39-master (external IP: 34.23.65.245; internal IP: 10.40.0.2)
Waiting up to 300 seconds for cluster initialization.
This will continually check to see if the API for kubernetes is reachable.
This may time out if there was some uncaught error during start up.
Kubernetes cluster created.
Using image: cos-97-16919-294-15 from project: cos-cloud as master image
Using image: cos-97-16919-294-15 from project: cos-cloud as node image
Cluster "k8s-infra-e2e-boskos-scale-04_e2e-118122-95a39" set.
Using image: cos-97-16919-294-15 from project: cos-cloud as master image
... skipping 264 lines ...
Using image: cos-97-16919-294-15 from project: cos-cloud as node image
Warning: v1 ComponentStatus is deprecated in v1.19+
Validate output:
Using image: cos-97-16919-294-15 from project: cos-cloud as master image
Using image: cos-97-16919-294-15 from project: cos-cloud as node image
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME                 STATUS    MESSAGE                         ERROR
etcd-1               Healthy   {"health":"true","reason":""}
etcd-0               Healthy   {"health":"true","reason":""}
scheduler            Healthy   ok
controller-manager   Healthy   ok
Cluster validation encountered some problems, but cluster should be in working order
...ignoring non-fatal errors in validate-cluster
Done, listing cluster services:
Using image: cos-97-16919-294-15 from project: cos-cloud as master image
Using image: cos-97-16919-294-15 from project: cos-cloud as node image
Kubernetes control plane is running at https://34.23.65.245
GLBCDefaultBackend is running at https://34.23.65.245/api/v1/namespaces/kube-system/services/default-http-backend:http/proxy
... skipping 1122 lines ...
I0519 08:34:44.165116 86553 wait_for_pods.go:122] WaitForControlledPodsRunning: namespace(test-hvn7yb-1), controlledBy(small-deployment-198): Pods: 5 out of 5 created, 5 running (5 updated), 0 pending scheduled, 0 not scheduled, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0519 08:34:44.402802 86553 wait_for_pods.go:122] WaitForControlledPodsRunning: namespace(test-hvn7yb-1), controlledBy(small-deployment-92): Pods: 5 out of 5 created, 5 running (5 updated), 0 pending scheduled, 0 not scheduled, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0519 08:34:44.659935 86553 wait_for_pods.go:122] WaitForControlledPodsRunning: namespace(test-hvn7yb-1), controlledBy(small-deployment-68): Pods: 5 out of 5 created, 5 running (5 updated), 0 pending scheduled, 0 not scheduled, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0519 08:34:44.861167 86553 wait_for_pods.go:122] WaitForControlledPodsRunning: namespace(test-hvn7yb-1), controlledBy(small-deployment-295): Pods: 5 out of 5 created, 5 running (5 updated), 0 pending scheduled, 0 not scheduled, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0519 08:34:45.129637 86553 wait_for_pods.go:122] WaitForControlledPodsRunning: namespace(test-hvn7yb-1), controlledBy(small-deployment-159): Pods: 5 out of 5 created, 5 running (5 updated), 0 pending scheduled, 0 not scheduled, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0519 08:34:45.292244 86553 wait_for_pods.go:122] WaitForControlledPodsRunning: namespace(test-hvn7yb-1), controlledBy(small-deployment-58): Pods: 5 out of 5 created, 5 running (5 updated), 0 pending scheduled, 0 not scheduled, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0519 08:34:45.429973 86553 wait_for_controlled_pods.go:365] WaitForControlledPodsRunning: running 1, deleted 0, timeout: 0, failed: 0
I0519 08:34:45.430023 86553 wait_for_controlled_pods.go:370] WaitForControlledPodsRunning: maxDuration=5.002424788s, operationTimeout=15m0s, ratio=0.01
I0519 08:34:45.430050 86553 wait_for_controlled_pods.go:384] WaitForControlledPodsRunning: 1/1 DaemonSets are running with all pods
I0519 08:34:45.429982 86553 wait_for_controlled_pods.go:365] WaitForControlledPodsRunning: running 3, deleted 0, timeout: 0, failed: 0
I0519 08:34:45.430071 86553 wait_for_controlled_pods.go:370] WaitForControlledPodsRunning: maxDuration=5.004729129s, operationTimeout=15m0s, ratio=0.01
I0519 08:34:45.430084 86553 wait_for_controlled_pods.go:384] WaitForControlledPodsRunning: 3/3 Jobs are running with all pods
I0519 08:34:45.623838 86553 wait_for_controlled_pods.go:365] WaitForControlledPodsRunning: running 323, deleted 0, timeout: 0, failed: 0
I0519 08:34:45.623874 86553 wait_for_controlled_pods.go:370] WaitForControlledPodsRunning: maxDuration=10.004129443s, operationTimeout=15m0s, ratio=0.01
I0519 08:34:45.623893 86553 wait_for_controlled_pods.go:384] WaitForControlledPodsRunning: 323/323 Deployments are running with all pods
I0519 08:34:54.515711 86553 wait_for_pods.go:122] WaitForControlledPodsRunning: namespace(test-hvn7yb-1), controlledBy(small-statefulset-0): Pods: 5 out of 5 created, 5 running (5 updated), 0 pending scheduled, 0 not scheduled, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0519 08:34:54.515760 86553 wait_for_controlled_pods.go:365] WaitForControlledPodsRunning: running 2, deleted 0, timeout: 0, failed: 0
I0519 08:34:54.515779 86553 wait_for_controlled_pods.go:370] WaitForControlledPodsRunning: maxDuration=45.008968659s, operationTimeout=15m0s, ratio=0.05
I0519 08:34:54.515794 86553 wait_for_controlled_pods.go:384] WaitForControlledPodsRunning: 2/2 StatefulSets are running with all pods
I0519 08:34:54.515812 86553 simple_test_executor.go:171] Step "[step: 07] Waiting for 'create objects' to be completed" ended
I0519 08:34:54.515835 86553 simple_test_executor.go:149] Step "[step: 08] Creating scheduler throughput measurements" started
I0519 08:34:54.548444 86553 pod_startup_latency.go:132] PodStartupLatency: labelSelector(group = scheduler-throughput): starting pod startup latency measurement...
I0519 08:34:54.548776 86553 wait_for_controlled_pods.go:257] WaitForControlledPodsRunning: starting wait for controlled pods measurement...
... skipping 3 lines ...
I0519 08:34:54.688749 86553 wait_for_pods.go:64] WaitForControlledPodsRunning: namespace(test-hvn7yb-2), controlledBy(scheduler-throughput-deployment-0): starting with timeout: 19m59.999984358s
I0519 08:34:54.689182 86553 simple_test_executor.go:171] Step "[step: 09] create scheduler throughput pods" ended
I0519 08:34:54.689209 86553 simple_test_executor.go:149] Step "[step: 10] Waiting for scheduler throughput pods to be created" started
I0519 08:34:54.722062 86553 wait_for_controlled_pods.go:288] WaitForControlledPodsRunning: waiting for controlled pods measurement...
I0519 08:34:59.697017 86553 wait_for_pods.go:122] WaitForControlledPodsRunning: namespace(test-hvn7yb-2), controlledBy(scheduler-throughput-deployment-0): Pods: 578 out of 1000 created, 439 running (439 updated), 137 pending scheduled, 2 not scheduled, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0519 08:35:04.715944 86553 wait_for_pods.go:122] WaitForControlledPodsRunning: namespace(test-hvn7yb-2), controlledBy(scheduler-throughput-deployment-0): Pods: 1000 out of 1000 created, 927 running (927 updated), 73 pending scheduled, 0 not scheduled, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
error dialing prow@34.23.65.245:22: 'ssh: handshake failed: read tcp 10.33.74.5:58178->34.23.65.245:22: read: connection reset by peer', retrying
I0519 08:35:09.734128 86553 wait_for_pods.go:122] WaitForControlledPodsRunning: namespace(test-hvn7yb-2), controlledBy(scheduler-throughput-deployment-0): Pods: 1000 out of 1000 created, 1000 running (1000 updated), 0 pending scheduled, 0 not scheduled, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0519 08:35:09.734188 86553 wait_for_controlled_pods.go:365] WaitForControlledPodsRunning: running 1, deleted 0, timeout: 0, failed: 0
I0519 08:35:09.734202 86553 wait_for_controlled_pods.go:370] WaitForControlledPodsRunning: maxDuration=15.045417133s, operationTimeout=20m0s, ratio=0.01
I0519 08:35:09.734222 86553 wait_for_controlled_pods.go:384] WaitForControlledPodsRunning: 1/1 Deployments are running with all pods
I0519 08:35:09.734239 86553 simple_test_executor.go:171] Step "[step: 10] Waiting for scheduler throughput pods to be created" ended
I0519 08:35:09.734260 86553 simple_test_executor.go:149] Step "[step: 11] Collecting scheduler throughput measurements" started
I0519 08:35:09.766890 86553 pod_startup_latency.go:226] PodStartupLatency: labelSelector(group = scheduler-throughput): gathering pod startup latency measurement...
I0519 08:35:09.768400 86553 scheduling_throughput.go:154] SchedulingThroughput: gathering data
... skipping 33 lines ...
I0519 08:35:10.216139 86553 simple_test_executor.go:149] Step "[step: 13] Waiting for scheduler throughput pods to be deleted" started
I0519 08:35:10.216279 86553 wait_for_pods.go:64] WaitForControlledPodsRunning: namespace(test-hvn7yb-2), controlledBy(scheduler-throughput-deployment-0): starting with timeout: 19m59.999971671s
I0519 08:35:10.248526 86553 wait_for_controlled_pods.go:288] WaitForControlledPodsRunning: waiting for controlled pods measurement...
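A note on the `ratio` field in the WaitForControlledPodsRunning summaries above: it is consistent with `maxDuration / operationTimeout` rounded to two decimal places (e.g. maxDuration=45.008968659s against operationTimeout=15m0s gives 0.05). A minimal sketch, assuming that rounding rule; the helper name is hypothetical, not from clusterloader2:

```python
# Hypothetical re-computation of the "ratio" field from the
# WaitForControlledPodsRunning summaries. Assumption: the tool reports
# maxDuration/operationTimeout rounded to two decimal places.

def wait_ratio(max_duration_s: float, operation_timeout_s: float) -> float:
    """Fraction of the operation timeout consumed by the slowest wait."""
    return round(max_duration_s / operation_timeout_s, 2)

# Values taken from the log: maxDuration=45.008968659s, operationTimeout=15m0s
print(wait_ratio(45.008968659, 15 * 60))  # 0.05, matching ratio=0.05 above
```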
I0519 08:35:15.225711 86553 wait_for_pods.go:122] WaitForControlledPodsRunning: namespace(test-hvn7yb-2), controlledBy(scheduler-throughput-deployment-0): Pods: 456 out of 0 created, 456 running (456 updated), 0 pending scheduled, 0 not scheduled, 0 inactive, 90 terminating, 0 unknown, 0 runningButNotReady
I0519 08:35:20.226562 86553 wait_for_pods.go:122] WaitForControlledPodsRunning: namespace(test-hvn7yb-2), controlledBy(scheduler-throughput-deployment-0): Pods: 0 out of 0 created, 0 running (0 updated), 0 pending scheduled, 0 not scheduled, 0 inactive, 40 terminating, 0 unknown, 0 runningButNotReady
I0519 08:35:25.226809 86553 wait_for_pods.go:122] WaitForControlledPodsRunning: namespace(test-hvn7yb-2), controlledBy(scheduler-throughput-deployment-0): Pods: 0 out of 0 created, 0 running (0 updated), 0 pending scheduled, 0 not scheduled, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0519 08:35:25.226874 86553 wait_for_controlled_pods.go:365] WaitForControlledPodsRunning: running 0, deleted 1, timeout: 0, failed: 0
I0519 08:35:25.226894 86553 wait_for_controlled_pods.go:370] WaitForControlledPodsRunning: maxDuration=15.010566849s, operationTimeout=20m0s, ratio=0.01
I0519 08:35:25.226932 86553 wait_for_controlled_pods.go:384] WaitForControlledPodsRunning: 0/0 Deployments are running with all pods
I0519 08:35:25.226960 86553 simple_test_executor.go:171] Step "[step: 13] Waiting for scheduler throughput pods to be deleted" ended
I0519 08:35:25.226987 86553 simple_test_executor.go:149] Step "[step: 14] Starting latency pod measurements" started
I0519 08:35:25.259874 86553 wait_for_controlled_pods.go:257] WaitForControlledPodsRunning: starting wait for controlled pods measurement...
I0519 08:35:25.261056 86553 pod_startup_latency.go:132] PodStartupLatency: labelSelector(group = latency): starting pod startup latency measurement...
... skipping 999 lines ...
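The recurring `Pods: N out of M created, ...` summaries above follow a fixed shape, which makes them easy to pull counters out of when triaging a run. A small parser sketch; the regex and field names are assumptions based on the log format shown here, not taken from clusterloader2 itself:

```python
import re

# Parse the pod-status counters from a WaitForControlledPodsRunning log
# line of the form seen above. Field names are my own, chosen to mirror
# the log wording; this is not clusterloader2's internal representation.
STATUS_RE = re.compile(
    r"Pods: (?P<created>\d+) out of (?P<desired>\d+) created, "
    r"(?P<running>\d+) running \((?P<updated>\d+) updated\), "
    r"(?P<pending>\d+) pending scheduled, (?P<not_scheduled>\d+) not scheduled, "
    r"(?P<inactive>\d+) inactive, (?P<terminating>\d+) terminating, "
    r"(?P<unknown>\d+) unknown, (?P<not_ready>\d+) runningButNotReady"
)

def parse_pod_status(line: str) -> dict:
    """Extract the integer counters from one pod-status log line."""
    m = STATUS_RE.search(line)
    if m is None:
        raise ValueError("not a pod-status line")
    return {k: int(v) for k, v in m.groupdict().items()}

# One of the scheduler-throughput lines from this log:
line = ("Pods: 578 out of 1000 created, 439 running (439 updated), "
        "137 pending scheduled, 2 not scheduled, 0 inactive, "
        "0 terminating, 0 unknown, 0 runningButNotReady")
print(parse_pod_status(line)["pending"])  # 137
```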
I0519 08:37:09.485821 86553 wait_for_pods.go:122] WaitForControlledPodsRunning: namespace(test-hvn7yb-1), controlledBy(latency-deployment-494): Pods: 1 out of 1 created, 1 running (1 updated), 0 pending scheduled, 0 not scheduled, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0519 08:37:09.686236 86553 wait_for_pods.go:122] WaitForControlledPodsRunning: namespace(test-hvn7yb-1), controlledBy(latency-deployment-495): Pods: 1 out of 1 created, 1 running (1 updated), 0 pending scheduled, 0 not scheduled, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0519 08:37:09.886629 86553 wait_for_pods.go:122] WaitForControlledPodsRunning: namespace(test-hvn7yb-1), controlledBy(latency-deployment-496): Pods: 1 out of 1 created, 1 running (1 updated), 0 pending scheduled, 0 not scheduled, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0519 08:37:10.085435 86553 wait_for_pods.go:122] WaitForControlledPodsRunning: namespace(test-hvn7yb-1), controlledBy(latency-deployment-497): Pods: 1 out of 1 created, 1 running (1 updated), 0 pending scheduled, 0 not scheduled, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0519 08:37:10.286823 86553 wait_for_pods.go:122] WaitForControlledPodsRunning: namespace(test-hvn7yb-1), controlledBy(latency-deployment-498): Pods: 1 out of 1 created, 1 running (1 updated), 0 pending scheduled, 0 not scheduled, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0519 08:37:10.486161 86553 wait_for_pods.go:122] WaitForControlledPodsRunning: namespace(test-hvn7yb-1), controlledBy(latency-deployment-499): Pods: 1 out of 1 created, 1 running (1 updated), 0 pending scheduled, 0 not scheduled, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0519 08:37:10.986299 86553 wait_for_controlled_pods.go:365] WaitForControlledPodsRunning: running 500, deleted 0, timeout: 0, failed: 0
I0519 08:37:10.986335 86553 wait_for_controlled_pods.go:370] WaitForControlledPodsRunning: maxDuration=5.004317555s, operationTimeout=15m0s, ratio=0.01
I0519 08:37:10.986354 86553 wait_for_controlled_pods.go:384] WaitForControlledPodsRunning: 500/500 Deployments are running with all pods
I0519 08:37:10.986380 86553 simple_test_executor.go:171] Step "[step: 16] Waiting for latency pods to be running" ended
I0519 08:37:10.986399 86553 simple_test_executor.go:149] Step "[step: 17] Deleting latency pods" started
I0519 08:37:11.025430 86553 wait_for_pods.go:64] WaitForControlledPodsRunning: namespace(test-hvn7yb-1), controlledBy(latency-deployment-0): starting with timeout: 14m59.999980098s
I0519 08:37:11.123960 86553 wait_for_pods.go:64] WaitForControlledPodsRunning: namespace(test-hvn7yb-1), controlledBy(latency-deployment-1): starting with timeout: 14m59.9999783s
... skipping 495 lines ...
I0519 08:38:00.724608 86553 wait_for_pods.go:64] WaitForControlledPodsRunning: namespace(test-hvn7yb-1), controlledBy(latency-deployment-497): starting with timeout: 14m59.999971175s
I0519 08:38:00.824897 86553 wait_for_pods.go:64] WaitForControlledPodsRunning: namespace(test-hvn7yb-1), controlledBy(latency-deployment-498): starting with timeout: 14m59.999971693s
I0519 08:38:00.924566 86553 simple_test_executor.go:171] Step "[step: 17] Deleting latency pods" ended
I0519 08:38:00.924601 86553 simple_test_executor.go:149] Step "[step: 18] Waiting for latency pods to be deleted" started
I0519 08:38:00.924928 86553 wait_for_pods.go:64] WaitForControlledPodsRunning: namespace(test-hvn7yb-1), controlledBy(latency-deployment-499): starting with timeout: 14m59.999966783s
I0519 08:38:00.957652 86553 wait_for_controlled_pods.go:288] WaitForControlledPodsRunning: waiting for controlled pods measurement...
I0519 08:38:06.007205 86553 wait_for_controlled_pods.go:365] WaitForControlledPodsRunning: running 0, deleted 500, timeout: 0, failed: 0
I0519 08:38:06.007256 86553 wait_for_controlled_pods.go:370] WaitForControlledPodsRunning: maxDuration=5.001194153s, operationTimeout=15m0s, ratio=0.01
I0519 08:38:06.007276 86553 wait_for_controlled_pods.go:384] WaitForControlledPodsRunning: 0/0 Deployments are running with all pods
I0519 08:38:06.007314 86553 simple_test_executor.go:171] Step "[step: 18] Waiting for latency pods to be deleted" ended
I0519 08:38:06.007343 86553 simple_test_executor.go:149] Step "[step: 19] Collecting pod startup latency" started
I0519 08:38:06.040274 86553 pod_startup_latency.go:226] PodStartupLatency: labelSelector(group = latency): gathering pod startup latency measurement...
I0519 08:38:06.480043 86553 phase_latency.go:141] PodStartupLatency: 100 worst create_to_schedule latencies: [{test-hvn7yb-1/latency-deployment-76-6b4fd96d46-h8b6x 0s} {test-hvn7yb-1/latency-deployment-351-747d97cbfb-2js9p 0s} {test-hvn7yb-1/latency-deployment-373-56bc5fb5c6-8qb9w 0s} {test-hvn7yb-1/latency-deployment-380-66b885c78d-84dv5 0s} {test-hvn7yb-1/latency-deployment-407-6959674fd4-thckw 0s} {test-hvn7yb-1/latency-deployment-225-85544cb78d-gw9nc 0s} {test-hvn7yb-1/latency-deployment-300-554bbcff68-26kp2 0s} {test-hvn7yb-1/latency-deployment-437-5c755db54b-vllx4 0s} {test-hvn7yb-1/latency-deployment-471-58bbdb57bb-t4cm7 0s} {test-hvn7yb-1/latency-deployment-212-859f85d45b-lgftn 0s} {test-hvn7yb-1/latency-deployment-310-85897df59d-6f827 0s} {test-hvn7yb-1/latency-deployment-39-9f77c4b98-q5klv 0s} {test-hvn7yb-1/latency-deployment-119-7bf5cf745b-lcmcd 0s} {test-hvn7yb-1/latency-deployment-105-59c5bcdc5d-8g457 0s} {test-hvn7yb-1/latency-deployment-288-577845ff4-bmfn4 0s} {test-hvn7yb-1/latency-deployment-160-768965d9b4-rm4r9 0s} {test-hvn7yb-1/latency-deployment-206-64799d66c4-ctkd2 0s} {test-hvn7yb-1/latency-deployment-309-6b49f457d6-w8tzd 0s}
{test-hvn7yb-1/latency-deployment-427-789c8ddd56-pcqdd 0s} {test-hvn7yb-1/latency-deployment-15-6c4b4db8db-zzvrk 0s} {test-hvn7yb-1/latency-deployment-65-7cf65d8f7d-gtgsl 0s} {test-hvn7yb-1/latency-deployment-71-656d569c9d-5vv8t 0s} {test-hvn7yb-1/latency-deployment-283-6964d56c96-5z99n 0s} {test-hvn7yb-1/latency-deployment-259-85db9fbff4-hmkrz 0s} {test-hvn7yb-1/latency-deployment-312-7fd474b4f4-h74cp 0s} {test-hvn7yb-1/latency-deployment-342-5cfc47c9bd-frxzk 0s} {test-hvn7yb-1/latency-deployment-367-85d7d56cf4-dsksl 0s} {test-hvn7yb-1/latency-deployment-383-5ccb6c4cf4-fx8df 0s} {test-hvn7yb-1/latency-deployment-422-9d756d8cd-fc2cz 0s} {test-hvn7yb-1/latency-deployment-38-8785686f4-xmn6k 0s} {test-hvn7yb-1/latency-deployment-220-5746968794-dmclj 0s} {test-hvn7yb-1/latency-deployment-202-58b44f59f4-vczpb 0s} {test-hvn7yb-1/latency-deployment-228-6f948bdbc4-z46dv 0s} {test-hvn7yb-1/latency-deployment-255-7649659d-xgckf 0s} {test-hvn7yb-1/latency-deployment-363-78c4dfcd7d-j8txp 0s} {test-hvn7yb-1/latency-deployment-90-5588d59c5d-lnbhk 0s} {test-hvn7yb-1/latency-deployment-108-6d4c4495d8-7gpfr 0s} {test-hvn7yb-1/latency-deployment-352-9d7d88fd-flk9f 0s} {test-hvn7yb-1/latency-deployment-399-797649f644-rppp8 0s} {test-hvn7yb-1/latency-deployment-492-6df468696d-49lhr 0s} {test-hvn7yb-1/latency-deployment-81-658c6bfbbb-s7wfs 0s} {test-hvn7yb-1/latency-deployment-325-7649f5547d-6p2c5 0s} {test-hvn7yb-1/latency-deployment-72-656f67fcd-4p2d8 0s} {test-hvn7yb-1/latency-deployment-110-db5594c8b-hf9fk 0s} {test-hvn7yb-1/latency-deployment-318-564dd98b6-v8t22 0s} {test-hvn7yb-1/latency-deployment-372-588fb9cb8b-2wmln 0s} {test-hvn7yb-1/latency-deployment-7-79b74764bd-xd2wl 0s} {test-hvn7yb-1/latency-deployment-54-5bf65b5c78-hcnqh 0s} {test-hvn7yb-1/latency-deployment-246-65669f4946-kwn5b 0s} {test-hvn7yb-1/latency-deployment-265-8475555496-4phvf 0s} {test-hvn7yb-1/latency-deployment-186-77b9cb6c44-tr2fx 0s} {test-hvn7yb-1/latency-deployment-268-77c5576b66-7sj8w 0s} 
{test-hvn7yb-1/latency-deployment-338-95d589cdb-7smsd 0s} {test-hvn7yb-1/latency-deployment-346-674866d764-hx8kp 0s} {test-hvn7yb-1/latency-deployment-458-79c887f6d-m9b7t 0s} {test-hvn7yb-1/latency-deployment-61-cb5746bfd-qk9rn 0s} {test-hvn7yb-1/latency-deployment-83-5d5cd6f75d-2mnr4 0s} {test-hvn7yb-1/latency-deployment-250-57d469cf6-969x4 0s} {test-hvn7yb-1/latency-deployment-53-bc686d6c4-tfs77 0s} {test-hvn7yb-1/latency-deployment-92-745c685d74-gz5jq 0s} {test-hvn7yb-1/latency-deployment-229-8b9dccdd-f2ffz 0s} {test-hvn7yb-1/latency-deployment-353-66c44c685b-z55h4 0s} {test-hvn7yb-1/latency-deployment-490-76d5c44b84-v67nc 0s} {test-hvn7yb-1/latency-deployment-188-b5f987cb4-l42td 0s} {test-hvn7yb-1/latency-deployment-215-647d8678c4-zkn6k 0s} {test-hvn7yb-1/latency-deployment-70-759bbc97c4-j9hkp 0s} {test-hvn7yb-1/latency-deployment-260-66fdc9496b-v6lns 0s} {test-hvn7yb-1/latency-deployment-400-5cc4498c5d-v5mpb 0s} {test-hvn7yb-1/latency-deployment-57-6d949b9754-88qj5 0s} {test-hvn7yb-1/latency-deployment-127-7485b65498-449f6 0s} {test-hvn7yb-1/latency-deployment-218-7976595fd8-sbp9t 0s} {test-hvn7yb-1/latency-deployment-8-556ffd65d6-fwqq7 0s} {test-hvn7yb-1/latency-deployment-150-5d8f9cb8c8-b7bv8 0s} {test-hvn7yb-1/latency-deployment-241-55967664d8-xmn94 0s} {test-hvn7yb-1/latency-deployment-243-645dcdd6fb-zgwtb 0s} {test-hvn7yb-1/latency-deployment-63-699995b586-g245r 0s} {test-hvn7yb-1/latency-deployment-464-797f68dfc8-nxlmj 0s} {test-hvn7yb-1/latency-deployment-287-6697c54444-5wqhd 0s} {test-hvn7yb-1/latency-deployment-36-747b48bdfd-djpmr 0s} {test-hvn7yb-1/latency-deployment-163-5bb498dbc4-698bz 0s} {test-hvn7yb-1/latency-deployment-441-57c47895bd-jxbfc 0s} {test-hvn7yb-1/latency-deployment-446-8649fb47c8-rnh2r 0s} {test-hvn7yb-1/latency-deployment-238-5f8f789fb-5mdn2 0s} {test-hvn7yb-1/latency-deployment-282-859bf85b4b-8r88v 0s} {test-hvn7yb-1/latency-deployment-303-97978668b-tcfvs 0s} {test-hvn7yb-1/latency-deployment-426-74cd4f899d-xdhn5 0s} 
{test-hvn7yb-1/latency-deployment-468-77bccdf9d6-5nm6b 0s} {test-hvn7yb-1/latency-deployment-23-67bcfdb474-zctsc 0s} {test-hvn7yb-1/latency-deployment-59-66b86f7986-vm6sh 0s} {test-hvn7yb-1/latency-deployment-95-75cc454c54-vt284 0s} {test-hvn7yb-1/latency-deployment-494-764b8d7bd-8h6kn 0s} {test-hvn7yb-1/latency-deployment-106-5c7bf8b6bd-84bth 0s} {test-hvn7yb-1/latency-deployment-270-58f6bd499b-xb6qb 0s} {test-hvn7yb-1/latency-deployment-354-5dd56d6bcd-qkhbz 0s} {test-hvn7yb-1/latency-deployment-232-56cc6d9bcb-4824h 0s} {test-hvn7yb-1/latency-deployment-240-79f596487d-2k49s 0s} {test-hvn7yb-1/latency-deployment-337-696f894ffb-vfbtk 1s} {test-hvn7yb-1/latency-deployment-332-79f549c754-gsstz 1s} {test-hvn7yb-1/latency-deployment-322-d55cf5f7b-v4phj 1s} {test-hvn7yb-1/latency-deployment-327-569cfd464b-h5c8r 1s}]
... skipping 609 lines ...
I0519 08:38:48.276681 86553 wait_for_pods.go:122] WaitForControlledPodsRunning: namespace(test-hvn7yb-1), controlledBy(small-deployment-263): Pods: 3 out of 3 created, 3 running (3 updated), 0 pending scheduled, 0 not scheduled, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0519 08:38:48.327974 86553 wait_for_pods.go:122] WaitForControlledPodsRunning: namespace(test-hvn7yb-1), controlledBy(small-deployment-147): Pods: 6 out of 6 created, 6 running (6 updated), 0 pending scheduled, 0 not scheduled, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0519 08:38:48.431180 86553 wait_for_pods.go:122] WaitForControlledPodsRunning: namespace(test-hvn7yb-1), controlledBy(small-deployment-23): Pods: 4 out of 4 created, 4 running (4 updated), 0 pending scheduled, 0 not scheduled, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0519 08:38:48.490525 86553 wait_for_pods.go:122] WaitForControlledPodsRunning: namespace(test-hvn7yb-1), controlledBy(medium-deployment-20): Pods: 44 out of 44 created, 44 running (44 updated), 0 pending scheduled, 0 not scheduled, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0519 08:38:48.492927 86553 wait_for_pods.go:122] WaitForControlledPodsRunning: namespace(test-hvn7yb-1), controlledBy(small-deployment-281): Pods: 7 out of 7 created, 7 running (7 updated), 0 pending scheduled, 0 not scheduled, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0519 08:38:48.606975 86553 wait_for_pods.go:122] WaitForControlledPodsRunning: namespace(test-hvn7yb-1), controlledBy(small-deployment-155): Pods: 3 out of 3 created, 3 running (3 updated), 0 pending scheduled, 0 not scheduled, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0519 08:38:48.677020 86553 wait_for_controlled_pods.go:365] WaitForControlledPodsRunning: running 1, deleted 0, timeout: 0, failed: 0
I0519 08:38:48.677054 86553 wait_for_controlled_pods.go:370] WaitForControlledPodsRunning: maxDuration=25.078797068s, operationTimeout=15m0s, ratio=0.03
I0519 08:38:48.677074 86553 wait_for_controlled_pods.go:384] WaitForControlledPodsRunning: 1/1 DaemonSets are running with all pods
I0519 08:38:48.678217 86553 wait_for_controlled_pods.go:365] WaitForControlledPodsRunning: running 3, deleted 0, timeout: 0, failed: 0
I0519 08:38:48.678250 86553 wait_for_controlled_pods.go:370] WaitForControlledPodsRunning: maxDuration=5.005581971s, operationTimeout=15m0s, ratio=0.01
I0519 08:38:48.678265 86553 wait_for_controlled_pods.go:384] WaitForControlledPodsRunning: 3/3 Jobs are running with all pods
I0519 08:38:48.828960 86553 wait_for_controlled_pods.go:365] WaitForControlledPodsRunning: running 323, deleted 0, timeout: 0, failed: 0
I0519 08:38:48.828998 86553 wait_for_controlled_pods.go:370] WaitForControlledPodsRunning: maxDuration=5.011053857s, operationTimeout=15m0s, ratio=0.01
I0519 08:38:48.829020 86553 wait_for_controlled_pods.go:384] WaitForControlledPodsRunning: 323/323 Deployments are running with all pods
I0519 08:38:53.866524 86553 wait_for_pods.go:122] WaitForControlledPodsRunning: namespace(test-hvn7yb-1), controlledBy(small-statefulset-0): Pods: 6 out of 6 created, 6 running (6 updated), 0 pending scheduled, 0 not scheduled, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0519 08:38:53.866588 86553 wait_for_controlled_pods.go:365] WaitForControlledPodsRunning: running 2, deleted 0, timeout: 0, failed: 0
I0519 08:38:53.866609 86553 wait_for_controlled_pods.go:370] WaitForControlledPodsRunning: maxDuration=20.003432008s, operationTimeout=15m0s, ratio=0.02
I0519 08:38:53.866631 86553 wait_for_controlled_pods.go:384] WaitForControlledPodsRunning: 2/2 StatefulSets are running with all pods
I0519 08:38:53.866659 86553 simple_test_executor.go:171] Step "[step: 22] Waiting for 'scale and update objects' to be completed" ended
I0519 08:38:53.866681 86553 simple_test_executor.go:149] Step "[step: 23] Starting measurement for 'delete objects'" started
I0519 08:38:53.899710 86553 wait_for_controlled_pods.go:254] WaitForControlledPodsRunning: wait for controlled pods measurement already running
I0519 08:38:53.899823 86553 wait_for_controlled_pods.go:254] WaitForControlledPodsRunning: wait for controlled pods measurement already running
... skipping 12 lines ...
I0519 08:38:57.579490 86553 wait_for_pods.go:64] WaitForControlledPodsRunning: namespace(test-hvn7yb-1), controlledBy(small-deployment-186): starting with timeout: 14m59.99997963s
I0519 08:38:58.296165 86553 wait_for_pods.go:64] WaitForControlledPodsRunning: namespace(test-hvn7yb-1), controlledBy(small-deployment-292): starting with timeout: 14m59.999970542s
I0519 08:38:58.341997 86553 wait_for_pods.go:64] WaitForControlledPodsRunning: namespace(test-hvn7yb-1), controlledBy(medium-deployment-0): starting with timeout: 14m59.99997179s
I0519 08:38:58.442760 86553 wait_for_pods.go:64] WaitForControlledPodsRunning: namespace(test-hvn7yb-1), controlledBy(small-deployment-293): starting with timeout: 14m59.999980848s
I0519 08:38:58.633727 86553 wait_for_pods.go:64] WaitForControlledPodsRunning: namespace(test-hvn7yb-1), controlledBy(small-deployment-75): starting with timeout: 14m59.999980531s
I0519 08:38:58.912495 86553 wait_for_pods.go:64] WaitForControlledPodsRunning: namespace(test-hvn7yb-1), controlledBy(small-deployment-62): starting with timeout: 14m59.999976953s
{"component":"entrypoint","file":"k8s.io/test-infra/prow/entrypoint/run.go:168","func":"k8s.io/test-infra/prow/entrypoint.Options.ExecuteProcess","level":"error","msg":"Entrypoint received interrupt: terminated","severity":"error","time":"2023-05-19T08:38:59Z"}
++ early_exit_handler
++ '[' -n 189 ']'
++ kill -TERM 189
++ cleanup_dind
++ [[ true == \t\r\u\e ]]
++ echo 'Cleaning up after docker'
... skipping 4 lines ...