PR       | mzaian: etcd: Update to version 3.5.7
Result   | ABORTED
Tests    | 0 failed / 71 succeeded
Started  |
Elapsed  | 48m4s
Revision | 90570b7595a712dd4a1bcfaf0b7bd93c2ee00fbb
Refs     | 115310
ClusterLoaderV2 huge-service overall (testing/huge-service/config.yaml)
ClusterLoaderV2 huge-service: [step: 01] starting measurements [00] - APIResponsivenessPrometheus
ClusterLoaderV2 huge-service: [step: 01] starting measurements [01] - APIResponsivenessPrometheusSimple
ClusterLoaderV2 huge-service: [step: 01] starting measurements [02] - TestMetrics
ClusterLoaderV2 huge-service: [step: 01] starting measurements [03] - InClusterNetworkLatency
ClusterLoaderV2 huge-service: [step: 02] Create huge-service
ClusterLoaderV2 huge-service: [step: 03] Creating huge-service measurements [00] - WaitForHugeServiceDeployments
ClusterLoaderV2 huge-service: [step: 04] Creating huge-service pods
ClusterLoaderV2 huge-service: [step: 05] Waiting for huge-service pods to be created [00] - WaitForHugeServiceDeployments
ClusterLoaderV2 huge-service: [step: 06] Updating huge-service pods
ClusterLoaderV2 huge-service: [step: 07] Waiting for huge-service pods to be updated [00] - WaitForHugeServiceDeployments
ClusterLoaderV2 huge-service: [step: 08] Deleting huge-service pods
ClusterLoaderV2 huge-service: [step: 09] Waiting for huge-service pods to be deleted [00] - WaitForHugeServiceDeployments
ClusterLoaderV2 huge-service: [step: 10] Delete huge-service
ClusterLoaderV2 huge-service: [step: 11] gathering measurements [00] - APIResponsivenessPrometheus
ClusterLoaderV2 huge-service: [step: 11] gathering measurements [01] - APIResponsivenessPrometheusSimple
ClusterLoaderV2 huge-service: [step: 11] gathering measurements [02] - TestMetrics
ClusterLoaderV2 huge-service: [step: 11] gathering measurements [03] - InClusterNetworkLatency
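The huge-service steps above are driven by the ClusterLoaderV2 test definition referenced in the header (testing/huge-service/config.yaml). As a rough sketch only, not the contents of that file, a config that produces a matching "starting measurements" / "gathering measurements" pair follows ClusterLoaderV2's step/measurement schema like this (all names and parameters here are illustrative):

```yaml
# Hypothetical sketch of the ClusterLoaderV2 config shape; this is NOT
# the real testing/huge-service/config.yaml.
name: huge-service
namespace:
  number: 1
steps:
- name: starting measurements
  measurements:
  # Each measurement is started once at the beginning of the test...
  - Identifier: APIResponsivenessPrometheus
    Method: APIResponsivenessPrometheus
    Params:
      action: start
- name: gathering measurements
  measurements:
  # ...and gathered at the end, producing the paired step-01/step-11
  # log lines seen above.
  - Identifier: APIResponsivenessPrometheus
    Method: APIResponsivenessPrometheus
    Params:
      action: gather
```

Each `Identifier` in such a config corresponds to one `[NN] - Name` entry in the step log above.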
ClusterLoaderV2 load overall (testing/load/config.yaml)
ClusterLoaderV2 load: [step: 01] starting measurements [00] - APIResponsivenessPrometheus
ClusterLoaderV2 load: [step: 01] starting measurements [01] - APIResponsivenessPrometheusSimple
ClusterLoaderV2 load: [step: 01] starting measurements [02] - CreatePhasePodStartupLatency
ClusterLoaderV2 load: [step: 01] starting measurements [03] - InClusterNetworkLatency
ClusterLoaderV2 load: [step: 01] starting measurements [04] - SLOMeasurement
ClusterLoaderV2 load: [step: 01] starting measurements [05] - NetworkProgrammingLatency
ClusterLoaderV2 load: [step: 01] starting measurements [06] - Kube-proxy partial iptables restore failures
ClusterLoaderV2 load: [step: 01] starting measurements [07] - APIAvailability
ClusterLoaderV2 load: [step: 01] starting measurements [08] - Quotas total usage
ClusterLoaderV2 load: [step: 01] starting measurements [09] - TestMetrics
ClusterLoaderV2 load: [step: 02] Creating k8s services
ClusterLoaderV2 load: [step: 03] Creating PriorityClass for DaemonSets
ClusterLoaderV2 load: [step: 04] create objects configmaps and secrets
ClusterLoaderV2 load: [step: 05] Starting measurement for 'create objects' [00] -
ClusterLoaderV2 load: [step: 06] create objects
ClusterLoaderV2 load: [step: 07] Waiting for 'create objects' to be completed [00] -
ClusterLoaderV2 load: [step: 08] Creating scheduler throughput measurements [00] - HighThroughputPodStartupLatency
ClusterLoaderV2 load: [step: 08] Creating scheduler throughput measurements [01] - WaitForSchedulerThroughputDeployments
ClusterLoaderV2 load: [step: 08] Creating scheduler throughput measurements [02] - SchedulingThroughput
ClusterLoaderV2 load: [step: 09] create scheduler throughput pods
ClusterLoaderV2 load: [step: 10] Waiting for scheduler throughput pods to be created [00] - WaitForSchedulerThroughputDeployments
ClusterLoaderV2 load: [step: 11] Collecting scheduler throughput measurements [00] - HighThroughputPodStartupLatency
ClusterLoaderV2 load: [step: 11] Collecting scheduler throughput measurements [01] - SchedulingThroughput
ClusterLoaderV2 load: [step: 12] delete scheduler throughput pods
ClusterLoaderV2 load: [step: 13] Waiting for scheduler throughput pods to be deleted [00] - WaitForSchedulerThroughputDeployments
ClusterLoaderV2 load: [step: 14] Starting latency pod measurements [00] - PodStartupLatency
ClusterLoaderV2 load: [step: 14] Starting latency pod measurements [01] - WaitForRunningLatencyDeployments
ClusterLoaderV2 load: [step: 15] Creating latency pods
ClusterLoaderV2 load: [step: 16] Waiting for latency pods to be running [00] - WaitForRunningLatencyDeployments
ClusterLoaderV2 load: [step: 17] Deleting latency pods
ClusterLoaderV2 load: [step: 18] Waiting for latency pods to be deleted [00] - WaitForRunningLatencyDeployments
ClusterLoaderV2 load: [step: 19] Collecting pod startup latency [00] - PodStartupLatency
ClusterLoaderV2 load: [step: 20] Starting measurement for 'scale and update objects' [00] -
ClusterLoaderV2 load: [step: 21] scale and update objects
ClusterLoaderV2 load: [step: 22] Waiting for 'scale and update objects' to be completed [00] -
ClusterLoaderV2 load: [step: 23] Starting measurement for 'delete objects' [00] -
ClusterLoaderV2 load: [step: 24] delete objects
ClusterLoaderV2 load: [step: 25] Waiting for 'delete objects' to be completed [00] -
ClusterLoaderV2 load: [step: 25] Waiting for 'delete objects' to be completed [01] - WaitForPVCsToBeDeleted
ClusterLoaderV2 load: [step: 26] delete objects configmaps and secrets
ClusterLoaderV2 load: [step: 27] Deleting PriorityClass for DaemonSets
ClusterLoaderV2 load: [step: 28] Deleting k8s services
ClusterLoaderV2 load: [step: 29] gathering measurements [00] - APIResponsivenessPrometheus
ClusterLoaderV2 load: [step: 29] gathering measurements [01] - APIResponsivenessPrometheusSimple
ClusterLoaderV2 load: [step: 29] gathering measurements [02] - CreatePhasePodStartupLatency
ClusterLoaderV2 load: [step: 29] gathering measurements [03] - InClusterNetworkLatency
ClusterLoaderV2 load: [step: 29] gathering measurements [04] - SLOMeasurement
ClusterLoaderV2 load: [step: 29] gathering measurements [05] - NetworkProgrammingLatency
ClusterLoaderV2 load: [step: 29] gathering measurements [06] - Kube-proxy partial iptables restore failures
ClusterLoaderV2 load: [step: 29] gathering measurements [07] - APIAvailability
ClusterLoaderV2 load: [step: 29] gathering measurements [08] - Quotas total usage
ClusterLoaderV2 load: [step: 29] gathering measurements [09] - TestMetrics
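The load test interleaves its measurement steps with object-creation phases (e.g. steps 02-09 above, which create services, priority classes, configmaps/secrets, and pods). Again as a hedged sketch of the schema only, a phase step in a ClusterLoaderV2 config has roughly this shape (the template path, counts, and tuning-set name below are invented, not taken from testing/load/config.yaml):

```yaml
# Hypothetical sketch; values are illustrative placeholders.
- name: create objects
  phases:
  - namespaceRange:
      min: 1
      max: 1
    replicasPerNamespace: 10
    # A tuning set throttles object creation, e.g. to a fixed QPS.
    tuningSet: Uniform5qps
    objectBundle:
    - basename: small-deployment
      objectTemplatePath: deployment.yaml
```

A phase like this is what a log line such as "[step: 06] create objects" reports, while the neighboring measurement steps wait for the resulting objects to converge.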
... skipping 698 lines ...
Looking for address 'e2e-115310-95a39-master-ip'
Looking for address 'e2e-115310-95a39-master-internal-ip'
Using master: e2e-115310-95a39-master (external IP: 34.138.219.133; internal IP: 10.40.0.2)
Waiting up to 300 seconds for cluster initialization.

  This will continually check to see if the API for kubernetes is reachable.
  This may time out if there was some uncaught error during start up.

..........................................Kubernetes cluster created.
Cluster "k8s-infra-e2e-boskos-scale-10_e2e-115310-95a39" set.
User "k8s-infra-e2e-boskos-scale-10_e2e-115310-95a39" set.
Context "k8s-infra-e2e-boskos-scale-10_e2e-115310-95a39" created.
Switched to context "k8s-infra-e2e-boskos-scale-10_e2e-115310-95a39".
... skipping 231 lines ...
e2e-115310-95a39-minion-group-zx5f   Ready    <none>   38s   v1.27.0-alpha.1.71+9b161f03f23480
e2e-115310-95a39-minion-group-zzkf   Ready    <none>   35s   v1.27.0-alpha.1.71+9b161f03f23480
e2e-115310-95a39-minion-heapster     Ready    <none>   38s   v1.27.0-alpha.1.71+9b161f03f23480
Warning: v1 ComponentStatus is deprecated in v1.19+
Validate output:
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME                 STATUS    MESSAGE                         ERROR
etcd-1               Healthy   {"health":"true","reason":""}
etcd-0               Healthy   {"health":"true","reason":""}
controller-manager   Healthy   ok
scheduler            Healthy   ok
Cluster validation encountered some problems, but cluster should be in working order
...ignoring non-fatal errors in validate-cluster
Done, listing cluster services:

Kubernetes control plane is running at https://34.138.219.133
GLBCDefaultBackend is running at https://34.138.219.133/api/v1/namespaces/kube-system/services/default-http-backend:http/proxy
CoreDNS is running at https://34.138.219.133/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
Metrics-server is running at https://34.138.219.133/api/v1/namespaces/kube-system/services/https:metrics-server:/proxy
... skipping 514 lines ...