Result            | FAILURE
Tests             | 4 failed / 47 succeeded
Started           |
Elapsed           | 4h32m
Revision          | master
job-version       | v1.22.0-alpha.0.30+b0abe89ae259d5
kubetest-version  |
masterInstanceIDs | 8905061649992460880
revision          | v1.22.0-alpha.0.30+b0abe89ae259d5
error during /home/prow/go/src/k8s.io/perf-tests/run-e2e.sh cluster-loader2 --experimental-gcp-snapshot-prometheus-disk=true --experimental-prometheus-disk-snapshot-name=ci-kubernetes-e2e-gce-scale-performance-1379479304919846912 --nodes=5000 --prometheus-scrape-node-exporter --provider=gce --report-dir=/logs/artifacts --testconfig=testing/load/config.yaml --testconfig=testing/access-tokens/config.yaml --testoverrides=./testing/experiments/enable_restart_count_check.yaml --testoverrides=./testing/experiments/ignore_known_gce_container_restarts.yaml --testoverrides=./testing/overrides/5000_nodes.yaml: exit status 1
from junit_runner.xml
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=ClusterLoaderV2\sload\soverall\s\(testing\/load\/config\.yaml\)$'
[measurement call WaitForControlledPodsRunning - WaitForRunningLatencyDeployments error: unknown objects statuses: [test-cifaf9-5/latency-deployment-43: pod store creation error: couldn't initialize *v1.PodStore: namespace(test-cifaf9-5), labelSelector(name=latency-deployment-43): timed out waiting for the condition]
measurement call APIResponsivenessPrometheus - APIResponsivenessPrometheusSimple error: top latency metric: there should be no high-latency requests, but: [got: &{Resource:pods Subresource:exec Verb:POST Scope:resource Latency:perc50: 220.108695ms, perc90: 567.340425ms, perc99: 1.237236842s Count:2098 SlowCount:39}; expected perc99 <= 1s]]
from junit.xml
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=ClusterLoaderV2\sload\:\s\[step\:\s20\]\sWaiting\sfor\slatency\spods\sto\sbe\sdeleted$'
[measurement call WaitForControlledPodsRunning - WaitForRunningLatencyDeployments error: unknown objects statuses: [test-cifaf9-5/latency-deployment-43: pod store creation error: couldn't initialize *v1.PodStore: namespace(test-cifaf9-5), labelSelector(name=latency-deployment-43): timed out waiting for the condition]]
from junit.xml
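The WaitForControlledPodsRunning failure above reports a PodStore initialization timeout for a single namespace and label selector. A minimal follow-up sketch, assuming kubectl access to the test cluster while it is still up; the namespace and selector are copied from the error text, and these commands are not part of the test job itself:
# Check whether the deployment's pods exist and what state they are in
kubectl get pods -n test-cifaf9-5 -l name=latency-deployment-43 -o wide
# Recent events in the namespace often show why pod creation or listing stalled
kubectl get events -n test-cifaf9-5 --sort-by=.lastTimestamp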
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=ClusterLoaderV2\sload\:\s\[step\:\s28\]\sCollecting\smeasurements$'
[measurement call APIResponsivenessPrometheus - APIResponsivenessPrometheusSimple error: top latency metric: there should be no high-latency requests, but: [got: &{Resource:pods Subresource:exec Verb:POST Scope:resource Latency:perc50: 220.108695ms, perc90: 567.340425ms, perc99: 1.237236842s Count:2098 SlowCount:39}; expected perc99 <= 1s]]
from junit.xml
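The APIResponsivenessPrometheus failure above flags POST pods/exec at a perc99 of about 1.24s against the 1s threshold. A minimal sketch for re-checking that percentile against the snapshotted Prometheus data, assuming the snapshot has been restored to a Prometheus instance reachable at localhost:9090; the endpoint and the 5m rate window are assumptions, not part of the job:
# 99th percentile of API server request duration for the flagged verb/resource/subresource
curl -sG 'http://localhost:9090/api/v1/query' \
  --data-urlencode 'query=histogram_quantile(0.99, sum(rate(apiserver_request_duration_seconds_bucket{verb="POST",resource="pods",subresource="exec"}[5m])) by (le))'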
Check APIReachability
ClusterLoaderV2 access-tokens overall (testing/access-tokens/config.yaml)
ClusterLoaderV2 access-tokens: [step: 01] Starting measurements
ClusterLoaderV2 access-tokens: [step: 02] Creating ServiceAccounts
ClusterLoaderV2 access-tokens: [step: 03] Creating Tokens
ClusterLoaderV2 access-tokens: [step: 04] Starting measurement for waiting for pods
ClusterLoaderV2 access-tokens: [step: 05] Creating pods
ClusterLoaderV2 access-tokens: [step: 06] Waiting for pods to be running
ClusterLoaderV2 access-tokens: [step: 07] Wait 5min
ClusterLoaderV2 access-tokens: [step: 08] Deleting pods
ClusterLoaderV2 access-tokens: [step: 09] Waiting for pods to be deleted
ClusterLoaderV2 access-tokens: [step: 10] Collecting measurements
ClusterLoaderV2 load: [step: 01] Starting measurements
ClusterLoaderV2 load: [step: 02] Creating k8s services
ClusterLoaderV2 load: [step: 03] Creating PriorityClass for DaemonSets
ClusterLoaderV2 load: [step: 04] Starting measurement for waiting for pods
ClusterLoaderV2 load: [step: 05] Creating objects
ClusterLoaderV2 load: [step: 06] Waiting for pods to be running
ClusterLoaderV2 load: [step: 07] Creating scheduler throughput measurements
ClusterLoaderV2 load: [step: 08] Creating huge services
ClusterLoaderV2 load: [step: 09] Creating scheduler throughput pods
ClusterLoaderV2 load: [step: 10] Waiting for scheduler throughput pods to be created
ClusterLoaderV2 load: [step: 11] Collecting scheduler throughput measurements
ClusterLoaderV2 load: [step: 12] Deleting scheduler throughput pods
ClusterLoaderV2 load: [step: 13] Waiting for scheduler throughput pods to be deleted
ClusterLoaderV2 load: [step: 14] Deleting huge services
ClusterLoaderV2 load: [step: 15] Sleeping after deleting huge services
ClusterLoaderV2 load: [step: 16] Starting latency pod measurements
ClusterLoaderV2 load: [step: 17] Creating latency pods
ClusterLoaderV2 load: [step: 18] Waiting for latency pods to be running
ClusterLoaderV2 load: [step: 19] Deleting latency pods
ClusterLoaderV2 load: [step: 21] Collecting pod startup latency
ClusterLoaderV2 load: [step: 22] Scaling and updating objects
ClusterLoaderV2 load: [step: 23] Waiting for objects to become scaled
ClusterLoaderV2 load: [step: 24] Deleting objects
ClusterLoaderV2 load: [step: 25] Waiting for pods to be deleted
ClusterLoaderV2 load: [step: 26] Deleting PriorityClass for DaemonSets
ClusterLoaderV2 load: [step: 27] Deleting k8s services
Deferred TearDown
DumpClusterLogs
Extract
TearDown
TearDown Previous
Timeout
Up
list nodes
test setup