Result: FAILURE
Tests: 4 failed / 50 succeeded
Started: 2021-08-29 17:03
Elapsed: 5h33m
Revision: master
job-version: v1.23.0-alpha.1.151+edb0a72cff0e43
kubetest-version:
masterInstanceIDs: 5214737436164620338
revision: v1.23.0-alpha.1.151+edb0a72cff0e43

Test Failures


ClusterLoaderV2 load overall (testing/load/config.yaml) 3h56m

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=ClusterLoaderV2\sload\soverall\s\(testing\/load\/config\.yaml\)$'
[measurement call WaitForControlledPodsRunning -  error: [measurement call WaitForControlledPodsRunning - WaitForRunningJobs error: 12 objects timed out: Jobs: test-33rkcv-33/big-job-0, test-33rkcv-29/big-job-0, test-33rkcv-24/big-job-0, test-33rkcv-20/big-job-0, test-33rkcv-1/big-job-0, test-33rkcv-23/big-job-0, test-33rkcv-39/big-job-0, test-33rkcv-43/big-job-0, test-33rkcv-2/big-job-0, test-33rkcv-48/big-job-0, test-33rkcv-21/big-job-0, test-33rkcv-16/big-job-0]
measurement call APIResponsivenessPrometheus - APIResponsivenessPrometheusSimple error: top latency metric: there should be no high-latency requests, but: [got: &{Resource:csinodes Subresource: Verb:GET Scope:resource Latency:perc50: 31.560283ms, perc90: 160ms, perc99: 1.13875s Count:178 SlowCount:4}; expected perc99 <= 1s got: &{Resource:nodes Subresource: Verb:POST Scope:resource Latency:perc50: 31.785714ms, perc90: 318.333333ms, perc99: 1.13875s Count:89 SlowCount:2}; expected perc99 <= 1s]]
				from junit.xml
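
The 12 Jobs named in the error (for example test-33rkcv-33/big-job-0) never reached the running state before the WaitForControlledPodsRunning timeout. A minimal way to inspect one of them is sketched below; it assumes kubectl access to the test cluster and only works while the test namespaces still exist, since ClusterLoaderV2 normally deletes them at the end of the run:

kubectl -n test-33rkcv-33 describe job big-job-0
kubectl -n test-33rkcv-33 get pods -l job-name=big-job-0 -o wide
kubectl -n test-33rkcv-33 get events --sort-by=.lastTimestamp | tail -n 20

The describe output and the most recent events usually show whether the pods were stuck Pending (scheduling or quota pressure) or failed to start at all.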

ClusterLoaderV2 load: [step: 24] Waiting for 'scale and update objects' to be completed 29m40s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=ClusterLoaderV2\sload\:\s\[step\:\s24\]\sWaiting\sfor\s'\''scale\sand\supdate\sobjects'\''\sto\sbe\scompleted$'
[measurement call WaitForControlledPodsRunning -  error: [measurement call WaitForControlledPodsRunning - WaitForRunningJobs error: 12 objects timed out: Jobs: test-33rkcv-33/big-job-0, test-33rkcv-29/big-job-0, test-33rkcv-24/big-job-0, test-33rkcv-20/big-job-0, test-33rkcv-1/big-job-0, test-33rkcv-23/big-job-0, test-33rkcv-39/big-job-0, test-33rkcv-43/big-job-0, test-33rkcv-2/big-job-0, test-33rkcv-48/big-job-0, test-33rkcv-21/big-job-0, test-33rkcv-16/big-job-0]]
				from junit.xml

ClusterLoaderV2 load: [step: 31] gathering measurements 10s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=ClusterLoaderV2\sload\:\s\[step\:\s31\]\sgathering\smeasurements$'
[measurement call APIResponsivenessPrometheus - APIResponsivenessPrometheusSimple error: top latency metric: there should be no high-latency requests, but: [got: &{Resource:csinodes Subresource: Verb:GET Scope:resource Latency:perc50: 31.560283ms, perc90: 160ms, perc99: 1.13875s Count:178 SlowCount:4}; expected perc99 <= 1s got: &{Resource:nodes Subresource: Verb:POST Scope:resource Latency:perc50: 31.785714ms, perc90: 318.333333ms, perc99: 1.13875s Count:89 SlowCount:2}; expected perc99 <= 1s]]
				from junit.xml
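
The failing check here is the API responsiveness SLO: the 99th-percentile latency for GET csinodes and POST nodes was 1.13875s against the expected perc99 <= 1s. A rough manual spot-check against the Prometheus instance the test deploys might look like the following; this is only a sketch, not the exact query the APIResponsivenessPrometheus measurement runs, and it assumes the kube-prometheus stack is still up in the monitoring namespace and port-forwarded to localhost:9090:

kubectl -n monitoring port-forward svc/prometheus-k8s 9090 &
curl -s http://localhost:9090/api/v1/query --data-urlencode \
  'query=histogram_quantile(0.99, sum(rate(apiserver_request_duration_seconds_bucket{resource="csinodes", verb="GET"}[5m])) by (le))'

Because the run also snapshots the Prometheus disk (see the --experimental-gcp-snapshot-prometheus-disk flag below), the same query could in principle be run against a Prometheus restored from that snapshot after the cluster is gone.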

kubetest ClusterLoaderV2 4h9m

error during /home/prow/go/src/k8s.io/perf-tests/run-e2e.sh cluster-loader2 --experimental-gcp-snapshot-prometheus-disk=true --experimental-prometheus-disk-snapshot-name=ci-kubernetes-e2e-gce-scale-performance-canary-1432025635173175296 --nodes=5000 --prometheus-scrape-node-exporter --provider=gce --report-dir=/logs/artifacts --testconfig=testing/load/config.yaml --testconfig=testing/access-tokens/config.yaml --testoverrides=./testing/experiments/enable_restart_count_check.yaml --testoverrides=./testing/experiments/ignore_known_gce_container_restarts.yaml --testoverrides=./testing/overrides/5000_nodes.yaml: exit status 1
				from junit_runner.xml
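
The line above is the full 5000-node invocation driven by kubetest. For iterating on the load config without a multi-hour turnaround, a scaled-down rerun of the same entry point can help; the sketch below uses a made-up node count and report directory and assumes a checkout of k8s.io/perf-tests plus an existing GCE cluster with working credentials:

./run-e2e.sh cluster-loader2 \
  --nodes=100 \
  --provider=gce \
  --report-dir=/tmp/artifacts \
  --testconfig=testing/load/config.yaml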
