Result: FAILURE
Tests: 9 failed / 69 succeeded
Started: 2021-08-08 10:03
Elapsed: 8h57m
Revision: master
job-version: v1.23.0-alpha.0.321+e2e3c2d01c1f8f
kubetest-version:
revision: v1.23.0-alpha.0.321+e2e3c2d01c1f8f

Test Failures


ClusterLoaderV2 pod-affinity overall (testing/density/config.yaml) 1h54m

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=ClusterLoaderV2\spod\-affinity\soverall\s\(testing\/density\/config\.yaml\)$'
[measurement call APIResponsivenessPrometheus - APIResponsivenessPrometheusSimple error: top latency metric: there should be no high-latency requests, but: [got: &{Resource:nodes Subresource:status Verb:PATCH Scope:resource Latency:perc50: 42.957522ms, perc90: 634.991945ms, perc99: 1.417650837s Count:229611 SlowCount:9204}; expected perc99 <= 1s]]
				from junit.xml
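
Each of the measurement failures in this run shares the same signature: the API responsiveness SLO check fails because the 99th-percentile latency of PATCH nodes/status requests exceeds the 1s threshold (SlowCount appears to be the number of requests over the threshold, out of Count total). As a hedged sketch, not the measurement's exact query, a percentile like the one above can be checked by hand against the run's snapshotted Prometheus disk; the host below is a placeholder, and the metric and label names assume a recent kube-apiserver:

curl -sG 'http://PROMETHEUS_HOST:9090/api/v1/query' --data-urlencode \
  'query=histogram_quantile(0.99, sum(rate(apiserver_request_duration_seconds_bucket{verb="PATCH",resource="nodes",subresource="status",scope="resource"}[5m])) by (le))'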



ClusterLoaderV2 pod-affinity: [step: 14] Collecting measurements 9.35s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=ClusterLoaderV2\spod\-affinity\:\s\[step\:\s14\]\sCollecting\smeasurements$'
[measurement call APIResponsivenessPrometheus - APIResponsivenessPrometheusSimple error: top latency metric: there should be no high-latency requests, but: [got: &{Resource:nodes Subresource:status Verb:PATCH Scope:resource Latency:perc50: 42.957522ms, perc90: 634.991945ms, perc99: 1.417650837s Count:229611 SlowCount:9204}; expected perc99 <= 1s]]
				from junit.xml



ClusterLoaderV2 pod-anti-affinity overall (testing/density/config.yaml) 1h58m

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=ClusterLoaderV2\spod\-anti\-affinity\soverall\s\(testing\/density\/config\.yaml\)$'
[measurement call APIResponsivenessPrometheus - APIResponsivenessPrometheusSimple error: top latency metric: there should be no high-latency requests, but: [got: &{Resource:nodes Subresource:status Verb:PATCH Scope:resource Latency:perc50: 39.240271ms, perc90: 631.89204ms, perc99: 1.476465284s Count:237388 SlowCount:11672}; expected perc99 <= 1s]]
				from junit.xml



ClusterLoaderV2 pod-anti-affinity: [step: 14] Collecting measurements 9.19s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=ClusterLoaderV2\spod\-anti\-affinity\:\s\[step\:\s14\]\sCollecting\smeasurements$'
[measurement call APIResponsivenessPrometheus - APIResponsivenessPrometheusSimple error: top latency metric: there should be no high-latency requests, but: [got: &{Resource:nodes Subresource:status Verb:PATCH Scope:resource Latency:perc50: 39.240271ms, perc90: 631.89204ms, perc99: 1.476465284s Count:237388 SlowCount:11672}; expected perc99 <= 1s]]
				from junit.xml



ClusterLoaderV2 pod-topology-spread overall (testing/density/config.yaml) 1h49m

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=ClusterLoaderV2\spod\-topology\-spread\soverall\s\(testing\/density\/config\.yaml\)$'
[measurement call APIResponsivenessPrometheus - APIResponsivenessPrometheusSimple error: top latency metric: there should be no high-latency requests, but: [got: &{Resource:nodes Subresource:status Verb:PATCH Scope:resource Latency:perc50: 34.996819ms, perc90: 371.948624ms, perc99: 1.343564999s Count:213624 SlowCount:5401}; expected perc99 <= 1s]]
				from junit.xml



ClusterLoaderV2 pod-topology-spread: [step: 14] Collecting measurements 9.06s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=ClusterLoaderV2\spod\-topology\-spread\:\s\[step\:\s14\]\sCollecting\smeasurements$'
[measurement call APIResponsivenessPrometheus - APIResponsivenessPrometheusSimple error: top latency metric: there should be no high-latency requests, but: [got: &{Resource:nodes Subresource:status Verb:PATCH Scope:resource Latency:perc50: 34.996819ms, perc90: 371.948624ms, perc99: 1.343564999s Count:213624 SlowCount:5401}; expected perc99 <= 1s]]
				from junit.xml



ClusterLoaderV2 vanilla overall (testing/density/config.yaml) 1h49m

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=ClusterLoaderV2\svanilla\soverall\s\(testing\/density\/config\.yaml\)$'
[measurement call APIResponsivenessPrometheus - APIResponsivenessPrometheusSimple error: top latency metric: there should be no high-latency requests, but: [got: &{Resource:nodes Subresource:status Verb:PATCH Scope:resource Latency:perc50: 36.98335ms, perc90: 522.934362ms, perc99: 1.475518487s Count:218209 SlowCount:9005}; expected perc99 <= 1s]]
				from junit.xml



ClusterLoaderV2 vanilla: [step: 14] Collecting measurements 8.44s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=ClusterLoaderV2\svanilla\:\s\[step\:\s14\]\sCollecting\smeasurements$'
[measurement call APIResponsivenessPrometheus - APIResponsivenessPrometheusSimple error: top latency metric: there should be no high-latency requests, but: [got: &{Resource:nodes Subresource:status Verb:PATCH Scope:resource Latency:perc50: 36.98335ms, perc90: 522.934362ms, perc99: 1.475518487s Count:218209 SlowCount:9005}; expected perc99 <= 1s]]
				from junit.xml



kubetest ClusterLoaderV2 7h36m

error during /home/prow/go/src/k8s.io/perf-tests/run-e2e.sh cluster-loader2 --experimental-gcp-snapshot-prometheus-disk=true --experimental-prometheus-disk-snapshot-name=ci-kubernetes-kubemark-gce-scale-scheduler-canary-1424309736387383296 --nodes=5000 --provider=kubemark --report-dir=/logs/artifacts --testsuite=testing/density/scheduler-suite.yaml --testoverrides=./testing/experiments/enable_restart_count_check.yaml --testoverrides=./testing/experiments/ignore_known_kubemark_container_restarts.yaml --testoverrides=./testing/overrides/kubemark_5000_nodes.yaml: exit status 1
				from junit_runner.xml


