Result: FAILURE
Tests: 7 failed / 151 succeeded
Started: 2019-07-12 04:14
Elapsed: 10h32m
Revision: v1.13.9-beta.0.1+e1be42a61576dc
Builder: gke-prow-ssd-pool-1a225945-08s9
pod: 5f45430e-a45b-11e9-8217-96c43017ab5b
resultstore: https://source.cloud.google.com/results/invocations/34990d50-64a6-4ad1-9355-f7e835f788a6/targets/test
infra-commit: 04c2406cc
job-version: v1.13.9-beta.0.1+e1be42a61576dc
master_os_image: cos-stable-65-10323-64-0
node_os_image: cos-u-73-11647-231-0

Test Failures


Test (10h12m)

error during ./hack/ginkgo-e2e.sh --ginkgo.focus=\[Serial\]|\[Disruptive\] --ginkgo.skip=\[Flaky\]|\[Feature:.+\] --minStartupPods=8 --report-dir=/workspace/_artifacts --disable-log-dump=true: exit status 1
from junit_runner.xml
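The --ginkgo.focus and --ginkgo.skip arguments above are Go regular expressions matched against full test names: a test runs if its name matches the focus pattern and not the skip pattern. As a hedged illustration (not Ginkgo's own implementation; the third name below is hypothetical):

package main

import (
	"fmt"
	"regexp"
)

func main() {
	// The focus/skip values from the command above, unescaped from the shell.
	focus := regexp.MustCompile(`\[Serial\]|\[Disruptive\]`)
	skip := regexp.MustCompile(`\[Flaky\]|\[Feature:.+\]`)

	names := []string{
		"[sig-apps] DaemonRestart [Disruptive] Kubelet should not restart containers across restart",
		"[sig-node] Kubelet [Serial] [Slow] regular resource usage tracking",
		"[sig-network] Services [Feature:IPv6] should work", // hypothetical name, skipped
	}
	for _, name := range names {
		selected := focus.MatchString(name) && !skip.MatchString(name)
		fmt.Printf("selected=%-5v %s\n", selected, name)
	}
}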



[k8s.io] [sig-node] Kubelet [Serial] [Slow] [k8s.io] [sig-node] regular resource usage tracking resource tracking for 0 pods per node (20m12s)

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=\[sig\-node\]\sKubelet\s\[Serial\]\s\[Slow\]\s\[k8s\.io\]\s\[sig\-node\]\sregular\sresource\susage\stracking\sresource\stracking\sfor\s0\spods\sper\snode$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/kubelet_perf.go:263
Jul 12 05:13:25.284: Memory usage exceeding limits:
 node test-f59a9b46eb-minion-group-87n1:
 container "runtime": expected RSS memory (MB) < 131072000; got 142299136
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/kubelet_perf.go:155
				
from junit_01.xml
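Despite the "(MB)" label, the two figures in the failure message read as raw byte counts: the limit is exactly 125 MiB. A quick conversion (a standalone sketch, not taken from kubelet_perf.go) shows the "runtime" container exceeded its RSS budget by roughly 8.6%:

package main

import "fmt"

func main() {
	// Values copied from the failure message above; despite the "(MB)"
	// label they appear to be raw byte counts.
	const limitBytes = 131072000  // exactly 125 MiB
	const actualBytes = 142299136 // observed RSS of the "runtime" container

	toMiB := func(b int) float64 { return float64(b) / (1 << 20) }
	fmt.Printf("limit:  %.1f MiB\n", toMiB(limitBytes))  // 125.0
	fmt.Printf("actual: %.1f MiB\n", toMiB(actualBytes)) // ~135.7
	fmt.Printf("over budget by %.1f%%\n",
		100*float64(actualBytes-limitBytes)/float64(limitBytes)) // ~8.6%
}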



[sig-apps] DaemonRestart [Disruptive] Kubelet should not restart containers across restart (11m13s)

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=\[sig\-apps\]\sDaemonRestart\s\[Disruptive\]\sKubelet\sshould\snot\srestart\scontainers\sacross\srestart$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 12 05:27:15.045: Couldn't delete ns: "e2e-tests-daemonrestart-ssk56": namespace e2e-tests-daemonrestart-ssk56 was not deleted with limit: timed out waiting for the condition, pods remaining: 1 (&errors.errorString{s:"namespace e2e-tests-daemonrestart-ssk56 was not deleted with limit: timed out waiting for the condition, pods remaining: 1"})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:345
				
from junit_01.xml
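The timeout above comes from polling for a namespace whose one remaining pod never terminated. A minimal sketch of such a wait loop, assuming v1.13-era client-go method signatures (no context argument); waitForNamespaceDeleted is a hypothetical helper, not the e2e framework's code:

package nswait

import (
	"fmt"
	"time"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForNamespaceDeleted polls until the namespace returns NotFound or
// the timeout expires, logging how many pods are still holding it open.
func waitForNamespaceDeleted(c kubernetes.Interface, ns string, timeout time.Duration) error {
	return wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
		if _, err := c.CoreV1().Namespaces().Get(ns, metav1.GetOptions{}); apierrors.IsNotFound(err) {
			return true, nil // namespace fully deleted
		}
		if pods, err := c.CoreV1().Pods(ns).List(metav1.ListOptions{}); err == nil {
			fmt.Printf("namespace %s still terminating, pods remaining: %d\n", ns, len(pods.Items))
		}
		return false, nil // keep waiting; PollImmediate returns a timeout error at the deadline
	})
}

A deadline hit here surfaces as the "timed out waiting for the condition" string seen in the failure, and the same lingering namespace can then stall later [Serial] tests that wait for terminating namespaces to clear.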



[sig-scheduling] SchedulerPreemption [Serial] validates basic preemption works (1h1m)

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=\[sig\-scheduling\]\sSchedulerPreemption\s\[Serial\]\svalidates\sbasic\spreemption\sworks$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:50
Expected error:
    <*errors.errorString | 0xc001538000>: {
        s: "Waiting for terminating namespaces to be deleted timed out",
    }
    Waiting for terminating namespaces to be deleted timed out
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:65
				
from junit_01.xml



[sig-scheduling] SchedulerPreemption [Serial] validates lower priority pod preemption by critical pod (1h1m)

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=\[sig\-scheduling\]\sSchedulerPreemption\s\[Serial\]\svalidates\slower\spriority\spod\spreemption\sby\scritical\spod$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:50
Expected error:
    <*errors.errorString | 0xc001cb0410>: {
        s: "Waiting for terminating namespaces to be deleted timed out",
    }
    Waiting for terminating namespaces to be deleted timed out
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:65
				
from junit_01.xml



[sig-scheduling] SchedulerPriorities [Serial] Pod should avoid nodes that have avoidPod annotation (1h1m)

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=\[sig\-scheduling\]\sSchedulerPriorities\s\[Serial\]\sPod\sshould\savoid\snodes\sthat\shave\savoidPod\sannotation$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:70
Expected error:
    <*errors.errorString | 0xc00220c360>: {
        s: "Waiting for terminating namespaces to be deleted timed out",
    }
    Waiting for terminating namespaces to be deleted timed out
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:79
				
from junit_01.xml



[sig-scheduling] SchedulerPriorities [Serial] Pod should be preferably scheduled to nodes pod can tolerate (1h1m)

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=\[sig\-scheduling\]\sSchedulerPriorities\s\[Serial\]\sPod\sshould\sbe\spreferably\sscheduled\sto\snodes\spod\scan\stolerate$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:70
Expected error:
    <*errors.errorString | 0xc001d520a0>: {
        s: "Waiting for terminating namespaces to be deleted timed out",
    }
    Waiting for terminating namespaces to be deleted timed out
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:79
				
from junit_01.xml



151 passed tests and 2022 skipped tests are not listed here.