Result: FAILURE
Tests: 7 failed / 105 succeeded
Started: 2019-07-19 23:53
Elapsed: 9h7m
Builder: gke-prow-ssd-pool-1a225945-l2xq
Pod: 5a20a066-aa80-11e9-b82b-365474bd0c86
ResultStore: https://source.cloud.google.com/results/invocations/dcd36982-e096-4490-8c3b-de3db401a043/targets/test
infra-commit: a7f2c5488
job-version: v1.12.11-beta.0.1+5f799a487b70ae
node_os_image: cos-69-10895-299-0
revision: v1.12.11-beta.0.1+5f799a487b70ae

Test Failures


Test 8h50m

error during ./hack/ginkgo-e2e.sh --ginkgo.focus=\[Serial\]|\[Disruptive\] --ginkgo.skip=\[Flaky\]|\[Feature:.+\] --minStartupPods=8 --num-nodes=3 --report-dir=/workspace/_artifacts --disable-log-dump=true: exit status 1
				from junit_runner.xml
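
This runner failure simply propagates the exit status of the ginkgo invocation above; the --ginkgo.focus and --ginkgo.skip arguments are plain regular expressions matched against full spec names. A minimal standalone illustration of that selection logic (not the actual ginkgo internals; the spec names below are examples):

package main

import (
	"fmt"
	"regexp"
)

func main() {
	// Same patterns as the failing invocation: run [Serial] or
	// [Disruptive] specs, but skip [Flaky] and [Feature:*] specs.
	focus := regexp.MustCompile(`\[Serial\]|\[Disruptive\]`)
	skip := regexp.MustCompile(`\[Flaky\]|\[Feature:.+\]`)
	specs := []string{
		"[sig-apps] DaemonRestart [Disruptive] Kubelet should not restart containers across restart",
		"[sig-network] some test [Serial] [Flaky]",
		"[sig-node] ordinary parallel test",
	}
	for _, s := range specs {
		selected := focus.MatchString(s) && !skip.MatchString(s)
		fmt.Printf("selected=%v  %s\n", selected, s)
	}
}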



[k8s.io] [sig-node] Kubelet [Serial] [Slow] [k8s.io] [sig-node] regular resource usage tracking resource tracking for 100 pods per node 24m59s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=\[sig\-node\]\sKubelet\s\[Serial\]\s\[Slow\]\s\[k8s\.io\]\s\[sig\-node\]\sregular\sresource\susage\stracking\sresource\stracking\sfor\s100\spods\sper\snode$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/kubelet_perf.go:263
Jul 20 06:12:23.335: Memory usage exceeding limits:
 node gke-test-a84bf75ce1-default-pool-71e0e920-fnlg:
 container "runtime": expected RSS memory (MB) < 367001600; got 373673984
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/kubelet_perf.go:155
from junit_01.xml
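
The failing assertion compares each container's observed RSS against a fixed per-container byte limit; despite the "(MB)" label in the message, the values appear to be bytes (367001600 is exactly 350 MiB). A minimal sketch of that kind of check, with illustrative names and types rather than the real kubelet_perf.go code:

package main

import (
	"fmt"
	"strings"
)

// verifyMemoryLimits returns an error listing every container whose
// observed RSS exceeds its configured limit, mirroring the shape of
// the "expected RSS memory (MB) < limit; got actual" failure above.
// The map types here are stand-ins, not the e2e framework's types.
func verifyMemoryLimits(limits, usage map[string]uint64) error {
	var errs []string
	for container, limit := range limits {
		got, ok := usage[container]
		if !ok {
			errs = append(errs, fmt.Sprintf("missing usage for container %q", container))
			continue
		}
		if got > limit {
			errs = append(errs, fmt.Sprintf(
				"container %q: expected RSS memory (MB) < %d; got %d", container, limit, got))
		}
	}
	if len(errs) > 0 {
		return fmt.Errorf("Memory usage exceeding limits:\n%s", strings.Join(errs, "\n"))
	}
	return nil
}

func main() {
	limits := map[string]uint64{"runtime": 350 * 1024 * 1024} // 367001600 bytes
	usage := map[string]uint64{"runtime": 373673984}          // value observed in this run
	if err := verifyMemoryLimits(limits, usage); err != nil {
		fmt.Println(err)
	}
}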



[sig-apps] DaemonRestart [Disruptive] Kubelet should not restart containers across restart 11m11s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=\[sig\-apps\]\sDaemonRestart\s\[Disruptive\]\sKubelet\sshould\snot\srestart\scontainers\sacross\srestart$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:148
Jul 20 00:15:36.369: Couldn't delete ns: "e2e-tests-daemonrestart-shskd": namespace e2e-tests-daemonrestart-shskd was not deleted with limit: timed out waiting for the condition, pods remaining: 1 (&errors.errorString{s:"namespace e2e-tests-daemonrestart-shskd was not deleted with limit: timed out waiting for the condition, pods remaining: 1"})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:343
from junit_01.xml
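
This failure, like the remaining ones below, is a namespace stuck in Terminating past the framework's deletion timeout (here with one pod still remaining). A minimal sketch of a poll-until-deleted loop with a deadline, using only the standard library; the real framework uses client-go against the API server and apimachinery's wait helpers:

package main

import (
	"errors"
	"fmt"
	"time"
)

// errTimedOut mirrors the "timed out waiting for the condition" text.
var errTimedOut = errors.New("timed out waiting for the condition")

// waitForDeleted polls exists() every interval until it reports false
// or the timeout elapses. exists is a stand-in for a live API lookup
// of the namespace.
func waitForDeleted(exists func() (bool, error), interval, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		ok, err := exists()
		if err != nil {
			return err
		}
		if !ok {
			return nil // namespace is gone
		}
		time.Sleep(interval)
	}
	return errTimedOut
}

func main() {
	// Simulate a namespace that never finishes terminating because one
	// pod remains, as with e2e-tests-daemonrestart-shskd above.
	podsRemaining := 1
	err := waitForDeleted(func() (bool, error) { return podsRemaining > 0, nil },
		100*time.Millisecond, time.Second)
	if err != nil {
		fmt.Printf("namespace was not deleted with limit: %v, pods remaining: %d\n",
			err, podsRemaining)
	}
}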



[sig-network] DNS configMap nameserver Forward PTR lookup should forward PTR records lookup to upstream nameserver [Slow][Serial] 2m22s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=\[sig\-network\]\sDNS\sconfigMap\snameserver\sForward\sPTR\slookup\sshould\sforward\sPTR\srecords\slookup\sto\supstream\snameserver\s\[Slow\]\[Serial\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_configmap.go:419
Jul 20 04:00:04.564: dig result did not match: []string{"dns.google."} after 2m0s
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_common.go:102
from junit_01.xml
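
The test performs a reverse (PTR) lookup through the cluster's forwarding nameserver and compares the answer set against an expectation; in this run the dig output never matched "dns.google." within the two-minute retry window. A standalone sketch of the same check using the Go standard library resolver; the real test shells out to dig inside a pod, and the 8.8.8.8 target here is an assumption for illustration:

package main

import (
	"fmt"
	"net"
)

func main() {
	// A PTR lookup for 8.8.8.8 should yield "dns.google." when
	// upstream forwarding works.
	names, err := net.LookupAddr("8.8.8.8")
	if err != nil {
		fmt.Println("PTR lookup failed:", err)
		return
	}
	want := "dns.google."
	for _, n := range names {
		if n == want {
			fmt.Println("PTR record matched:", n)
			return
		}
	}
	fmt.Printf("dig result did not match: %q, got %v\n", want, names)
}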



[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] 1h0m

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=\[sig\-scheduling\]\sSchedulerPredicates\s\[Serial\]\svalidates\sthat\sNodeSelector\sis\srespected\sif\snot\smatching\s\s\[Conformance\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:78
Expected error:
    <*errors.errorString | 0xc4207926c0>: {
        s: "Waiting for terminating namespaces to be deleted timed out",
    }
    Waiting for terminating namespaces to be deleted timed out
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:87
from junit_01.xml
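
The "Expected error ... not to have occurred" framing in this and the two scheduling failures below is Gomega's rendering of a failed NotTo(HaveOccurred()) assertion; the underlying error is the same namespace-termination timeout each time. A minimal sketch of that assertion shape (real Gomega API, hypothetical test body that fails deliberately to reproduce the message format):

package scheduling_test

import (
	"errors"
	"testing"

	"github.com/onsi/gomega"
)

// TestAssertNoError fails on purpose: any non-nil error passed to
// NotTo(HaveOccurred()) is reported as "Expected error: ... not to
// have occurred", printing the error's type, address, and message,
// exactly the format seen in the failures above and below.
func TestAssertNoError(t *testing.T) {
	g := gomega.NewWithT(t)
	err := errors.New("Waiting for terminating namespaces to be deleted timed out")
	g.Expect(err).NotTo(gomega.HaveOccurred())
}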



[sig-scheduling] SchedulerPreemption [Serial] validates pod anti-affinity works in preemption 1h1m

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=\[sig\-scheduling\]\sSchedulerPreemption\s\[Serial\]\svalidates\spod\santi\-affinity\sworks\sin\spreemption$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:70
Expected error:
    <*errors.errorString | 0xc420fc4040>: {
        s: "Waiting for terminating namespaces to be deleted timed out",
    }
    Waiting for terminating namespaces to be deleted timed out
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:83
from junit_01.xml



[sig-scheduling] SchedulerPriorities [Serial] Pod should avoid to schedule to node that have avoidPod annotation 1h1m

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=\[sig\-scheduling\]\sSchedulerPriorities\s\[Serial\]\sPod\sshould\savoid\sto\sschedule\sto\snode\sthat\shave\savoidPod\sannotation$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:70
Expected error:
    <*errors.errorString | 0xc420a521b0>: {
        s: "Waiting for terminating namespaces to be deleted timed out",
    }
    Waiting for terminating namespaces to be deleted timed out
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:79
from junit_01.xml



Passed tests: 105
Skipped tests: 1919