Result: FAILURE
Tests: 8 failed / 45 succeeded
Started: 2019-11-18 10:17
Elapsed: 4h27m
Revision: v1.15.7-beta.0.1+54260e2be0c03f
Builder: gke-prow-ssd-pool-1a225945-8w66
Pod: a41114a2-09ec-11ea-be88-5a2ed842773b
Resultstore: https://source.cloud.google.com/results/invocations/a5453323-3d1c-45e0-9548-91de4cbf18d4/targets/test
infra-commit: 5d65c2d6f
job-version: v1.15.7-beta.0.1+54260e2be0c03f
repo: k8s.io/kubernetes (branch release-1.15)
repo-commit: 54260e2be0c03f933e0fe42cc8e5fcd22a04bc39

Test Failures


E2eNode Suite [k8s.io] GarbageCollect [Serial][NodeFeature:GarbageCollect] Garbage Collection Test: Many Restarting Containers Should eventually garbage collect containers when we exceed the number of dead containers per container 15m8s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=E2eNode\sSuite\s\[k8s\.io\]\sGarbageCollect\s\[Serial\]\[NodeFeature\:GarbageCollect\]\sGarbage\sCollection\sTest\:\sMany\sRestarting\sContainers\sShould\seventually\sgarbage\scollect\scontainers\swhen\swe\sexceed\sthe\snumber\sof\sdead\scontainers\sper\scontainer$'
_output/local/go/src/k8s.io/kubernetes/test/e2e_node/garbage_collector_test.go:170
Unexpected error:
    <*errors.errorString | 0xc000216500>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:113
				
From junit_ubuntu-gke-1804-d1809-0-v20191113_01.xml


E2eNode Suite [k8s.io] LocalStorageCapacityIsolationEviction [Slow] [Serial] [Disruptive] [Feature:LocalStorageCapacityIsolation][NodeFeature:Eviction] when we run containers that should cause evictions due to pod local storage violations should eventually evict all of the correct pods 6m39s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=E2eNode\sSuite\s\[k8s\.io\]\sLocalStorageCapacityIsolationEviction\s\[Slow\]\s\[Serial\]\s\[Disruptive\]\s\[Feature\:LocalStorageCapacityIsolation\]\[NodeFeature\:Eviction\]\swhen\swe\srun\scontainers\sthat\sshould\scause\sevictions\sdue\sto\spod\slocal\sstorage\sviolations\s\sshould\seventually\sevict\sall\sof\sthe\scorrect\spods$'
_output/local/go/src/k8s.io/kubernetes/test/e2e_node/eviction_test.go:455
Unexpected error:
    <*errors.errorString | 0xc000216500>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:113
				
From junit_ubuntu-gke-1804-d1809-0-v20191113_01.xml


E2eNode Suite [k8s.io] LocalStorageCapacityIsolationQuotaMonitoring [Slow] [Serial] [Disruptive] [Feature:LocalStorageCapacityIsolationQuota][NodeFeature:LSCIQuotaMonitoring] when we run containers that should cause use quotas for LSCI monitoring (quotas enabled: false) should eventually evict all of the correct pods 6m20s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=E2eNode\sSuite\s\[k8s\.io\]\sLocalStorageCapacityIsolationQuotaMonitoring\s\[Slow\]\s\[Serial\]\s\[Disruptive\]\s\[Feature\:LocalStorageCapacityIsolationQuota\]\[NodeFeature\:LSCIQuotaMonitoring\]\swhen\swe\srun\scontainers\sthat\sshould\scause\suse\squotas\sfor\sLSCI\smonitoring\s\(quotas\senabled\:\sfalse\)\s\sshould\seventually\sevict\sall\sof\sthe\scorrect\spods$'
_output/local/go/src/k8s.io/kubernetes/test/e2e_node/eviction_test.go:455
Unexpected error:
    <*errors.errorString | 0xc000216500>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:113
				
From junit_ubuntu-gke-1804-d1809-0-v20191113_01.xml


E2eNode Suite [k8s.io] MemoryAllocatableEviction [Slow] [Serial] [Disruptive][NodeFeature:Eviction] when we run containers that should cause MemoryPressure should eventually evict all of the correct pods 6m17s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=E2eNode\sSuite\s\[k8s\.io\]\sMemoryAllocatableEviction\s\[Slow\]\s\[Serial\]\s\[Disruptive\]\[NodeFeature\:Eviction\]\swhen\swe\srun\scontainers\sthat\sshould\scause\sMemoryPressure\s\sshould\seventually\sevict\sall\sof\sthe\scorrect\spods$'
_output/local/go/src/k8s.io/kubernetes/test/e2e_node/eviction_test.go:455
Unexpected error:
    <*errors.errorString | 0xc000216500>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:113
				
From junit_ubuntu-gke-1804-d1809-0-v20191113_01.xml


E2eNode Suite [k8s.io] PriorityLocalStorageEvictionOrdering [Slow] [Serial] [Disruptive][NodeFeature:Eviction] when we run containers that should cause DiskPressure should eventually evict all of the correct pods 5m2s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=E2eNode\sSuite\s\[k8s\.io\]\sPriorityLocalStorageEvictionOrdering\s\[Slow\]\s\[Serial\]\s\[Disruptive\]\[NodeFeature\:Eviction\]\swhen\swe\srun\scontainers\sthat\sshould\scause\sDiskPressure\s\sshould\seventually\sevict\sall\sof\sthe\scorrect\spods$'
_output/local/go/src/k8s.io/kubernetes/test/e2e_node/eviction_test.go:523
Unexpected error:
    <*errors.errorString | 0xc0006571c0>: {
        s: "pod ran to completion",
    }
    pod ran to completion
occurred
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:113
				
From junit_ubuntu-gke-1804-d1809-0-v20191113_01.xml


E2eNode Suite [sig-node] HugePages [Serial] [Feature:HugePages][NodeFeature:HugePages] With config updated with hugepages feature enabled should assign hugepages as expected based on the Pod spec 5m26s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=E2eNode\sSuite\s\[sig\-node\]\sHugePages\s\[Serial\]\s\[Feature\:HugePages\]\[NodeFeature\:HugePages\]\sWith\sconfig\supdated\swith\shugepages\sfeature\senabled\sshould\sassign\shugepages\sas\sexpected\sbased\son\sthe\sPod\sspec$'
_output/local/go/src/k8s.io/kubernetes/test/e2e_node/hugepages_test.go:163
Unexpected error:
    <*errors.errorString | 0xc001376d10>: {
        s: "Gave up after waiting 5m0s for pod \"pod0998c95c-315a-412a-b935-1b314f2c76d6\" to be \"success or failure\"",
    }
    Gave up after waiting 5m0s for pod "pod0998c95c-315a-412a-b935-1b314f2c76d6" to be "success or failure"
occurred
_output/local/go/src/k8s.io/kubernetes/test/e2e_node/hugepages_test.go:191
				
From junit_ubuntu-gke-1804-d1809-0-v20191113_01.xml


E2eNode Suite [sig-node] PodPidsLimit [Serial] [Feature:SupportPodPidsLimit][NodeFeature:SupportPodPidsLimit] With config updated with pids feature enabled should set pids.max for Pod 5m37s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=E2eNode\sSuite\s\[sig\-node\]\sPodPidsLimit\s\[Serial\]\s\[Feature\:SupportPodPidsLimit\]\[NodeFeature\:SupportPodPidsLimit\]\sWith\sconfig\supdated\swith\spids\sfeature\senabled\sshould\sset\spids\.max\sfor\sPod$'
_output/local/go/src/k8s.io/kubernetes/test/e2e_node/pids_test.go:107
Unexpected error:
    <*errors.errorString | 0xc0007c5220>: {
        s: "Gave up after waiting 5m0s for pod \"pod1dc17df6-0b6c-4ed2-9ad3-900a8c52d627\" to be \"success or failure\"",
    }
    Gave up after waiting 5m0s for pod "pod1dc17df6-0b6c-4ed2-9ad3-900a8c52d627" to be "success or failure"
occurred
_output/local/go/src/k8s.io/kubernetes/test/e2e_node/pids_test.go:134
				
From junit_ubuntu-gke-1804-d1809-0-v20191113_01.xml


Node Tests 4h25m

error during go run /go/src/k8s.io/kubernetes/test/e2e_node/runner/remote/run_remote.go --cleanup --logtostderr --vmodule=*=4 --ssh-env=gce --results-dir=/workspace/_artifacts --project=ubuntu-image-validation --zone=us-west1-b --ssh-user=prow --ssh-key=/workspace/.ssh/google_compute_engine --ginkgo-flags=--nodes=1 --focus="\[Serial\]" --skip="\[Flaky\]|\[Benchmark\]|\[NodeAlphaFeature:.+\]" --test_args=--feature-gates=DynamicKubeletConfig=true --test-timeout=5h0m0s --images=ubuntu-gke-1804-d1809-0-v20191113 --image-project=ubuntu-os-gke-cloud-devel: exit status 1
From junit_runner.xml


45 Passed Tests

264 Skipped Tests