Result: FAILURE
Tests: 9 failed / 43 succeeded
Started: 2019-07-16 23:15
Elapsed: 4h6m
Builder: gke-prow-ssd-pool-1a225945-577q
Pod: 80bf127c-a81f-11e9-a9b1-1a2b998c5cd0
Resultstore: https://source.cloud.google.com/results/invocations/594b0c9f-aaa9-42de-bf92-2c3b39f7ba6e/targets/test
Infra-commit: 2c53c2d7e
Job-version: v1.13.9-beta.0.1+e1be42a61576dc
Repo: k8s.io/kubernetes (branch release-1.13)
Repo-commit: e1be42a61576dcb85ea6b56de18fd627e4c55bfc
Revision: v1.13.9-beta.0.1+e1be42a61576dc

Test Failures


Node Tests 4h5m

error during go run /go/src/k8s.io/kubernetes/test/e2e_node/runner/remote/run_remote.go --cleanup --logtostderr --vmodule=*=4 --ssh-env=gce --results-dir=/workspace/_artifacts --project=ubuntu-image-validation --zone=us-west1-b --ssh-user=prow --ssh-key=/workspace/.ssh/google_compute_engine --ginkgo-flags=--nodes=1 --focus="\[Serial\]" --skip="\[Flaky\]|\[Benchmark\]|\[NodeAlphaFeature:.+\]" --test_args=--feature-gates=DynamicKubeletConfig=true --test-timeout=5h0m0s --images=ubuntu-gke-1804-d1809-0-v20190715-test --image-project=ubuntu-os-gke-cloud-devel: exit status 1
				from junit_runner.xml
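For reference, the --focus and --skip values handed to ginkgo above are regular expressions matched against each spec's full description; a spec runs only if it matches the focus pattern and not the skip pattern. A minimal sketch of that selection logic in Go (spec strings abbreviated, one hypothetical [Benchmark] spec added to show the skip; this is illustrative, not the runner's own code):

package main

import (
	"fmt"
	"regexp"
)

func main() {
	// The exact patterns from the run_remote.go invocation above.
	focus := regexp.MustCompile(`\[Serial\]`)
	skip := regexp.MustCompile(`\[Flaky\]|\[Benchmark\]|\[NodeAlphaFeature:.+\]`)

	specs := []string{
		"[k8s.io] Container Manager Misc [Serial] Validate OOM score adjustments",
		"[sig-node] Node Performance Testing [Serial] [Slow] run each pre-defined workload",
		"[k8s.io] Hypothetical Resource Tracking [Serial] [Benchmark]", // skipped
	}
	for _, s := range specs {
		run := focus.MatchString(s) && !skip.MatchString(s)
		fmt.Printf("run=%-5v %s\n", run, s)
	}
}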



[k8s.io] Container Manager Misc [Serial] Validate OOM score adjustments [NodeFeature:OOMScoreAdj] once the node is setup pod infra containers oom-score-adj should be -998 and best effort container's should be 1000 2m22s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Container\sManager\sMisc\s\[Serial\]\sValidate\sOOM\sscore\sadjustments\s\[NodeFeature\:OOMScoreAdj\]\sonce\sthe\snode\sis\ssetup\s\spod\sinfra\scontainers\soom\-score\-adj\sshould\sbe\s\-998\sand\sbest\seffort\scontainer\'s\sshould\sbe\s1000$'
_output/local/go/src/k8s.io/kubernetes/test/e2e_node/container_manager_test.go:98
Timed out after 120.000s.
Expected
    <*errors.errorString | 0xc00034b800>: {
        s: "expected only one serve_hostname process; found 0",
    }
to be nil
_output/local/go/src/k8s.io/kubernetes/test/e2e_node/container_manager_test.go:150
				
from junit_ubuntu-gke-1804-d1809-0-v20190715-test_01.xml
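This spec asserts kernel OOM score adjustments via /proc, and the failure means its scan found no serve_hostname process within the 120s window. A minimal sketch of that kind of check, assuming the standard /proc/<pid>/comm and /proc/<pid>/oom_score_adj files; this is not the test's own code:

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// oomScoreAdjFor scans /proc for processes named comm and returns their
// oom_score_adj values. The failure above corresponds to this kind of scan
// returning zero entries for "serve_hostname".
func oomScoreAdjFor(comm string) ([]string, error) {
	dirs, err := filepath.Glob("/proc/[0-9]*")
	if err != nil {
		return nil, err
	}
	var scores []string
	for _, dir := range dirs {
		name, err := os.ReadFile(filepath.Join(dir, "comm"))
		if err != nil || strings.TrimSpace(string(name)) != comm {
			continue // not the process we want, or it exited mid-scan
		}
		adj, err := os.ReadFile(filepath.Join(dir, "oom_score_adj"))
		if err != nil {
			continue
		}
		scores = append(scores, strings.TrimSpace(string(adj)))
	}
	return scores, nil
}

func main() {
	scores, err := oomScoreAdjFor("serve_hostname")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	// The spec expects exactly one process here, with oom_score_adj 1000
	// for a best-effort container (-998 for pod infra containers).
	fmt.Println(scores)
}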



[k8s.io] Device Plugin [Feature:DevicePluginProbe][NodeFeature:DevicePluginProbe][Serial] DevicePlugin Verifies the Kubelet device plugin functionality. 1m6s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Device\sPlugin\s\[Feature\:DevicePluginProbe\]\[NodeFeature\:DevicePluginProbe\]\[Serial\]\sDevicePlugin\sVerifies\sthe\sKubelet\sdevice\splugin\sfunctionality\.$'
_output/local/go/src/k8s.io/kubernetes/test/e2e_node/device_plugin.go:69
Timed out after 30.001s.
Expected
    <bool>: false
to be true
_output/local/go/src/k8s.io/kubernetes/test/e2e_node/device_plugin.go:96
				
from junit_ubuntu-gke-1804-d1809-0-v20190715-test_01.xml
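The "Timed out after 30.001s ... Expected <bool>: false to be true" shape is Gomega's Eventually polling a boolean condition for 30s and reporting the last value on timeout. A minimal sketch under that assumption, where devicePluginReady is a hypothetical stand-in for the check at device_plugin.go:96:

package main

import (
	"time"

	"github.com/onsi/gomega"
)

func main() {
	// Outside a test, route Gomega failures to a panic so the timeout
	// message is still reported.
	g := gomega.NewGomega(func(message string, _ ...int) {
		panic(message)
	})

	devicePluginReady := func() bool {
		return false // hypothetical: would check the node's allocatable device count
	}

	// Polls every second for 30s; on timeout Gomega prints the last
	// mismatched value, exactly as in the failure above.
	g.Eventually(devicePluginReady, 30*time.Second, time.Second).Should(gomega.BeTrue())
}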



[k8s.io] GarbageCollect [Serial][NodeFeature:GarbageCollect] Garbage Collection Test: Many Pods with Many Restarting Containers Should eventually garbage collect containers when we exceed the number of dead containers per container 15m24s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=GarbageCollect\s\[Serial\]\[NodeFeature\:GarbageCollect\]\sGarbage\sCollection\sTest\:\sMany\sPods\swith\sMany\sRestarting\sContainers\sShould\seventually\sgarbage\scollect\scontainers\swhen\swe\sexceed\sthe\snumber\sof\sdead\scontainers\sper\scontainer$'
_output/local/go/src/k8s.io/kubernetes/test/e2e_node/garbage_collector_test.go:172
Expected error:
    <*errors.errorString | 0xc0000837f0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:110
				
from junit_ubuntu-gke-1804-d1809-0-v20190715-test_01.xml
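"timed out waiting for the condition" is the error returned by the wait.Poll helpers in k8s.io/apimachinery when a condition never becomes true before the deadline; the e2e framework's pod waits (pods.go:110 above) are built on them. A minimal sketch of where the message comes from, with a stub condition in place of the framework's pod-phase check:

package main

import (
	"fmt"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

func main() {
	// Poll every 2s for up to 10s; the real framework uses much longer
	// windows, which is why these GC specs burn ~15m before failing.
	err := wait.Poll(2*time.Second, 10*time.Second, func() (bool, error) {
		// stub: would list pods and check that dead containers were
		// garbage collected; never true here, so Poll times out
		return false, nil
	})
	fmt.Println(err) // prints: timed out waiting for the condition
}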



[k8s.io] GarbageCollect [Serial][NodeFeature:GarbageCollect] Garbage Collection Test: Many Restarting Containers Should eventually garbage collect containers when we exceed the number of dead containers per container 15m10s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=GarbageCollect\s\[Serial\]\[NodeFeature\:GarbageCollect\]\sGarbage\sCollection\sTest\:\sMany\sRestarting\sContainers\sShould\seventually\sgarbage\scollect\scontainers\swhen\swe\sexceed\sthe\snumber\sof\sdead\scontainers\sper\scontainer$'
_output/local/go/src/k8s.io/kubernetes/test/e2e_node/garbage_collector_test.go:172
Expected error:
    <*errors.errorString | 0xc0000837f0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:110
				
from junit_ubuntu-gke-1804-d1809-0-v20190715-test_01.xml



[k8s.io] LocalStorageEviction [Slow] [Serial] [Disruptive][NodeFeature:Eviction] when we run containers that should cause DiskPressure should eventually evict all of the correct pods 6m18s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=LocalStorageEviction\s\[Slow\]\s\[Serial\]\s\[Disruptive\]\[NodeFeature\:Eviction\]\swhen\swe\srun\scontainers\sthat\sshould\scause\sDiskPressure\s\sshould\seventually\sevict\sall\sof\sthe\scorrect\spods$'
_output/local/go/src/k8s.io/kubernetes/test/e2e_node/eviction_test.go:413
Expected error:
    <*errors.errorString | 0xc0000837f0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:110
				
from junit_ubuntu-gke-1804-d1809-0-v20190715-test_01.xml
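These eviction specs run pods that deliberately fill node-local storage until the kubelet reports DiskPressure and starts evicting. A hypothetical sketch of that kind of pod, with invented names, image, and sizes; this is not the test's actual pod spec:

package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// diskHogPod builds a pod whose container writes ~1GiB to its writable
// layer and then idles, so it stays Running while the kubelet observes
// rising disk usage and (eventually) sets the DiskPressure condition.
func diskHogPod() *v1.Pod {
	return &v1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "disk-hog"}, // hypothetical name
		Spec: v1.PodSpec{
			RestartPolicy: v1.RestartPolicyNever,
			Containers: []v1.Container{{
				Name:  "hog",
				Image: "busybox",
				Command: []string{"sh", "-c",
					"dd if=/dev/zero of=/file bs=1M count=1024; sleep 3600"},
			}},
		},
	}
}

func main() {
	fmt.Printf("%+v\n", diskHogPod())
}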



[k8s.io] LocalStorageSoftEviction [Slow] [Serial] [Disruptive][NodeFeature:Eviction] when we run containers that should cause DiskPressure should eventually evict all of the correct pods 5m34s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=LocalStorageSoftEviction\s\[Slow\]\s\[Serial\]\s\[Disruptive\]\[NodeFeature\:Eviction\]\swhen\swe\srun\scontainers\sthat\sshould\scause\sDiskPressure\s\sshould\seventually\sevict\sall\sof\sthe\scorrect\spods$'
_output/local/go/src/k8s.io/kubernetes/test/e2e_node/eviction_test.go:480
Expected error:
    <*errors.errorString | 0xc000562e40>: {
        s: "pod ran to completion",
    }
    pod ran to completion
not to have occurred
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:110
				
from junit_ubuntu-gke-1804-d1809-0-v20190715-test_01.xml
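"pod ran to completion" is the framework's way of saying a pod it expected to stay Running reached a terminal phase (Succeeded or Failed), so polling any longer is pointless and the wait aborts early. A simplified sketch of that check, shaped like a wait condition; this is not the framework's exact code at pods.go:110:

package main

import (
	"errors"
	"fmt"

	v1 "k8s.io/api/core/v1"
)

// podRunningCondition returns done=true when the pod is Running, keeps
// polling while it is Pending, and fails hard on a terminal phase, since
// a Succeeded/Failed pod can never become Running again.
func podRunningCondition(pod *v1.Pod) (bool, error) {
	switch pod.Status.Phase {
	case v1.PodRunning:
		return true, nil
	case v1.PodSucceeded, v1.PodFailed:
		return false, errors.New("pod ran to completion")
	default:
		return false, nil // still Pending; keep polling
	}
}

func main() {
	pod := &v1.Pod{Status: v1.PodStatus{Phase: v1.PodSucceeded}}
	_, err := podRunningCondition(pod)
	fmt.Println(err) // prints: pod ran to completion
}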



[k8s.io] PriorityLocalStorageEvictionOrdering [Slow] [Serial] [Disruptive][NodeFeature:Eviction] when we run containers that should cause DiskPressure should eventually evict all of the correct pods 4m40s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=PriorityLocalStorageEvictionOrdering\s\[Slow\]\s\[Serial\]\s\[Disruptive\]\[NodeFeature\:Eviction\]\swhen\swe\srun\scontainers\sthat\sshould\scause\sDiskPressure\s\sshould\seventually\sevict\sall\sof\sthe\scorrect\spods$'
_output/local/go/src/k8s.io/kubernetes/test/e2e_node/eviction_test.go:413
Expected error:
    <*errors.errorString | 0xc000562e40>: {
        s: "pod ran to completion",
    }
    pod ran to completion
not to have occurred
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:110
				
from junit_ubuntu-gke-1804-d1809-0-v20190715-test_01.xml



[sig-node] Node Performance Testing [Serial] [Slow] Run node performance testing with pre-defined workloads run each pre-defined workload 18s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=\[sig\-node\]\sNode\sPerformance\sTesting\s\[Serial\]\s\[Slow\]\sRun\snode\sperformance\stesting\swith\spre\-defined\sworkloads\srun\seach\spre\-defined\sworkload$'
_output/local/go/src/k8s.io/kubernetes/test/e2e_node/node_perf_test.go:61
Expected error:
    <*errors.errorString | 0xc000562e40>: {
        s: "pod ran to completion",
    }
    pod ran to completion
not to have occurred
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:110
				
from junit_ubuntu-gke-1804-d1809-0-v20190715-test_01.xml



Passed Tests: 43
Skipped Tests: 241