Result: FAILURE
Tests: 3 failed / 20 succeeded
Started: 2019-07-09 22:01
Elapsed: 43m2s
Revision:
Builder: gke-prow-ssd-pool-1a225945-ftg2
pod: efb040c5-a294-11e9-8035-fa22f53aaf37
resultstore: https://source.cloud.google.com/results/invocations/beb9c050-b5a4-4873-946d-d914f280d0d2/targets/test
infra-commit: 5a60bc453
job-version: v1.15.1-beta.0.43+7e199be93b39d9
master_os_image: cos-beta-73-11647-64-0
node_os_image: gke-1134-gke-rc5-cos-69-10895-138-0-v190320-pre-nvda-gpu
revision: v1.15.1-beta.0.43+7e199be93b39d9

Test Failures


GPU cluster downgrade nvidia-gpu-upgrade [sig-node] [sig-scheduling] 21m34s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=GPU\scluster\sdowngrade\snvidia\-gpu\-upgrade\s\[sig\-node\]\s\[sig\-scheduling\]$'
wait for pod "cuda-add-48h4t" to success
Expected success, but got an error:
    <*errors.errorString | 0xc0009a12b0>: {
        s: "pod \"cuda-add-48h4t\" failed with reason: \"UnexpectedAdmissionError\", message: \"Pod Update plugin resources failed due to requested number of devices unavailable for nvidia.com/gpu. Requested: 1, Available: 0, which is unexpected.\"",
    }
    pod "cuda-add-48h4t" failed with reason: "UnexpectedAdmissionError", message: "Pod Update plugin resources failed due to requested number of devices unavailable for nvidia.com/gpu. Requested: 1, Available: 0, which is unexpected."

k8s.io/kubernetes/test/e2e/framework.(*PodClient).WaitForSuccess(0xc001545800, 0xc0009018c0, 0xe, 0x45d964b800)
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:229 +0x265
k8s.io/kubernetes/test/e2e/upgrades.(*NvidiaGPUUpgradeTest).verifyJobPodSuccess(0x8a14c20, 0xc000c4c140)
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/upgrades/nvidia-gpu.go:107 +0x3e5
k8s.io/kubernetes/test/e2e/upgrades.(*NvidiaGPUUpgradeTest).Test(0x8a14c20, 0xc000c4c140, 0xc001d37800, 0x2)
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/upgrades/nvidia-gpu.go:53 +0x91
k8s.io/kubernetes/test/e2e/lifecycle.(*chaosMonkeyAdapter).Test(0xc00194d5c0, 0xc001e8f7e0)
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/lifecycle/cluster_upgrade.go:454 +0x309
k8s.io/kubernetes/test/e2e/chaosmonkey.(*chaosmonkey).Do.func1(0xc001e8f7e0, 0xc001ee2c70)
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/chaosmonkey/chaosmonkey.go:89 +0x76
created by k8s.io/kubernetes/test/e2e/chaosmonkey.(*chaosmonkey).Do
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/chaosmonkey/chaosmonkey.go:86 +0xa7
				from junit_upgradeupgrades.xml

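For context, here is a minimal Go sketch of the kind of pod this test schedules: a single container whose resource limits request one nvidia.com/gpu. The UnexpectedAdmissionError above means the kubelet's device plugin reported zero allocatable nvidia.com/gpu devices when such a request was admitted after the downgrade. The pod name, container name, and image below are illustrative assumptions, not taken from the test source.

// Illustrative sketch only: a pod requesting one nvidia.com/gpu, similar in
// shape to the cuda-add pods created by the upgrade test. Names and image are
// assumptions, not copied from the e2e test.
package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func cudaAddPod() *v1.Pod {
	return &v1.Pod{
		ObjectMeta: metav1.ObjectMeta{GenerateName: "cuda-add-"},
		Spec: v1.PodSpec{
			RestartPolicy: v1.RestartPolicyNever,
			Containers: []v1.Container{{
				Name:  "vector-add",
				Image: "nvidia/samples:vectoradd-cuda10.2", // assumed image
				Resources: v1.ResourceRequirements{
					Limits: v1.ResourceList{
						// This is the request the admission error refers to:
						// Requested: 1, Available: 0.
						"nvidia.com/gpu": resource.MustParse("1"),
					},
				},
			}},
		},
	}
}

func main() {
	p := cudaAddPod()
	q := p.Spec.Containers[0].Resources.Limits["nvidia.com/gpu"]
	fmt.Printf("pod %s* requests %s nvidia.com/gpu\n", p.GenerateName, q.String())
}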


UpgradeTest 23m44s

error during kubetest --test --test_args=--ginkgo.focus=\[Feature:GPUClusterDowngrade\] --upgrade-target=ci/k8s-stable1 --upgrade-image=gci --report-dir=/workspace/_artifacts --disable-log-dump=true --report-prefix=upgrade --check-version-skew=false: exit status 1
				from junit_runner.xml



[sig-cluster-lifecycle] gpu Upgrade [Feature:GPUUpgrade] cluster downgrade should be able to run gpu pod after downgrade [Feature:GPUClusterDowngrade] 23m3s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=\[sig\-cluster\-lifecycle\]\sgpu\sUpgrade\s\[Feature\:GPUUpgrade\]\scluster\sdowngrade\sshould\sbe\sable\sto\srun\sgpu\spod\safter\sdowngrade\s\[Feature\:GPUClusterDowngrade\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/lifecycle/cluster_upgrade.go:315
wait for pod "cuda-add-48h4t" to success
Expected success, but got an error:
    <*errors.errorString | 0xc0009a12b0>: {
        s: "pod \"cuda-add-48h4t\" failed with reason: \"UnexpectedAdmissionError\", message: \"Pod Update plugin resources failed due to requested number of devices unavailable for nvidia.com/gpu. Requested: 1, Available: 0, which is unexpected.\"",
    }
    pod "cuda-add-48h4t" failed with reason: "UnexpectedAdmissionError", message: "Pod Update plugin resources failed due to requested number of devices unavailable for nvidia.com/gpu. Requested: 1, Available: 0, which is unexpected."
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:229
				
				from junit_upgrade01.xml

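The "wait for pod ... to success" text comes from the framework's pod wait helper (pods.go:229 in the trace above). As a rough, assumed sketch of what such a helper does, and of why a PodFailed phase surfaces as the quoted reason/message pair, here is a simplified version written against client-go; it is not the e2e framework's actual implementation.

// Simplified, assumed sketch of a WaitForSuccess-style helper; not the actual
// k8s.io/kubernetes/test/e2e/framework code.
package podwait

import (
	"context"
	"fmt"
	"time"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// WaitForPodSuccess polls the pod until it reaches PodSucceeded, and returns
// an error if it reaches PodFailed first. The failure error carries the pod's
// status reason and message, which is how "UnexpectedAdmissionError" ends up
// in the test output above.
func WaitForPodSuccess(c kubernetes.Interface, ns, name string, timeout time.Duration) error {
	return wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
		pod, err := c.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		switch pod.Status.Phase {
		case v1.PodSucceeded:
			return true, nil
		case v1.PodFailed:
			return false, fmt.Errorf("pod %q failed with reason: %q, message: %q",
				name, pod.Status.Reason, pod.Status.Message)
		default:
			return false, nil
		}
	})
}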


Passed tests: 20 (not shown)

Skipped tests: 7996 (not shown)