Result: FAILURE
Tests: 5 failed / 19 succeeded
Started: 2019-07-10 11:01
Elapsed: 50m27s
Revision:
Builder: gke-prow-ssd-pool-1a225945-z5n1
pod: e6d369d3-a301-11e9-8035-fa22f53aaf37
resultstore: https://source.cloud.google.com/results/invocations/18f19c7c-02bd-4465-9bd6-2ffd6e9ddf62/targets/test
infra-commit: 0e3897d9c
job-version: v1.16.0-alpha.0.2053+a29243775a5b82
master_os_image: cos-beta-73-11647-64-0
node_os_image: gke-1134-gke-rc5-cos-69-10895-138-0-v190320-pre-nvda-gpu
revision: v1.16.0-alpha.0.2053+a29243775a5b82

Test Failures


GPU cluster downgrade nvidia-gpu-upgrade [sig-node] [sig-scheduling] 21m10s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=GPU\scluster\sdowngrade\snvidia\-gpu\-upgrade\s\[sig\-node\]\s\[sig\-scheduling\]$'
wait for pod "cuda-add-95zvg" to success
Expected success, but got an error:
    <*errors.errorString | 0xc002363ac0>: {
        s: "pod \"cuda-add-95zvg\" failed with reason: \"UnexpectedAdmissionError\", message: \"Pod Update plugin resources failed due to requested number of devices unavailable for nvidia.com/gpu. Requested: 1, Available: 0, which is unexpected.\"",
    }
    pod "cuda-add-95zvg" failed with reason: "UnexpectedAdmissionError", message: "Pod Update plugin resources failed due to requested number of devices unavailable for nvidia.com/gpu. Requested: 1, Available: 0, which is unexpected."

k8s.io/kubernetes/test/e2e/framework.(*PodClient).WaitForSuccess(0xc001faef00, 0xc00239c830, 0xe, 0x45d964b800)
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:229 +0x265
k8s.io/kubernetes/test/e2e/upgrades.(*NvidiaGPUUpgradeTest).verifyJobPodSuccess(0x8a14c20, 0xc000935a40)
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/upgrades/nvidia-gpu.go:107 +0x3e5
k8s.io/kubernetes/test/e2e/upgrades.(*NvidiaGPUUpgradeTest).Test(0x8a14c20, 0xc000935a40, 0xc0027c3a40, 0x2)
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/upgrades/nvidia-gpu.go:53 +0x91
k8s.io/kubernetes/test/e2e/lifecycle.(*chaosMonkeyAdapter).Test(0xc0013b9940, 0xc0023d6480)
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/lifecycle/cluster_upgrade.go:454 +0x309
k8s.io/kubernetes/test/e2e/chaosmonkey.(*chaosmonkey).Do.func1(0xc0023d6480, 0xc0023fa220)
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/chaosmonkey/chaosmonkey.go:89 +0x76
created by k8s.io/kubernetes/test/e2e/chaosmonkey.(*chaosmonkey).Do
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/chaosmonkey/chaosmonkey.go:86 +0xa7
				from junit_upgradeupgrades.xml

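For context on the admission error: the cuda-add workload asks the kubelet for one GPU via the nvidia.com/gpu extended resource, and the kubelet's device manager rejects the pod when no device is currently registered as allocatable (as can happen while the device plugin re-registers during a node downgrade). Below is a minimal sketch of such a pod built with the k8s.io/api types; the image and names are illustrative assumptions, not the test's actual manifest.

package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// gpuPod builds a pod that requests one nvidia.com/gpu. The scheduler
// places it on a node advertising the resource, but the kubelet checks
// again at admission time; if the device plugin has no GPU registered
// at that moment, the pod fails with UnexpectedAdmissionError
// (Requested: 1, Available: 0), as in the log above.
func gpuPod() *v1.Pod {
	return &v1.Pod{
		ObjectMeta: metav1.ObjectMeta{GenerateName: "cuda-add-"},
		Spec: v1.PodSpec{
			RestartPolicy: v1.RestartPolicyNever,
			Containers: []v1.Container{{
				Name:  "cuda-add",
				Image: "nvidia/samples:vectoradd-cuda10.2", // illustrative image
				Resources: v1.ResourceRequirements{
					Limits: v1.ResourceList{
						"nvidia.com/gpu": resource.MustParse("1"),
					},
				},
			}},
		},
	}
}

func main() {
	fmt.Printf("limits: %v\n", gpuPod().Spec.Containers[0].Resources.Limits)
}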


Kubernetes e2e suite BeforeSuite 10m9s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\sBeforeSuite$'
_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:72
Jul 10 11:40:30.362: Error waiting for all pods to be running and ready: 1 / 27 pods in namespace "kube-system" are NOT in RUNNING and READY state in 10m0s
POD                                    NODE                            PHASE   GRACE CONDITIONS
event-exporter-v0.2.4-65d8d98768-2cn6m bootstrap-e2e-minion-group-q4wc Running       [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-07-10 11:28:09 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-07-10 11:31:30 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [event-exporter prometheus-to-sd-exporter]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-07-10 11:31:30 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [event-exporter prometheus-to-sd-exporter]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-07-10 11:28:09 +0000 UTC Reason: Message:}]

_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:126
				from junit_01.xml

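The failing check here is the suite-wide wait for kube-system pods to be running and ready. A rough client-go equivalent is sketched below, assuming an already-constructed clientset and using the 10m0s budget from the log; this is simplified relative to the framework's actual check, which also counts pods and tolerates a configurable number of not-ready pods.

package readycheck

import (
	"context"
	"time"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitKubeSystemReady polls until every pod in kube-system reports the
// Ready condition, or the 10-minute budget expires.
func waitKubeSystemReady(ctx context.Context, c kubernetes.Interface) error {
	return wait.PollImmediate(10*time.Second, 10*time.Minute, func() (bool, error) {
		pods, err := c.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{})
		if err != nil {
			return false, nil // treat transient list errors as "not ready yet"
		}
		for i := range pods.Items {
			if !isPodReady(&pods.Items[i]) {
				return false, nil
			}
		}
		return true, nil
	})
}

// isPodReady reports whether the pod's Ready condition is True -- the
// condition that event-exporter-v0.2.4-65d8d98768-2cn6m is stuck at
// False on in the dump above (Reason: ContainersNotReady).
func isPodReady(p *v1.Pod) bool {
	for _, cond := range p.Status.Conditions {
		if cond.Type == v1.PodReady {
			return cond.Status == v1.ConditionTrue
		}
	}
	return false
}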


Test 10m12s

error during ./hack/ginkgo-e2e.sh --kubectl-path=../../../../kubernetes_skew/cluster/kubectl.sh --minStartupPods=8 --ginkgo.skip=\[.+\]|Initializers|Dashboard --report-dir=/workspace/_artifacts --disable-log-dump=true: exit status 1
				from junit_runner.xml



UpgradeTest 22m25s

error during kubetest --test --test_args=--ginkgo.focus=\[Feature:GPUClusterDowngrade\] --upgrade-target=ci/k8s-stable1 --upgrade-image=gci --report-dir=/workspace/_artifacts --disable-log-dump=true --report-prefix=upgrade --check-version-skew=false: exit status 1
				from junit_runner.xml



[sig-cluster-lifecycle] gpu Upgrade [Feature:GPUUpgrade] cluster downgrade should be able to run gpu pod after downgrade [Feature:GPUClusterDowngrade] 22m13s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=\[sig\-cluster\-lifecycle\]\sgpu\sUpgrade\s\[Feature\:GPUUpgrade\]\scluster\sdowngrade\sshould\sbe\sable\sto\srun\sgpu\spod\safter\sdowngrade\s\[Feature\:GPUClusterDowngrade\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/lifecycle/cluster_upgrade.go:315
wait for pod "cuda-add-95zvg" to success
Expected success, but got an error:
    <*errors.errorString | 0xc002363ac0>: {
        s: "pod \"cuda-add-95zvg\" failed with reason: \"UnexpectedAdmissionError\", message: \"Pod Update plugin resources failed due to requested number of devices unavailable for nvidia.com/gpu. Requested: 1, Available: 0, which is unexpected.\"",
    }
    pod "cuda-add-95zvg" failed with reason: "UnexpectedAdmissionError", message: "Pod Update plugin resources failed due to requested number of devices unavailable for nvidia.com/gpu. Requested: 1, Available: 0, which is unexpected."
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:229
				
				from junit_upgrade01.xml

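This is the same cuda-add-95zvg failure as the first entry, surfacing through the [Feature:GPUClusterDowngrade] wrapper. Conceptually, the WaitForSuccess call at pods.go:229 polls the pod phase and fails fast on Failed; a simplified sketch under the same assumptions as above (not the framework's actual implementation):

package podwait

import (
	"context"
	"fmt"
	"time"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForPodSuccess polls the pod phase until it reaches Succeeded.
// A Failed phase returns an error immediately, which is why the
// UnexpectedAdmissionError above is reported as a failure rather than
// as a timeout after the full wait budget.
func waitForPodSuccess(ctx context.Context, c kubernetes.Interface, ns, name string, timeout time.Duration) error {
	return wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
		pod, err := c.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return false, nil // retry on transient errors
		}
		switch pod.Status.Phase {
		case v1.PodSucceeded:
			return true, nil
		case v1.PodFailed:
			return false, fmt.Errorf("pod %q failed with reason: %q, message: %q",
				name, pod.Status.Reason, pod.Status.Message)
		}
		return false, nil
	})
}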


19 Passed Tests

3585 Skipped Tests