Result: FAILURE
Tests: 64 failed / 617 succeeded
Started: 2019-11-15 20:20
Elapsed: 2h20m
Revision: v1.16.4-beta.0.1+d70a3ca08fe72a-dirty
Builder: gke-prow-ssd-pool-1a225945-p1cl
Pod: 3e09e1f8-07e5-11ea-b8cf-7a96d45e07b5
Resultstore: https://source.cloud.google.com/results/invocations/48e9f296-3f2e-4e8f-b9a2-6355f3793394/targets/test
infra-commit: 8d288a842
job-version: v1.16.4-beta.0.1+d70a3ca08fe72a-dirty
repo: k8s.io/kubernetes
repo-commit: d70a3ca08fe72ad8dd0b2d72cf032474ab2ce2a9
repos: k8s.io/kubernetes: release-1.16, sigs.k8s.io/cloud-provider-azure: master

Test Failures


Kubernetes e2e suite [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance] 14m9s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\sContainer\sLifecycle\sHook\swhen\screate\sa\spod\swith\slifecycle\shook\sshould\sexecute\sprestop\shttp\shook\sproperly\s\[NodeConformance\]\s\[Conformance\]$'
test/e2e/framework/framework.go:152
Nov 15 21:23:20.347: Couldn't delete ns: "container-lifecycle-hook-8669": namespace container-lifecycle-hook-8669 was not deleted with limit: timed out waiting for the condition, pods remaining: 1 (&errors.errorString{s:"namespace container-lifecycle-hook-8669 was not deleted with limit: timed out waiting for the condition, pods remaining: 1"})
test/e2e/framework/framework.go:336
				
stdout/stderr: junit_30.xml
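The `--ginkgo.focus` patterns in these repro commands are just the full test name with regex metacharacters escaped, each space replaced by `\s`, and an end anchor appended. A minimal sketch of generating such a pattern from a test name (a shell/sed illustration, not part of the original report):

```shell
# Full test name as shown in the failure entry above
name='Kubernetes e2e suite [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]'
# Backslash-escape regex metacharacters, then turn spaces into \s
focus=$(printf '%s' "$name" | sed -e 's/[][\\.(){}*+?^$|:,=>-]/\\&/g' -e 's/ /\\s/g')
# Append the end anchor used by the repro commands
printf '%s$\n' "$focus"
```

This reproduces the focus string in the `go run hack/e2e.go` command above byte for byte.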


Kubernetes e2e suite [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] 11m2s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\sProbing\scontainer\swith\sreadiness\sprobe\sthat\sfails\sshould\snever\sbe\sready\sand\snever\srestart\s\[NodeConformance\]\s\[Conformance\]$'
test/e2e/framework/framework.go:152
Nov 15 21:18:56.809: Couldn't delete ns: "container-probe-4298": namespace container-probe-4298 was not deleted with limit: timed out waiting for the condition, pods remaining: 1 (&errors.errorString{s:"namespace container-probe-4298 was not deleted with limit: timed out waiting for the condition, pods remaining: 1"})
test/e2e/framework/framework.go:336
				
stdout/stderr: junit_15.xml
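Most of the failures in this run share one teardown symptom: the framework timed out deleting the test namespace with pods still present. A small sketch (an illustration, not from the original page) that extracts the namespace and remaining-pod count from such an error line, e.g. to tally stuck namespaces across entries:

```shell
# One of the teardown failure lines from this run, quoted verbatim
line='Couldn'\''t delete ns: "container-probe-4298": namespace container-probe-4298 was not deleted with limit: timed out waiting for the condition, pods remaining: 1'
# Pull out the namespace name and the remaining-pod count
ns=$(printf '%s' "$line" | sed -n 's/.*delete ns: "\([^"]*\)".*/\1/p')
pods=$(printf '%s' "$line" | sed -n 's/.*pods remaining: \([0-9]*\).*/\1/p')
printf '%s %s\n' "$ns" "$pods"   # container-probe-4298 1
```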


Kubernetes e2e suite [sig-apps] DisruptionController evictions: enough pods, replicaSet, percentage => should allow an eviction 13m47s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-apps\]\sDisruptionController\sevictions\:\senough\spods\,\sreplicaSet\,\spercentage\s\=\>\sshould\sallow\san\seviction$'
test/e2e/framework/framework.go:152
Nov 15 21:44:11.334: Couldn't delete ns: "disruption-9348": namespace disruption-9348 was not deleted with limit: timed out waiting for the condition, pods remaining: 5 (&errors.errorString{s:"namespace disruption-9348 was not deleted with limit: timed out waiting for the condition, pods remaining: 5"})
test/e2e/framework/framework.go:336
				
stdout/stderr: junit_15.xml


Kubernetes e2e suite [sig-apps] DisruptionController evictions: maxUnavailable allow single eviction, percentage => should allow an eviction 15m5s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-apps\]\sDisruptionController\sevictions\:\smaxUnavailable\sallow\ssingle\seviction\,\spercentage\s\=\>\sshould\sallow\san\seviction$'
test/e2e/framework/framework.go:152
Nov 15 21:48:08.944: Couldn't delete ns: "disruption-2717": namespace disruption-2717 was not deleted with limit: timed out waiting for the condition, pods remaining: 5 (&errors.errorString{s:"namespace disruption-2717 was not deleted with limit: timed out waiting for the condition, pods remaining: 5"})
test/e2e/framework/framework.go:336
				
stdout/stderr: junit_14.xml


Kubernetes e2e suite [sig-apps] DisruptionController should update PodDisruptionBudget status 11m15s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-apps\]\sDisruptionController\sshould\supdate\sPodDisruptionBudget\sstatus$'
test/e2e/framework/framework.go:152
Nov 15 21:22:36.982: Couldn't delete ns: "disruption-2472": namespace disruption-2472 was not deleted with limit: timed out waiting for the condition, pods remaining: 2 (&errors.errorString{s:"namespace disruption-2472 was not deleted with limit: timed out waiting for the condition, pods remaining: 2"})
test/e2e/framework/framework.go:336
				
stdout/stderr: junit_04.xml


Kubernetes e2e suite [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance] 24m16s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-apps\]\sStatefulSet\s\[k8s\.io\]\sBasic\sStatefulSet\sfunctionality\s\[StatefulSetBasic\]\sshould\sperform\scanary\supdates\sand\sphased\srolling\supdates\sof\stemplate\smodifications\s\[Conformance\]$'
test/e2e/framework/framework.go:698
Nov 15 22:11:03.235: Failed waiting for state update: timed out waiting for the condition
test/e2e/framework/statefulset/wait.go:129
				
stdout/stderr: junit_10.xml


Kubernetes e2e suite [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance] 26m16s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-apps\]\sStatefulSet\s\[k8s\.io\]\sBasic\sStatefulSet\sfunctionality\s\[StatefulSetBasic\]\sshould\sperform\scanary\supdates\sand\sphased\srolling\supdates\sof\stemplate\smodifications\s\[Conformance\]$'
test/e2e/framework/framework.go:698
Nov 15 21:49:36.887: Failed waiting for state update: timed out waiting for the condition
test/e2e/framework/statefulset/wait.go:129
				
stdout/stderr: junit_10.xml


Kubernetes e2e suite [sig-cli] Kubectl client Update Demo should do a rolling update of a replication controller [Conformance] 7m19s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-cli\]\sKubectl\sclient\sUpdate\sDemo\sshould\sdo\sa\srolling\supdate\sof\sa\sreplication\scontroller\s\s\[Conformance\]$'
test/e2e/framework/framework.go:698
Nov 15 21:33:10.693: Timed out after 300 seconds waiting for name=update-demo pods to reach valid state
test/e2e/framework/rc_util.go:260
				
stdout/stderr: junit_22.xml


Kubernetes e2e suite [sig-cli] Kubectl client Update Demo should do a rolling update of a replication controller [Conformance] 16m51s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-cli\]\sKubectl\sclient\sUpdate\sDemo\sshould\sdo\sa\srolling\supdate\sof\sa\sreplication\scontroller\s\s\[Conformance\]$'
test/e2e/framework/framework.go:152
Nov 15 21:52:16.645: Couldn't delete ns: "kubectl-6668": namespace kubectl-6668 was not deleted with limit: timed out waiting for the condition, pods remaining: 1 (&errors.errorString{s:"namespace kubectl-6668 was not deleted with limit: timed out waiting for the condition, pods remaining: 1"})
test/e2e/framework/framework.go:336
				
stdout/stderr: junit_22.xml


Kubernetes e2e suite [sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance] 14m4s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-cli\]\sKubectl\sclient\sUpdate\sDemo\sshould\sscale\sa\sreplication\scontroller\s\s\[Conformance\]$'
test/e2e/framework/framework.go:152
Nov 15 21:19:12.741: Couldn't delete ns: "kubectl-7900": namespace kubectl-7900 was not deleted with limit: timed out waiting for the condition, pods remaining: 1 (&errors.errorString{s:"namespace kubectl-7900 was not deleted with limit: timed out waiting for the condition, pods remaining: 1"})
test/e2e/framework/framework.go:336
				
stdout/stderr: junit_27.xml


Kubernetes e2e suite [sig-network] Services should have session affinity work for NodePort service 2m52s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-network\]\sServices\sshould\shave\ssession\saffinity\swork\sfor\sNodePort\sservice$'
test/e2e/network/service.go:1813
Nov 15 20:52:44.956: Connection to 10.248.0.4:30845 timed out or not enough responses.
test/e2e/framework/service/affinity_checker.go:55
				
stdout/stderr: junit_06.xml


Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] volume-expand should not allow expansion of pvcs without AllowVolumeExpansion property 5m56s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-storage\]\sCSI\sVolumes\s\[Driver\:\scsi\-hostpath\]\s\[Testpattern\:\sDynamic\sPV\s\(block\svolmode\)\]\svolume\-expand\sshould\snot\sallow\sexpansion\sof\spvcs\swithout\sAllowVolumeExpansion\sproperty$'
test/e2e/storage/testsuites/volume_expand.go:139
Nov 15 21:02:07.313: Unexpected error:
    <*errors.errorString | 0xc00072ee20>: {
        s: "PersistentVolumeClaims [csi-hostpathqjtxr] not all in phase Bound within 5m0s",
    }
    PersistentVolumeClaims [csi-hostpathqjtxr] not all in phase Bound within 5m0s
occurred
test/e2e/storage/testsuites/base.go:366
				
stdout/stderr: junit_12.xml


Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] volume-expand should not allow expansion of pvcs without AllowVolumeExpansion property 15m36s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-storage\]\sCSI\sVolumes\s\[Driver\:\scsi\-hostpath\]\s\[Testpattern\:\sDynamic\sPV\s\(block\svolmode\)\]\svolume\-expand\sshould\snot\sallow\sexpansion\sof\spvcs\swithout\sAllowVolumeExpansion\sproperty$'
test/e2e/framework/framework.go:152
Nov 15 21:18:35.601: Couldn't delete ns: "volume-expand-5296": namespace volume-expand-5296 was not deleted with limit: timed out waiting for the condition, namespace is empty but is not yet removed (&errors.errorString{s:"namespace volume-expand-5296 was not deleted with limit: timed out waiting for the condition, namespace is empty but is not yet removed"})
test/e2e/framework/framework.go:336
				
stdout/stderr: junit_12.xml


Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] volumes should store data 15m11s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-storage\]\sCSI\sVolumes\s\[Driver\:\scsi\-hostpath\]\s\[Testpattern\:\sDynamic\sPV\s\(block\svolmode\)\]\svolumes\sshould\sstore\sdata$'
test/e2e/storage/testsuites/volumes.go:146
Nov 15 21:13:43.926: Unexpected error:
    <*errors.errorString | 0xc00192e0c0>: {
        s: "PersistentVolumeClaims [csi-hostpathbt4q8] not all in phase Bound within 5m0s",
    }
    PersistentVolumeClaims [csi-hostpathbt4q8] not all in phase Bound within 5m0s
occurred
test/e2e/storage/testsuites/base.go:366
				
stdout/stderr: junit_01.xml


Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with pvc data source 16m49s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-storage\]\sCSI\sVolumes\s\[Driver\:\scsi\-hostpath\]\s\[Testpattern\:\sDynamic\sPV\s\(default\sfs\)\]\sprovisioning\sshould\sprovision\sstorage\swith\spvc\sdata\ssource$'
test/e2e/storage/testsuites/provisioning.go:207
Nov 15 21:10:17.475: Unexpected error:
    <*errors.errorString | 0xc0024db2e0>: {
        s: "Gave up after waiting 15m0s for pod \"pvc-datasource-writer-8zpp6\" to be \"success or failure\"",
    }
    Gave up after waiting 15m0s for pod "pvc-datasource-writer-8zpp6" to be "success or failure"
occurred
test/e2e/storage/testsuites/provisioning.go:537