Result: FAILURE
Tests: 64 failed / 611 succeeded
Started: 2019-11-16 04:21
Elapsed: 2h27m
Revision:
Builder: gke-prow-ssd-pool-1a225945-49gz
links: {u'resultstore': {u'url': u'https://source.cloud.google.com/results/invocations/a298d5af-0805-4b5c-b5f2-80355ed8d38e/targets/test'}}
pod: 72760897-0828-11ea-be88-5a2ed842773b
resultstore: https://source.cloud.google.com/results/invocations/a298d5af-0805-4b5c-b5f2-80355ed8d38e/targets/test
infra-commit: e6eb7488e
job-version: v1.16.4-beta.0.3+c0f31a4ef6304d-dirty
repo: k8s.io/kubernetes
repo-commit: c0f31a4ef6304d653f387455e7ed1723e7bb5385
repos: {u'k8s.io/kubernetes': u'release-1.16', u'sigs.k8s.io/cloud-provider-azure': u'master'}
revision: v1.16.4-beta.0.3+c0f31a4ef6304d-dirty
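
For quick triage, the "64 failed / 611 succeeded" summary above can be turned into a pass rate. A minimal sketch; the parser and its regex are assumptions modeled on the summary-line format shown on this page, not part of any Prow tooling:

```python
import re

def pass_rate(tests_line: str) -> float:
    """Parse an 'N failed / M succeeded' summary and return the pass rate in percent."""
    m = re.search(r"(\d+) failed / (\d+) succeeded", tests_line)
    if m is None:
        raise ValueError("unrecognized summary format: %r" % tests_line)
    failed, succeeded = int(m.group(1)), int(m.group(2))
    return 100.0 * succeeded / (failed + succeeded)

# The summary line from this run: 611 of 675 tests passed.
print(round(pass_rate("Tests 64 failed / 611 succeeded"), 1))  # 90.5
```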

Test Failures


Kubernetes e2e suite [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance] 1m17s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\sKubelet\swhen\sscheduling\sa\sbusybox\scommand\sthat\salways\sfails\sin\sa\spod\sshould\shave\san\sterminated\sreason\s\[NodeConformance\]\s\[Conformance\]$'
test/e2e/framework/framework.go:698
Nov 16 05:42:30.460: Timed out after 60.000s.
Expected
    <*errors.errorString | 0xc002aca9b0>: {
        s: "expected state to be terminated. Got pod status: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-11-16 05:41:30 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-11-16 05:41:30 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [bin-falsebd6c2ab4-5231-47ce-952b-2824bb4a8f4f]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-11-16 05:41:30 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [bin-falsebd6c2ab4-5231-47ce-952b-2824bb4a8f4f]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-11-16 05:41:30 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:10.248.0.5 PodIP: PodIPs:[] StartTime:2019-11-16 05:41:30 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:bin-falsebd6c2ab4-5231-47ce-952b-2824bb4a8f4f State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:docker.io/library/busybox:1.29 ImageID: ContainerID: Started:0xc00240225a}] QOSClass:BestEffort EphemeralContainerStatuses:[]}",
    }
to be nil
test/e2e/common/kubelet.go:123
Click to see stdout/stderr from junit_07.xml

Find status mentions in log files | View test history on testgrid
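
The repro command above escapes the full test name into a `--ginkgo.focus` regex. A minimal sketch of that escaping, inferred from the commands on this page (this helper is hypothetical, not the actual hack/e2e.go code): alphanumerics pass through, each space becomes `\s`, every other character gets a backslash, and `$` anchors the end:

```python
def ginkgo_focus(test_name: str) -> str:
    """Escape an e2e test name into a --ginkgo.focus regex.

    Hypothetical re-implementation of the escaping visible in the repro
    commands on this page; not the actual kubetest code.
    """
    out = []
    for ch in test_name:
        if ch.isalnum():
            out.append(ch)          # letters and digits pass through
        elif ch == " ":
            out.append(r"\s")       # spaces match any whitespace
        else:
            out.append("\\" + ch)   # escape regex metacharacters
    return "".join(out) + "$"       # anchor to the end of the name

name = ("Kubernetes e2e suite [k8s.io] Kubelet when scheduling a busybox "
        "command that always fails in a pod should have an terminated "
        "reason [NodeConformance] [Conformance]")
print(ginkgo_focus(name))
```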


Kubernetes e2e suite [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance] 1m33s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\sKubelet\swhen\sscheduling\sa\sbusybox\scommand\sthat\salways\sfails\sin\sa\spod\sshould\shave\san\sterminated\sreason\s\[NodeConformance\]\s\[Conformance\]$'
test/e2e/framework/framework.go:698
Nov 16 05:40:56.999: Timed out after 60.001s.
Expected
    <*errors.errorString | 0xc0022ec790>: {
        s: "expected state to be terminated. Got pod status: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-11-16 05:39:57 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-11-16 05:39:57 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [bin-false654b4492-55a0-4ee5-8b99-541f1d40e65c]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-11-16 05:39:57 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [bin-false654b4492-55a0-4ee5-8b99-541f1d40e65c]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-11-16 05:39:56 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:10.248.0.5 PodIP: PodIPs:[] StartTime:2019-11-16 05:39:57 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:bin-false654b4492-55a0-4ee5-8b99-541f1d40e65c State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:docker.io/library/busybox:1.29 ImageID: ContainerID: Started:0xc0007a8e9a}] QOSClass:BestEffort EphemeralContainerStatuses:[]}",
    }
to be nil
test/e2e/common/kubelet.go:123
Click to see stdout/stderr from junit_07.xml

Find status mentions in log files | View test history on testgrid


Kubernetes e2e suite [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] 19m48s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\sPods\sshould\sallow\sactiveDeadlineSeconds\sto\sbe\supdated\s\[NodeConformance\]\s\[Conformance\]$'
test/e2e/framework/framework.go:698
Nov 16 05:42:44.747: Unexpected error:
    <*errors.errorString | 0xc001bd6050>: {
        s: "Gave up after waiting 5m0s for pod \"pod-update-activedeadlineseconds-2880b5ee-44b2-46a9-8fa5-7e3c1a3afc15\" to be \"terminated due to deadline exceeded\"",
    }
    Gave up after waiting 5m0s for pod "pod-update-activedeadlineseconds-2880b5ee-44b2-46a9-8fa5-7e3c1a3afc15" to be "terminated due to deadline exceeded"
occurred
test/e2e/common/pods.go:441
Click to see stdout/stderr from junit_20.xml

Find pod-update-activedeadlineseconds-2880b5ee-44b2-46a9-8fa5-7e3c1a3afc15 mentions in log files | View test history on testgrid
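
Both "Timed out after 60.000s" and "Gave up after waiting 5m0s" in the failures above come from the test framework polling a condition until a deadline passes. A minimal sketch of that poll-until-timeout pattern; this helper is illustrative, not the framework's actual implementation:

```python
import time

def wait_for_condition(condition, timeout, interval=0.01):
    """Poll condition() until it returns True or timeout seconds elapse.

    Illustrative stand-in for the framework's wait loops behind the
    'Timed out after 60.000s' / 'Gave up after waiting 5m0s' messages.
    Returns True on success, False if the deadline passes first.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if condition():
            return True
        time.sleep(interval)
    return False

# A condition that becomes true after a few polls succeeds in time...
state = {"calls": 0}
def eventually_true():
    state["calls"] += 1
    return state["calls"] >= 3

print(wait_for_condition(eventually_true, timeout=1.0))   # True
# ...while one that never holds exhausts the deadline, as in these failures.
print(wait_for_condition(lambda: False, timeout=0.05))    # False
```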


Kubernetes e2e suite [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] 20m8s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\sPods\sshould\sallow\sactiveDeadlineSeconds\sto\sbe\supdated\s\[NodeConformance\]\s\[Conformance\]$'
test/e2e/framework/framework.go:698
Nov 16 06:02:34.844: Unexpected error:
    <*errors.errorString | 0xc000815ca0>: {
        s: "Gave up after waiting 5m0s for pod \"pod-update-activedeadlineseconds-65089b83-1b62-40a7-baea-8e823ccdcb14\" to be \"terminated due to deadline exceeded\"",
    }
    Gave up after waiting 5m0s for pod "pod-update-activedeadlineseconds-65089b83-1b62-40a7-baea-8e823ccdcb14" to be "terminated due to deadline exceeded"
occurred
test/e2e/common/pods.go:441
Click to see stdout/stderr from junit_20.xml

Find pod-update-activedeadlineseconds-65089b83-1b62-40a7-baea-8e823ccdcb14 mentions in log files | View test history on testgrid


Kubernetes e2e suite [k8s.io] Pods should support pod readiness gates [NodeFeature:PodReadinessGate] 6m18s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\sPods\sshould\ssupport\spod\sreadiness\sgates\s\[NodeFeature\:PodReadinessGate\]$'
test/e2e/common/pods.go:777
Nov 16 05:04:56.627: Unexpected error:
    <*errors.errorString | 0xc000198070>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred
test/e2e/common/pods.go:811
Click to see stdout/stderr from junit_21.xml

Filter through log files | View test history on testgrid


Kubernetes e2e suite [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] 11m2s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\sProbing\scontainer\swith\sreadiness\sprobe\sshould\snot\sbe\sready\sbefore\sinitial\sdelay\sand\snever\srestart\s\[NodeConformance\]\s\[Conformance\]$'
test/e2e/framework/framework.go:152
Nov 16 05:39:17.855: Couldn't delete ns: "container-probe-748": namespace container-probe-748 was not deleted with limit: timed out waiting for the condition, pods remaining: 1 (&errors.errorString{s:"namespace container-probe-748 was not deleted with limit: timed out waiting for the condition, pods remaining: 1"})
test/e2e/framework/framework.go:336
Click to see stdout/stderr from junit_19.xml

Filter through log files | View test history on testgrid


Kubernetes e2e suite [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] 14m33s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\sProbing\scontainer\swith\sreadiness\sprobe\sshould\snot\sbe\sready\sbefore\sinitial\sdelay\sand\snever\srestart\s\[NodeConformance\]\s\[Conformance\]$'
test/e2e/framework/framework.go:152
Nov 16 05:53:50.936: Couldn't delete ns: "container-probe-9392": namespace container-probe-9392 was not deleted with limit: timed out waiting for the condition, pods remaining: 1 (&errors.errorString{s:"namespace container-probe-9392 was not deleted with limit: timed out waiting for the condition, pods remaining: 1"})
test/e2e/framework/framework.go:336
Click to see stdout/stderr from junit_19.xml

Filter through log files | View test history on testgrid


Kubernetes e2e suite [k8s.io] [sig-node] Pods Extended [k8s.io] Delete Grace Period should be submitted and removed [Conformance] 2m49s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\s\[sig\-node\]\sPods\sExtended\s\[k8s\.io\]\sDelete\sGrace\sPeriod\sshould\sbe\ssubmitted\sand\sremoved\s\[Conformance\]$'
test/e2e/framework/framework.go:698
Nov 16 05:26:36.559: kubelet never observed the termination notice
Unexpected error:
    <*errors.errorString | 0xc0000d5090>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred
test/e2e/node/pods.go:163
Click to see stdout/stderr from junit_02.xml

Filter through log files | View test history on testgrid


Kubernetes e2e suite [sig-apps] DisruptionController should block an eviction until the PDB is updated to allow it 14m19s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-apps\]\sDisruptionController\sshould\sblock\san\seviction\suntil\sthe\sPDB\sis\supdated\sto\sallow\sit$'
test/e2e/framework/framework.go:152
Nov 16 06:06:06.541: Couldn't delete ns: "disruption-2624": namespace disruption-2624 was not deleted with limit: timed out waiting for the condition, pods remaining: 3 (&errors.errorString{s:"namespace disruption-2624 was not deleted with limit: timed out waiting for the condition, pods remaining: 3"})
test/e2e/framework/framework.go:336
Click to see stdout/stderr from junit_14.xml

Filter through log files | View test history on testgrid


Kubernetes e2e suite [sig-apps] DisruptionController should block an eviction until the PDB is updated to allow it 14m35s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-apps\]\sDisruptionController\sshould\sblock\san\seviction\suntil\sthe\sPDB\sis\supdated\sto\sallow\sit$'
test/e2e/framework/framework.go:152
Nov 16 05:51:46.867: Couldn't delete ns: "disruption-2131": namespace disruption-2131 was not deleted with limit: timed out waiting for the condition, pods remaining: 3 (&errors.errorString{s:"namespace disruption-2131 was not deleted with limit: timed out waiting for the condition, pods remaining: 3"})
test/e2e/framework/framework.go:336
Click to see stdout/stderr from junit_14.xml

Filter through log files | View test history on testgrid


Kubernetes e2e suite [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance] 14m15s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-apps\]\sReplicaSet\sshould\sadopt\smatching\spods\son\screation\sand\srelease\sno\slonger\smatching\spods\s\[Conformance\]$'
test/e2e/framework/framework.go:152
Nov 16 05:43:43.248: Couldn't delete ns: "replicaset-3203": namespace replicaset-3203 was not deleted with limit: timed out waiting for the condition, pods remaining: 1 (&errors.errorString{s:"namespace replicaset-3203 was not deleted with limit: timed out waiting for the condition, pods remaining: 1"})
test/e2e/framework/framework.go:336
Click to see stdout/stderr from junit_12.xml

Filter through log files | View test history on testgrid


Kubernetes e2e suite [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance] 14m23s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-apps\]\sReplicaSet\sshould\sadopt\smatching\spods\son\screation\sand\srelease\sno\slonger\smatching\spods\s\[Conformance\]$'
test/e2e/framework/framework.go:152
Nov 16 05:58:06.942: Couldn't delete ns: "replicaset-6111": namespace replicaset-6111 was not deleted with limit: timed out waiting for the condition, pods remaining: 1 (&errors.errorString{s:"namespace replicaset-6111 was not deleted with limit: timed out waiting for the condition, pods remaining: 1"})
test/e2e/framework/framework.go:336
Click to see stdout/stderr from junit_12.xml

Filter through log files | View test history on testgrid


Kubernetes e2e suite [sig-cli] Kubectl Port forwarding With a server listening on 0.0.0.0 that expects NO client request should support a client that connects, sends DATA, and disconnects 5m33s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-cli\]\sKubectl\sPort\sforwarding\sWith\sa\sserver\slistening\son\s0\.0\.0\.0\sthat\sexpects\sNO\sclient\srequest\sshould\ssupport\sa\sclient\sthat\sconnects\,\ssends\sDATA\,\sand\sdisconnects$'
test/e2e/kubectl/portforward.go:452
Nov 16 05:50:50.790: Pod did not start running: timed out waiting for the condition
test/e2e/kubectl/portforward.go:213
Click to see stdout/stderr from junit_15.xml

Filter through log files | View test history on testgrid


Kubernetes e2e suite [sig-cli] Kubectl Port forwarding With a server listening on 0.0.0.0 that expects NO client request should support a client that connects, sends DATA, and disconnects 15m55s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-cli\]\sKubectl\sPort\sforwarding\sWith\sa\sserver\slistening\son\s0\.0\.0\.0\sthat\sexpects\sNO\sclient\srequest\sshould\ssupport\sa\sclient\sthat\sconnects\,\ssends\sDATA\,\sand\sdisconnects$'
test/e2e/framework/framework.go:152
Nov 16 05:45:50.139: Couldn't delete ns: "port-forwarding-3039": namespace port-forwarding-3039 was not deleted with limit: timed out waiting for the condition, pods remaining: 1 (&errors.errorString{s:"namespace port-forwarding-3039 was not deleted with limit: timed out waiting for the condition, pods remaining: 1"})
test/e2e/framework/framework.go:336
Click to see stdout/stderr from junit_15.xml

Filter through log files | View test history on testgrid


Kubernetes e2e suite [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance] 11m29s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-cli\]\sKubectl\sclient\sKubectl\sexpose\sshould\screate\sservices\sfor\src\s\s\[Conformance\]$'
test/e2e/framework/framework.go:152
Nov 16 05:51:35.317: Couldn't delete ns: "kubectl-2157": namespace kubectl-2157 was not deleted with limit: timed out waiting for the condition, pods remaining: 1 (&errors.errorString{s:"namespace kubectl-2157 was not deleted with limit: timed out waiting for the condition, pods remaining: 1"})
test/e2e/framework/framework.go:336
Click to see stdout/stderr from junit_27.xml

Filter through log files | View test history on testgrid


Kubernetes e2e suite [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance] 14m31s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-cli\]\sKubectl\sclient\sKubectl\sexpose\sshould\screate\sservices\sfor\src\s\s\[Conformance\]$'
test/e2e/framework/framework.go:152
Nov 16 06:06:06.691: Couldn't delete ns: "kubectl-3417": namespace kubectl-3417 was not deleted with limit: timed out waiting for the condition, pods remaining: 1 (&errors.errorString{s:"namespace kubectl-3417 was not deleted with limit: timed out waiting for the condition, pods remaining: 1"})
test/e2e/framework/framework.go:336
Click to see stdout/stderr from junit_27.xml

Filter through log files | View test history on testgrid


Kubernetes e2e suite [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance] 11m9s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-cli\]\sKubectl\sclient\sUpdate\sDemo\sshould\screate\sand\sstop\sa\sreplication\scontroller\s\s\[Conformance\]$'
test/e2e/framework/framework.go:152
Nov 16 05:39:11.128: Couldn't delete ns: "kubectl-6619": namespace kubectl-6619 was not deleted with limit: timed out waiting for the condition, pods remaining: 1 (&errors.errorString{s:"namespace kubectl-6619 was not deleted with limit: timed out waiting for the condition, pods remaining: 1"})
test/e2e/framework/framework.go:336
Click to see stdout/stderr from junit_23.xml

Filter through log files | View test history on testgrid


Kubernetes e2e suite [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance] 14m39s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-cli\]\sKubectl\sclient\sUpdate\sDemo\sshould\screate\sand\sstop\sa\sreplication\scontroller\s\s\[Conformance\]$'
test/e2e/framework/framework.go:152
Nov 16 05:53:50.391: Couldn't delete ns: "kubectl-2712": namespace kubectl-2712 was not deleted with limit: timed out waiting for the condition, pods remaining: 2 (&errors.errorString{s:"namespace kubectl-2712 was not deleted with limit: timed out waiting for the condition, pods remaining: 2"})
test/e2e/framework/framework.go:336
Click to see stdout/stderr from junit_23.xml

Filter through log files | View test history on testgrid


Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand should resize volume when PVC is edited while pod is using it 5m58s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-storage\]\sCSI\sVolumes\s\[Driver\:\scsi\-hostpath\]\s\[Testpattern\:\sDynamic\sPV\s\(block\svolmode\)\(allowExpansion\)\]\svolume\-expand\sshould\sresize\svolume\swhen\sPVC\sis\sedited\swhile\spod\sis\susing\sit$'
test/e2e/storage/testsuites/volume_expand.go:218
Nov 16 05:03:07.390: Unexpected error:
    <*errors.errorString | 0xc0045fb120>: {
        s: "PersistentVolumeClaims [csi-hostpath77wkf] not all in phase Bound within 5m0s",
    }
    PersistentVolumeClaims [csi-hostpath77wkf] not all in phase Bound within 5m0s
occurred
test/e2e/storage/testsuites/base.go:366
Click to see stdout/stderr from junit_02.xml

Filter through log files | View test history on testgrid


Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] volumes should store data 25m40s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-storage\]\sCSI\sVolumes\s\[Driver\:\scsi\-hostpath\]\s\[Testpattern\:\sDynamic\sPV\s\(block\svolmode\)\]\svolumes\sshould\sstore\sdata$'
test/e2e/storage/testsuites/volumes.go:146
Nov 16 06:12:29.707: Failed to create injector pod: timed out waiting for the condition
test/e2e/framework/volume/fixtures.go:565