Result: FAILURE
Tests: 9 failed / 634 succeeded
Started: 2019-11-19 12:31
Elapsed: 1h20m
Revision: v1.16.4-beta.0.3+c0f31a4ef6304d-dirty
Builder: gke-prow-ssd-pool-1a225945-m4r6
pod: 6382a2b4-0ac8-11ea-be88-5a2ed842773b
resultstore: https://source.cloud.google.com/results/invocations/bfd370dd-f150-4d93-bda2-df7b38920150/targets/test
infra-commit: 00c5af8fa
job-version: v1.16.4-beta.0.3+c0f31a4ef6304d-dirty
repo: k8s.io/kubernetes
repo-commit: c0f31a4ef6304d653f387455e7ed1723e7bb5385
repos: k8s.io/kubernetes (release-1.16), sigs.k8s.io/cloud-provider-azure (master)

Test Failures


Kubernetes e2e suite [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance] 1m15s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\sKubelet\swhen\sscheduling\sa\sbusybox\scommand\sthat\salways\sfails\sin\sa\spod\sshould\shave\san\sterminated\sreason\s\[NodeConformance\]\s\[Conformance\]$'
test/e2e/framework/framework.go:698
Nov 19 13:12:04.161: Timed out after 60.000s.
Expected
    <*errors.errorString | 0xc0024f4250>: {
        s: "expected state to be terminated. Got pod status: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-11-19 13:11:04 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-11-19 13:11:04 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [bin-falsee24a809f-04fb-4ca1-856a-b19818a7e9e6]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-11-19 13:11:04 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [bin-falsee24a809f-04fb-4ca1-856a-b19818a7e9e6]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-11-19 13:11:04 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:10.248.0.5 PodIP: PodIPs:[] StartTime:2019-11-19 13:11:04 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:bin-falsee24a809f-04fb-4ca1-856a-b19818a7e9e6 State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:docker.io/library/busybox:1.29 ImageID: ContainerID: Started:0xc00291e14a}] QOSClass:BestEffort EphemeralContainerStatuses:[]}",
    }
to be nil
test/e2e/common/kubelet.go:123
				
stdout/stderr from junit_15.xml



Kubernetes e2e suite [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance] 1m33s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\sKubelet\swhen\sscheduling\sa\sbusybox\scommand\sthat\salways\sfails\sin\sa\spod\sshould\shave\san\sterminated\sreason\s\[NodeConformance\]\s\[Conformance\]$'
test/e2e/framework/framework.go:698
Nov 19 13:13:19.198: Timed out after 60.000s.
Expected
    <*errors.errorString | 0xc0020d6b60>: {
        s: "expected state to be terminated. Got pod status: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-11-19 13:12:19 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-11-19 13:12:19 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [bin-false86b71a17-de72-400c-80ca-001a0b5e0ff3]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-11-19 13:12:19 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [bin-false86b71a17-de72-400c-80ca-001a0b5e0ff3]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-11-19 13:12:19 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:10.248.0.4 PodIP: PodIPs:[] StartTime:2019-11-19 13:12:19 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:bin-false86b71a17-de72-400c-80ca-001a0b5e0ff3 State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:docker.io/library/busybox:1.29 ImageID: ContainerID: Started:0xc0023d255a}] QOSClass:BestEffort EphemeralContainerStatuses:[]}",
    }
to be nil
test/e2e/common/kubelet.go:123
				
stdout/stderr from junit_15.xml



Kubernetes e2e suite [k8s.io] Pods should support pod readiness gates [NodeFeature:PodReadinessGate] 2m35s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\sPods\sshould\ssupport\spod\sreadiness\sgates\s\[NodeFeature\:PodReadinessGate\]$'
test/e2e/common/pods.go:777
Nov 19 13:26:18.399: Unexpected error:
    <*errors.errorString | 0xc0000a10a0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred
test/e2e/common/pods.go:811
				
stdout/stderr from junit_19.xml



Kubernetes e2e suite [k8s.io] Pods should support pod readiness gates [NodeFeature:PodReadinessGate] 3m23s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\sPods\sshould\ssupport\spod\sreadiness\sgates\s\[NodeFeature\:PodReadinessGate\]$'
test/e2e/common/pods.go:777
Nov 19 13:29:01.399: Unexpected error:
    <*errors.errorString | 0xc0000a10a0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred
test/e2e/common/pods.go:811
				
stdout/stderr from junit_19.xml



Kubernetes e2e suite [sig-scheduling] PreemptionExecutionPath runs ReplicaSets to verify preemption running path 2m16s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-scheduling\]\sPreemptionExecutionPath\sruns\sReplicaSets\sto\sverify\spreemption\srunning\spath$'
test/e2e/scheduling/preemption.go:345
Nov 19 13:10:21.166: Unexpected error:
    <*errors.errorString | 0xc0024860c0>: {
        s: "replicaset \"rs-pod1\" never had desired number of .status.availableReplicas",
    }
    replicaset "rs-pod1" never had desired number of .status.availableReplicas
occurred
test/e2e/scheduling/preemption.go:510
				
stdout/stderr from junit_17.xml



Kubernetes e2e suite [sig-scheduling] PreemptionExecutionPath runs ReplicaSets to verify preemption running path 4m38s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-scheduling\]\sPreemptionExecutionPath\sruns\sReplicaSets\sto\sverify\spreemption\srunning\spath$'
test/e2e/scheduling/preemption.go:345
Nov 19 13:15:43.809: Unexpected error:
    <*errors.errorString | 0xc002b452d0>: {
        s: "replicaset \"rs-pod4\" never had desired number of .status.availableReplicas",
    }
    replicaset "rs-pod4" never had desired number of .status.availableReplicas
occurred
test/e2e/scheduling/preemption.go:510
				
stdout/stderr from junit_17.xml



Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should support readOnly directory specified in the volumeMount 5m49s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-storage\]\sCSI\sVolumes\s\[Driver\:\scsi\-hostpath\]\s\[Testpattern\:\sDynamic\sPV\s\(default\sfs\)\]\ssubPath\sshould\ssupport\sreadOnly\sdirectory\sspecified\sin\sthe\svolumeMount$'
test/e2e/storage/testsuites/subpath.go:347
Nov 19 13:05:05.273: Unexpected error:
    <*errors.errorString | 0xc002094570>: {
        s: "PersistentVolumeClaims [csi-hostpathchm74] not all in phase Bound within 5m0s",
    }
    PersistentVolumeClaims [csi-hostpathchm74] not all in phase Bound within 5m0s
occurred
test/e2e/storage/testsuites/base.go:366
				
stdout/stderr from junit_17.xml



Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: inline ephemeral CSI volume] ephemeral should create read-only inline ephemeral volume 16m5s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-storage\]\sCSI\sVolumes\s\[Driver\:\scsi\-hostpath\]\s\[Testpattern\:\sinline\sephemeral\sCSI\svolume\]\sephemeral\sshould\screate\sread\-only\sinline\sephemeral\svolume$'
test/e2e/storage/testsuites/ephemeral.go:116
Nov 19 13:26:38.518: waiting for pod with inline volume
Unexpected error:
    <*errors.errorString | 0xc0000d5090>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred
test/e2e/storage/testsuites/ephemeral.go:259
				
stdout/stderr from junit_21.xml



Test 38m36s

error during ./hack/ginkgo-e2e.sh --ginkgo.flakeAttempts=2 --num-nodes=2 --ginkgo.skip=\[Slow\]|\[Serial\]|\[Disruptive\]|\[Flaky\]|\[Feature:.+\]|Network\sshould\sset\sTCP\sCLOSE_WAIT\stimeout|Mount\spropagation\sshould\spropagate\smounts\sto\sthe\shost|PodSecurityPolicy|PVC\sProtection\sVerify|should\sprovide\sbasic\sidentity|should\sadopt\smatching\sorphans\sand\srelease|should\snot\sdeadlock\swhen\sa\spod's\spredecessor\sfails|should\sperform\srolling\supdates\sand\sroll\sbacks\sof\stemplate\smodifications\swith\sPVCs|should\sperform\srolling\supdates\sand\sroll\sbacks\sof\stemplate\smodifications|Services\sshould\sbe\sable\sto\screate\sa\sfunctioning\sNodePort\sservice$|volumeMode\sshould\snot\smount\s/\smap\sunused\svolumes\sin\sa\spod --report-dir=/workspace/_artifacts --disable-log-dump=true: exit status 1
from junit_runner.xml



634 Passed Tests

4107 Skipped Tests