Result: FAILURE
Tests: 14 failed / 196 succeeded
Started: 2020-04-01 08:08
Elapsed: 1h29m
Revision: master
resultstore: https://source.cloud.google.com/results/invocations/687827f9-a5c5-49af-8837-48f164fe90d3/targets/test
job-version: v1.15.12-beta.0.9+8de4013f5815f7
master_os_image: cos-73-11647-163-0
node_os_image: cos-73-11647-163-0
revision: v1.15.12-beta.0.9+8de4013f5815f7

Test Failures


1h6m

error during /home/prow/go/src/k8s.io/windows-testing/gce/run-e2e.sh --ginkgo.focus=\[Conformance\]|\[NodeConformance\]|\[sig-windows\] --ginkgo.skip=\[LinuxOnly\]|\[Serial\]|\[Feature:.+\] --minStartupPods=8 --node-os-distro=windows: exit status 1
				from junit_runner.xml

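The runner's `--ginkgo.focus` and `--ginkgo.skip` flags above are regular expressions matched against full test names: a test runs if it matches the focus pattern and does not match the skip pattern. A rough grep equivalent of this job's selection (the sample test names are made up):

```shell
# focus keeps Conformance/NodeConformance/sig-windows tests;
# skip then drops LinuxOnly, Serial, and Feature-gated ones.
kept=$(printf '%s\n' \
  '[sig-windows] Foo [Conformance]' \
  '[k8s.io] Bar [LinuxOnly] [Conformance]' \
  '[sig-apps] Baz [Serial]' |
  grep -E '\[Conformance\]|\[NodeConformance\]|\[sig-windows\]' |
  grep -Ev '\[LinuxOnly\]|\[Serial\]|\[Feature:.+\]')
echo "$kept"
```

Only the first sample survives both filters; the second matches focus but is removed by skip.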


Kubernetes e2e suite [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance] 13m36s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\sContainer\sLifecycle\sHook\swhen\screate\sa\spod\swith\slifecycle\shook\sshould\sexecute\sprestop\sexec\shook\sproperly\s\[NodeConformance\]\s\[Conformance\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr  1 09:20:52.429: Couldn't delete ns: "container-lifecycle-hook-8640": namespace container-lifecycle-hook-8640 was not deleted with limit: timed out waiting for the condition, pods remaining: 1 (&errors.errorString{s:"namespace container-lifecycle-hook-8640 was not deleted with limit: timed out waiting for the condition, pods remaining: 1"})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:335
				
				from junit_07.xml

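This failure (and the matching http-hook one below) has the shape of a poll loop that gives up with "timed out waiting for the condition": the framework repeatedly checks whether the namespace is gone and errors out when the deadline passes with a pod still remaining. A self-contained sketch of that pattern, with a file standing in for the namespace-existence check (all names and timings are illustrative):

```shell
# Stand-in for the namespace check: a file that a background job deletes
# after 1s; the loop gives up after 10 one-second polls.
tmp=$(mktemp -d)
touch "$tmp/ns"
( sleep 1; rm -f "$tmp/ns" ) &
tries=0
while [ -e "$tmp/ns" ]; do
  tries=$((tries + 1))
  if [ "$tries" -gt 10 ]; then
    echo "timed out waiting for the condition"
    exit 1
  fi
  sleep 1
done
echo "namespace deleted after $tries poll(s)"
```

In the real failure the "file" never disappears because one pod in the namespace refuses to terminate, so the loop hits its deadline.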


Kubernetes e2e suite [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance] 12m26s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\sContainer\sLifecycle\sHook\swhen\screate\sa\spod\swith\slifecycle\shook\sshould\sexecute\sprestop\shttp\shook\sproperly\s\[NodeConformance\]\s\[Conformance\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr  1 09:22:59.325: Couldn't delete ns: "container-lifecycle-hook-7232": namespace container-lifecycle-hook-7232 was not deleted with limit: timed out waiting for the condition, pods remaining: 1 (&errors.errorString{s:"namespace container-lifecycle-hook-7232 was not deleted with limit: timed out waiting for the condition, pods remaining: 1"})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:335
				
				from junit_06.xml



Kubernetes e2e suite [k8s.io] Pods should cap back-off at MaxContainerBackOff [Slow][NodeConformance] 28m12s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\sPods\sshould\scap\sback\-off\sat\sMaxContainerBackOff\s\[Slow\]\[NodeConformance\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:720
Apr  1 09:18:48.891: timed out waiting for container restart in pod=back-off-cap/back-off-cap
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:751
				
				from junit_03.xml

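MaxContainerBackOff is the cap on the kubelet's crash-loop restart delay; the test waits long enough to observe restarts at that cap, and timed out here. A toy rendering of the commonly described schedule (a 10s initial delay doubling to a 300s cap; these constants are assumptions, not values read from this job):

```shell
# Assumed schedule: delay starts at 10s, doubles per crash, and is clamped
# at the 300s MaxContainerBackOff cap.
backoff=10
cap=300
seq=""
for crash in 1 2 3 4 5 6 7 8; do
  seq="$seq $backoff"
  backoff=$((backoff * 2))
  if [ "$backoff" -gt "$cap" ]; then backoff=$cap; fi
done
echo "restart delays (s):$seq"
```

Reaching the cap therefore takes several minutes of crashes by design, which is why the test carries the [Slow] tag and a 28m runtime.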


Kubernetes e2e suite [sig-apps] Deployment deployment should support proportional scaling [Conformance] 5m38s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-apps\]\sDeployment\sdeployment\sshould\ssupport\sproportional\sscaling\s\[Conformance\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Unexpected error:
    <*errors.errorString | 0xc0019ae770>: {
        s: "error waiting for deployment \"nginx-deployment\" status to match expectation: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:10, UpdatedReplicas:10, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:10, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:\"Available\", Status:\"False\", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721327712, loc:(*time.Location)(0x7eb3a20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721327712, loc:(*time.Location)(0x7eb3a20)}}, Reason:\"MinimumReplicasUnavailable\", Message:\"Deployment does not have minimum availability.\"}, v1.DeploymentCondition{Type:\"Progressing\", Status:\"True\", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721327925, loc:(*time.Location)(0x7eb3a20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721327712, loc:(*time.Location)(0x7eb3a20)}}, Reason:\"ReplicaSetUpdated\", Message:\"ReplicaSet \\\"nginx-deployment-68b476495c\\\" is progressing.\"}}, CollisionCount:(*int32)(nil)}",
    }
    error waiting for deployment "nginx-deployment" status to match expectation: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:10, UpdatedReplicas:10, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:10, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721327712, loc:(*time.Location)(0x7eb3a20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721327712, loc:(*time.Location)(0x7eb3a20)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721327925, loc:(*time.Location)(0x7eb3a20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721327712, loc:(*time.Location)(0x7eb3a20)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"nginx-deployment-68b476495c\" is progressing."}}, CollisionCount:(*int32)(nil)}
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:874
				
				from junit_08.xml

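Proportional scaling means that when a Deployment with multiple active ReplicaSets is resized, the extra replicas are split across those ReplicaSets in proportion to their current sizes. A simplified sketch of the arithmetic (integer math only; the real controller also distributes rounding leftovers and honors maxSurge, and the replica counts here are invented):

```shell
# Scale 10 -> 30 replicas across ReplicaSets of size 8 and 2: each receives
# extra replicas in proportion to its current share of the total.
old_total=10
new_total=30
delta=$((new_total - old_total))
add8=$((8 * delta / old_total))
add2=$((2 * delta / old_total))
echo "+$add8 replicas to the 8-replica ReplicaSet, +$add2 to the 2-replica one"
```

The test never got that far here: with UnavailableReplicas:10 and Reason:MinimumReplicasUnavailable, the initial rollout never produced a ready pod to scale from.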


Kubernetes e2e suite [sig-apps] Deployment deployment should support rollover [Conformance] 1m17s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-apps\]\sDeployment\sdeployment\sshould\ssupport\srollover\s\[Conformance\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Unexpected error:
    <*errors.errorString | 0xc0017a4c20>: {
        s: "replicaset \"test-rollover-controller\" never became ready",
    }
    replicaset "test-rollover-controller" never became ready
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:435
				
				from junit_01.xml



Kubernetes e2e suite [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Conformance] 14m23s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-apps\]\sStatefulSet\s\[k8s\.io\]\sBasic\sStatefulSet\sfunctionality\s\[StatefulSetBasic\]\sBurst\sscaling\sshould\srun\sto\scompletion\seven\swith\sunhealthy\spods\s\[Conformance\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Apr  1 09:08:33.312: Failed waiting for pods to enter running: timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/statefulset_utils.go:322
				
				from junit_08.xml



Kubernetes e2e suite [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance] 11m7s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-apps\]\sStatefulSet\s\[k8s\.io\]\sBasic\sStatefulSet\sfunctionality\s\[StatefulSetBasic\]\sScaling\sshould\shappen\sin\spredictable\sorder\sand\shalt\sif\sany\sstateful\spod\sis\sunhealthy\s\[Conformance\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Apr  1 08:48:56.690: Failed waiting for pods to enter running: timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/statefulset_utils.go:322
				
				from junit_04.xml

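"Predictable order" refers to the default OrderedReady pod management policy: a StatefulSet creates pods one ordinal at a time and only proceeds once the previous pod is Ready, so a single stuck pod halts the whole scale-up. A toy rendering of that gating (the pod name and the stuck ordinal are invented):

```shell
# Stub readiness check: ordinal 1 never becomes Ready, so the scale-up
# halts there and later ordinals are never created.
ready() { [ "$1" -ne 1 ]; }
halted=""
for i in 0 1 2; do
  echo "creating web-$i"
  if ! ready "$i"; then
    halted="web-$i"
    echo "halting: $halted is not Ready"
    break
  fi
done
```

That gating is why all four StatefulSet failures in this run surface as the same "Failed waiting for pods to enter running" timeout.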


Kubernetes e2e suite [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance] 11m16s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-apps\]\sStatefulSet\s\[k8s\.io\]\sBasic\sStatefulSet\sfunctionality\s\[StatefulSetBasic\]\sshould\sperform\scanary\supdates\sand\sphased\srolling\supdates\sof\stemplate\smodifications\s\[Conformance\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Apr  1 09:09:10.641: Failed waiting for pods to enter running: timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/statefulset_utils.go:322
				
				from junit_02.xml



Kubernetes e2e suite [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance] 11m31s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-apps\]\sStatefulSet\s\[k8s\.io\]\sBasic\sStatefulSet\sfunctionality\s\[StatefulSetBasic\]\sshould\sperform\srolling\supdates\sand\sroll\sbacks\sof\stemplate\smodifications\s\[Conformance\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Apr  1 08:52:40.460: Failed waiting for pods to enter running: timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/statefulset_utils.go:322
				
				from junit_03.xml



Kubernetes e2e suite [sig-cli] Kubectl client [k8s.io] Kubectl rolling-update should support rolling-update to same image [Conformance] 6m13s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-cli\]\sKubectl\sclient\s\[k8s\.io\]\sKubectl\srolling\-update\sshould\ssupport\srolling\-update\sto\ssame\simage\s\s\[Conformance\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Unexpected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/home/prow/go/src/k8s.io/windows-testing/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://34.82.60.254 --kubeconfig=/workspace/.kube/config rolling-update e2e-test-nginx-rc --update-period=1s --image=e2eteam/nginx:1.14-alpine --image-pull-policy=IfNotPresent --namespace=kubectl-5826] []  <nil> Created e2e-test-nginx-rc-b129c628c5190b84bcfbd823fef99f9d\nScaling up e2e-test-nginx-rc-b129c628c5190b84bcfbd823fef99f9d from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-b129c628c5190b84bcfbd823fef99f9d up to 1\n Command \"rolling-update\" is deprecated, use \"rollout\" instead\nerror: timed out waiting for any update progress to be made\n [] <nil> 0xc002634000 exit status 1 <nil> <nil> true [0xc00115cc60 0xc00115cc78 0xc00115cc90] [0xc00115cc60 0xc00115cc78 0xc00115cc90] [0xc00115cc70 0xc00115cc88] [0xba7080 0xba7080] 0xc002628c60 <nil>}:\nCommand stdout:\nCreated e2e-test-nginx-rc-b129c628c5190b84bcfbd823fef99f9d\nScaling up e2e-test-nginx-rc-b129c628c5190b84bcfbd823fef99f9d from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-b129c628c5190b84bcfbd823fef99f9d up to 1\n\nstderr:\nCommand \"rolling-update\" is deprecated, use \"rollout\" instead\nerror: timed out waiting for any update progress to be made\n\nerror:\nexit status 1",
        },
        Code: 1,
    }
    error running &{/home/prow/go/src/k8s.io/windows-testing/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://34.82.60.254 --kubeconfig=/workspace/.kube/config rolling-update e2e-test-nginx-rc --update-period=1s --image=e2eteam/nginx:1.14-alpine --image-pull-policy=IfNotPresent --namespace=kubectl-5826] []  <nil> Created e2e-test-nginx-rc-b129c628c5190b84bcfbd823fef99f9d
    Scaling up e2e-test-nginx-rc-b129c628c5190b84bcfbd823fef99f9d from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)
    Scaling e2e-test-nginx-rc-b129c628c5190b84bcfbd823fef99f9d up to 1
     Command "rolling-update" is deprecated, use "rollout" instead
    error: timed out waiting for any update progress to be made
     [] <nil> 0xc002634000 exit status 1 <nil> <nil> true [0xc00115cc60 0xc00115cc78 0xc00115cc90] [0xc00115cc60 0xc00115cc78 0xc00115cc90] [0xc00115cc70 0xc00115cc88] [0xba7080 0xba7080] 0xc002628c60 <nil>}:
    Command stdout:
    Created e2e-test-nginx-rc-b129c628c5190b84bcfbd823fef99f9d
    Scaling up e2e-test-nginx-rc-b129c628c5190b84bcfbd823fef99f9d from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)
    Scaling e2e-test-nginx-rc-b129c628c5190b84bcfbd823fef99f9d up to 1
    
    stderr:
    Command "rolling-update" is deprecated, use "rollout" instead
    error: timed out waiting for any update progress to be made
    
    error:
    exit status 1
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:149
				
				from junit_05.xml

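The stderr already points at the replacement for the deprecated command: `rolling-update` on ReplicationControllers was superseded by Deployments plus `kubectl rollout`. A hedged modern equivalent of the failing invocation, assuming the RC were migrated to a Deployment (the Deployment and container names are hypothetical; the image and namespace come from the log):

```shell
# Print the rollout-based equivalent of the deprecated rolling-update call.
cmds=$(cat <<'EOF'
kubectl set image deployment/e2e-test-nginx nginx=e2eteam/nginx:1.14-alpine \
  --namespace=kubectl-5826
kubectl rollout status deployment/e2e-test-nginx --namespace=kubectl-5826
EOF
)
echo "$cmds"
```

`kubectl rollout status` blocks until the rollout completes or its deadline expires, which corresponds to the "timed out waiting for any update progress" failure seen here.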


Kubernetes e2e suite [sig-network] [sig-windows] Networking Granular Checks: Pods should function for node-pod communication: udp 3m2s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-network\]\s\[sig\-windows\]\sNetworking\sGranular\sChecks\:\sPods\sshould\sfunction\sfor\snode\-pod\scommunication\:\sudp$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/windows/networking.go:87
Apr  1 08:49:36.154: Failed to find expected endpoints:
Tries 39
Command echo hostName | nc -w 1 -u 10.64.1.28 8081
retrieved map[]
expected map[netserver-0:{}]

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:348
				
				from junit_06.xml

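The "Tries 39" line reflects the harness's probe loop: it re-runs the `nc` command up to a fixed number of attempts, collecting the set of endpoint hostnames seen, and fails if the expected set never appears (here it retrieved an empty map every time). A stub sketch of that retry shape (the `probe` function stands in for the real `nc` call, and the counts are illustrative):

```shell
# Retry until the probe returns an endpoint name or attempts run out;
# this stub "succeeds" on the fourth try.
tries=0
max=5
found=""
probe() { [ "$tries" -ge 3 ] && echo "netserver-0"; }
while [ "$tries" -lt "$max" ]; do
  out=$(probe)
  tries=$((tries + 1))
  if [ -n "$out" ]; then found=$out; break; fi
done
echo "tries=$tries retrieved=${found:-map[]}"
```

In the actual failure every attempt returned nothing, which on Windows nodes typically points at pod-to-node UDP connectivity rather than a flaky single probe.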


Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support restarting containers using directory as subpath [Slow] 3m59s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-storage\]\sIn\-tree\sVolumes\s\[Driver\:\sgcepd\]\s\[Testpattern\:\sDynamic\sPV\s\(ntfs\)\]\[sig\-windows\]\ssubPath\sshould\ssupport\srestarting\scontainers\susing\sdirectory\sas\ssubpath\s\[Slow\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:296
while waiting for container to stabilize
Unexpected error:
    <*errors.errorString | 0xc0002b18b0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:867
				
				from junit_06.xml



Kubernetes e2e suite [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance] 26s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-storage\]\sProjected\sdownwardAPI\sshould\sprovide\scontainer\'s\smemory\slimit\s\[NodeConformance\]\s\[Conformance\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Unexpected error:
    <*errors.errorString | 0xc001c28af0>: {
        s: "expected pod \"downwardapi-volume-7e091094-e52b-4072-a798-c64e32977e92\" success: pod \"downwardapi-volume-7e091094-e52b-4072-a798-c64e32977e92\" failed with status: {Phase:Failed Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-04-01 08:43:52 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-04-01 08:44:03 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [client-container]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-04-01 08:44:03 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [client-container]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-04-01 08:43:52 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:10.40.0.5 PodIP:10.64.3.16 StartTime:2020-04-01 08:43:52 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:client-container State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:2,Signal:0,Reason:Error,Message:,StartedAt:2020-04-01 08:43:56 +0000 UTC,FinishedAt:2020-04-01 08:43:58 +0000 UTC,ContainerID:docker://5d60e861a720c7d7e296ef886f65db14495f02cf85e0192bdc470b5b47c4af5d,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:e2eteam/mounttest:1.0 ImageID:docker-pullable://e2eteam/mounttest@sha256:1d6eaf26a98b5324496fe5a43116742417c46d8bf30100e214f9ff27e56460b2 ContainerID:docker://5d60e861a720c7d7e296ef886f65db14495f02cf85e0192bdc470b5b47c4af5d}] QOSClass:Burstable}",
    }
    expected pod "downwardapi-volume-7e091094-e52b-4072-a798-c64e32977e92" success: pod "downwardapi-volume-7e091094-e52b-4072-a798-c64e32977e92" failed with status: {Phase:Failed Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-04-01 08:43:52 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-04-01 08:44:03 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [client-container]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-04-01 08:44:03 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [client-container]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-04-01 08:43:52 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:10.40.0.5 PodIP:10.64.3.16 StartTime:2020-04-01 08:43:52 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:client-container State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:2,Signal:0,Reason:Error,Message:,StartedAt:2020-04-01 08:43:56 +0000 UTC,FinishedAt:2020-04-01 08:43:58 +0000 UTC,ContainerID:docker://5d60e861a720c7d7e296ef886f65db14495f02cf85e0192bdc470b5b47c4af5d,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:e2eteam/mounttest:1.0 ImageID:docker-pullable://e2eteam/mounttest@sha256:1d6eaf26a98b5324496fe5a43116742417c46d8bf30100e214f9ff27e56460b2 ContainerID:docker://5d60e861a720c7d7e296ef886f65db14495f02cf85e0192bdc470b5b47c4af5d}] QOSClass:Burstable}
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2342
				
				from junit_02.xml

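This test exposes the container's memory limit to the container itself through a projected downwardAPI volume and expects the pod to read it back successfully; here the client container instead exited with code 2. A minimal manifest of the kind of pod the test creates (the pod name, mount path, and memory size are guesses, not the test's actual spec; the image is the one from the log):

```shell
# Print a minimal projected-downwardAPI pod manifest (illustrative values).
manifest=$(cat <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-demo   # hypothetical name
spec:
  containers:
  - name: client-container
    image: e2eteam/mounttest:1.0
    resources:
      limits:
        memory: "64Mi"
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: mem_limit
            resourceFieldRef:
              containerName: client-container
              resource: limits.memory
EOF
)
echo "$manifest"
```

The `resourceFieldRef` is what surfaces `limits.memory` as a file the container can read from the mounted volume.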

