Result: FAILURE
Tests: 32 failed / 64 succeeded
Started: 2020-02-13 01:49
Elapsed: 10h30m
Builder: gke-prow-default-pool-cf4891d4-nq59
Pod: e9081d82-4e02-11ea-8bf0-4660bf95a9d5
Resultstore: https://source.cloud.google.com/results/invocations/a972d0d5-6d42-40f0-9608-90cee574bdb6/targets/test
Infra-commit: ce5ebb76e
Job-version: v1.15.11-beta.0.1+3b43c8064a328d-dirty
Repo: k8s.io/kubernetes (branch release-1.15)
Repo-commit: 3b43c8064a328d5834d0e8e89cb78fed3febc5e5
Revision: v1.15.11-beta.0.1+3b43c8064a328d-dirty

Test Failures


Kubernetes e2e suite [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance] 18m53s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\sContainer\sLifecycle\sHook\swhen\screate\sa\spod\swith\slifecycle\shook\sshould\sexecute\sprestop\shttp\shook\sproperly\s\[NodeConformance\]\s\[Conformance\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
wait for pod "pod-with-prestop-http-hook" to disappear
Expected success, but got an error:
    <*errors.errorString | 0xc00027b8c0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:178

Find pod-with-prestop-http-hook mentions in log files | View test history on testgrid
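Most of the failures below bottom out in the same error string, "timed out waiting for the condition". A minimal sketch of where that message typically originates, assuming the apimachinery wait package that the e2e framework's pod-wait helpers build on (an illustration, not the framework's actual code in pods.go):

package main

import (
	"fmt"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

func main() {
	// Poll every 2s for up to 10s. The condition never becomes true, so
	// PollImmediate gives up and returns wait.ErrWaitTimeout, whose text is
	// exactly "timed out waiting for the condition".
	err := wait.PollImmediate(2*time.Second, 10*time.Second, func() (bool, error) {
		podGone := false // a real test would query the API server here
		return podGone, nil
	})
	if err != nil {
		fmt.Println(err) // prints: timed out waiting for the condition
	}
}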


Kubernetes e2e suite [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance] 6m6s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\sKubelet\swhen\sscheduling\sa\sbusybox\scommand\sthat\salways\sfails\sin\sa\spod\sshould\shave\san\sterminated\sreason\s\[NodeConformance\]\s\[Conformance\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Timed out after 60.000s.
Expected
    <*errors.errorString | 0xc0027a3e40>: {
        s: "expected state to be terminated. Got pod status: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-13 04:24:19 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-13 04:24:19 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [bin-falseef3619ec-bc1a-407f-9d56-ea74254f0d2c]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-13 04:24:19 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [bin-falseef3619ec-bc1a-407f-9d56-ea74254f0d2c]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-13 04:24:19 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:10.240.0.4 PodIP: StartTime:2020-02-13 04:24:19 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:bin-falseef3619ec-bc1a-407f-9d56-ea74254f0d2c State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:e2eteam/busybox:1.29 ImageID: ContainerID:}] QOSClass:BestEffort}",
    }
to be nil
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:123

Find status mentions in log files | View test history on testgrid
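The status dump above shows the container still Waiting with reason ContainerCreating, so it never reached a Terminated state. A rough sketch of the property the test asserts, written against the core/v1 types; the helper name and the trimmed pod literal are illustrative, not the test's actual code:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// allContainersTerminated returns the name of the first container that has
// not terminated, or ok=true when every container status carries a
// Terminated state.
func allContainersTerminated(pod *corev1.Pod) (name string, ok bool) {
	for _, cs := range pod.Status.ContainerStatuses {
		if cs.State.Terminated == nil {
			return cs.Name, false
		}
	}
	return "", true
}

func main() {
	// Mirrors the failing run: the container never left ContainerCreating,
	// so the check reports it as not terminated.
	pod := &corev1.Pod{
		Status: corev1.PodStatus{
			Phase: corev1.PodPending,
			ContainerStatuses: []corev1.ContainerStatus{{
				Name: "bin-false",
				State: corev1.ContainerState{
					Waiting: &corev1.ContainerStateWaiting{Reason: "ContainerCreating"},
				},
			}},
		},
	}
	if name, ok := allContainersTerminated(pod); !ok {
		fmt.Printf("container %q has not terminated\n", name)
	}
}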


Kubernetes e2e suite [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance] 17m37s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\sPods\sshould\scontain\senvironment\svariables\sfor\sservices\s\[NodeConformance\]\s\[Conformance\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 13 12:08:52.968: Couldn't delete ns: "pods-7624": namespace pods-7624 was not deleted with limit: timed out waiting for the condition, namespace is empty but is not yet removed (&errors.errorString{s:"namespace pods-7624 was not deleted with limit: timed out waiting for the condition, namespace is empty but is not yet removed"})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:335

Filter through log files | View test history on testgrid
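This failure (like the SchedulerPreemption one further down) happens in namespace cleanup rather than in the test body: the namespace is empty but never finishes deleting within the limit. A hedged sketch of that kind of wait using client-go; the helper name, poll interval, and kubeconfig loading are assumptions, and it presumes a client-go version whose methods take a context:

package main

import (
	"context"
	"time"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForNamespaceGone polls until the namespace returns NotFound or the
// timeout elapses; on timeout the error is again
// "timed out waiting for the condition".
func waitForNamespaceGone(client kubernetes.Interface, name string, timeout time.Duration) error {
	return wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
		_, err := client.CoreV1().Namespaces().Get(context.TODO(), name, metav1.GetOptions{})
		if apierrors.IsNotFound(err) {
			return true, nil // namespace fully removed
		}
		return false, nil // still present (or a transient error); keep polling
	})
}

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	if err := waitForNamespaceGone(client, "pods-7624", 5*time.Minute); err != nil {
		panic(err)
	}
}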


Kubernetes e2e suite [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] 19m56s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\sProbing\scontainer\swith\sreadiness\sprobe\sshould\snot\sbe\sready\sbefore\sinitial\sdelay\sand\snever\srestart\s\[NodeConformance\]\s\[Conformance\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Unexpected error:
    <*errors.errorString | 0xc0003c7df0>: {
        s: "want pod 'test-webserver-b0e5cc2e-e7b2-4d67-b9ad-b908a2ece37b' on '2837k8s000' to be 'Running' but was 'Pending'",
    }
    want pod 'test-webserver-b0e5cc2e-e7b2-4d67-b9ad-b908a2ece37b' on '2837k8s000' to be 'Running' but was 'Pending'
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:68

Find test-webserver-b0e5cc2e-e7b2-4d67-b9ad-b908a2ece37b mentions in log files | View test history on testgrid


Kubernetes e2e suite [sig-api-machinery] ResourceQuota [Feature:PodPriority] should verify ResourceQuota's priority class scope (quota set to pod count: 1) against a pod with same priority class. 3m15s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-api\-machinery\]\sResourceQuota\s\[Feature\:PodPriority\]\sshould\sverify\sResourceQuota\'s\spriority\sclass\sscope\s\(quota\sset\sto\spod\scount\:\s1\)\sagainst\sa\spod\swith\ssame\spriority\sclass\.$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 13 10:58:34.896: All nodes should be ready after test, Not ready nodes: ", 2837k8s000"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:392

Filter through log files | View test history on testgrid
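This failure, and the similar "All nodes should be ready after test" failures below, come from the post-test node health check: a node (2837k8s000 or 2837k8s001) is reporting NotReady. A minimal sketch of such a readiness check with client-go; the helper name and client setup are illustrative assumptions, not the framework's implementation:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// notReadyNodes lists nodes whose NodeReady condition is not True, the
// property asserted after each test.
func notReadyNodes(client kubernetes.Interface) ([]string, error) {
	nodes, err := client.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		return nil, err
	}
	var notReady []string
	for _, n := range nodes.Items {
		ready := false
		for _, c := range n.Status.Conditions {
			if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
				ready = true
				break
			}
		}
		if !ready {
			notReady = append(notReady, n.Name) // e.g. 2837k8s000 in this run
		}
	}
	return notReady, nil
}

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	names, err := notReadyNodes(client)
	if err != nil {
		panic(err)
	}
	fmt.Println("not ready:", names)
}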


Kubernetes e2e suite [sig-apps] CronJob should schedule multiple jobs concurrently 4m47s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-apps\]\sCronJob\sshould\sschedule\smultiple\sjobs\sconcurrently$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 13 09:12:01.720: All nodes should be ready after test, Not ready nodes: ", 2837k8s001"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:392

Filter through log files | View test history on testgrid


Kubernetes e2e suite [sig-cli] Kubectl client [k8s.io] Kubectl run default should create an rc or deployment from an image [Conformance] 3m10s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-cli\]\sKubectl\sclient\s\[k8s\.io\]\sKubectl\srun\sdefault\sshould\screate\san\src\sor\sdeployment\sfrom\san\simage\s\s\[Conformance\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 13 03:12:44.466: All nodes should be ready after test, Not ready nodes: ", 2837k8s000"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:392

Filter through log files | View test history on testgrid


Kubernetes e2e suite [sig-network] DNS should provide DNS for ExternalName services [Conformance] 9m10s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-network\]\sDNS\sshould\sprovide\sDNS\sfor\sExternalName\sservices\s\[Conformance\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Unexpected error:
    <*errors.errorString | 0xc00027b8c0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_common.go:587

Filter through log files | View test history on testgrid


Kubernetes e2e suite [sig-network] [sig-windows] Networking Granular Checks: Pods should function for intra-pod communication: udp 17m6s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-network\]\s\[sig\-windows\]\sNetworking\sGranular\sChecks\:\sPods\sshould\sfunction\sfor\sintra\-pod\scommunication\:\sudp$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/windows/networking.go:61
Unexpected error:
    <*errors.errorString | 0xc00027b8c0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:659

Filter through log files | View test history on testgrid


Kubernetes e2e suite [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance] 9m21s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-node\]\sConfigMap\sshould\sbe\sconsumable\svia\sthe\senvironment\s\[NodeConformance\]\s\[Conformance\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Unexpected error:
    <*errors.errorString | 0xc002d5fee0>: {
        s: "expected pod \"pod-configmaps-ceb2ab1d-b920-409f-88b1-760ab34c5014\" success: Gave up after waiting 5m0s for pod \"pod-configmaps-ceb2ab1d-b920-409f-88b1-760ab34c5014\" to be \"success or failure\"",
    }
    expected pod "pod-configmaps-ceb2ab1d-b920-409f-88b1-760ab34c5014" success: Gave up after waiting 5m0s for pod "pod-configmaps-ceb2ab1d-b920-409f-88b1-760ab34c5014" to be "success or failure"
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2342

Find pod-configmaps-ceb2ab1d-b920-409f-88b1-760ab34c5014 mentions in log files | View test history on testgrid
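The "Gave up after waiting 5m0s ... to be \"success or failure\"" errors here and in the next failure mean the pod never reached a terminal phase. A small illustrative check of that terminal-phase condition using the core/v1 phase constants (the function name is an assumption, not the framework's):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// podFinished reports whether the pod has reached a terminal phase, i.e.
// roughly the "success or failure" state the test polls for.
func podFinished(pod *corev1.Pod) bool {
	return pod.Status.Phase == corev1.PodSucceeded || pod.Status.Phase == corev1.PodFailed
}

func main() {
	pod := &corev1.Pod{Status: corev1.PodStatus{Phase: corev1.PodPending}}
	fmt.Println(podFinished(pod)) // false: in this run the pod never left Pending
}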


Kubernetes e2e suite [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] 9m20s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-node\]\sDownward\sAPI\sshould\sprovide\spod\sname\,\snamespace\sand\sIP\saddress\sas\senv\svars\s\[NodeConformance\]\s\[Conformance\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Unexpected error:
    <*errors.errorString | 0xc002b51f60>: {
        s: "expected pod \"downward-api-6fa95a5c-59a1-4998-989e-c507ebfab7fd\" success: Gave up after waiting 5m0s for pod \"downward-api-6fa95a5c-59a1-4998-989e-c507ebfab7fd\" to be \"success or failure\"",
    }
    expected pod "downward-api-6fa95a5c-59a1-4998-989e-c507ebfab7fd" success: Gave up after waiting 5m0s for pod "downward-api-6fa95a5c-59a1-4998-989e-c507ebfab7fd" to be "success or failure"
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2342

Find downward-api-6fa95a5c-59a1-4998-989e-c507ebfab7fd mentions in log files | View test history on testgrid


Kubernetes e2e suite [sig-scheduling] SchedulerPreemption [Serial] validates pod anti-affinity works in preemption 13m50s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-scheduling\]\sSchedulerPreemption\s\[Serial\]\svalidates\spod\santi\-affinity\sworks\sin\spreemption$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 13 05:57:22.673: Couldn't delete ns: "sched-preemption-5570": namespace sched-preemption-5570 was not deleted with limit: timed out waiting for the condition, pods remaining: 1 (&errors.errorString{s:"namespace sched-preemption-5570 was not deleted with limit: timed out waiting for the condition, pods remaining: 1"})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:335

Filter through log files | View test history on testgrid


Kubernetes e2e suite [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance] 20m0s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-storage\]\sConfigMap\sbinary\sdata\sshould\sbe\sreflected\sin\svolume\s\[NodeConformance\]\s\[Conformance\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Unexpected error:
    <*errors.errorString | 0xc00027b8c0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:113

Filter through log files | View test history on testgrid


Kubernetes e2e suite [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance] 19m4s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-storage\]\sConfigMap\supdates\sshould\sbe\sreflected\sin\svolume\s\[NodeConformance\]\s\[Conformance\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Unexpected error:
    <*errors.errorString | 0xc00027b8c0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:113

Filter through log files | View test history on testgrid


Kubernetes e2e suite [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance] 20m18s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-storage\]\sDownward\sAPI\svolume\sshould\sprovide\scontainer\'s\scpu\srequest\s\[NodeConformance\]\s\[Conformance\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
wait for pod "downwardapi-volume-a3253a54-30ce-4d50-a925-fa0e8d107cf6" to disappear
Expected success, but got an error:
    <*errors.errorString | 0xc00027b8c0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:178

Find downwardapi-volume-a3253a54-30ce-4d50-a925-fa0e8d107cf6 mentions in log files | View test history on testgrid


Kubernetes e2e suite [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance] 25m5s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-storage\]\sDownward\sAPI\svolume\sshould\sprovide\scontainer\'s\smemory\slimit\s\[NodeConformance\]\s\[Conformance\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
wait for pod "downwardapi-volume-8cdd861f-cfd1-4685-9539-8889e2c45e41" to disappear
Expected success, but got an error:
    <*errors.errorString | 0xc00027b8c0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:178

Find downwardapi-volume-8cdd861f-cfd1-4685-9539-8889e2c45e41 mentions in log files | View test history on testgrid


Kubernetes e2e suite [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] 25m4s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-storage\]\sDownward\sAPI\svolume\sshould\sprovide\snode\sallocatable\s\(memory\)\sas\sdefault\smemory\slimit\sif\sthe\slimit\sis\snot\sset\s\[NodeConformance\]\s\[Conformance\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
wait for pod "downwardapi-volume-6c33b0de-3f85-4aec-b3ed-a12154af1aec" to disappear
Expected success, but got an error:
    <*errors.errorString | 0xc00027b8c0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:178

Find downwardapi-volume-6c33b0de-3f85-4aec-b3ed-a12154af1aec mentions in log files | View test history on testgrid


Kubernetes e2e suite [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance] 19m4s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-storage\]\sDownward\sAPI\svolume\sshould\supdate\slabels\son\smodification\s\[NodeConformance\]\s\[Conformance\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Unexpected error:
    <*errors.errorString | 0xc00027b8c0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:113

Filter through log files | View test history on testgrid


Kubernetes e2e suite [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance] 17m4s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-storage\]\sEmptyDir\swrapper\svolumes\sshould\snot\sconflict\s\[Conformance\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Unexpected error:
    <*errors.errorString | 0xc00027b8c0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:113

Filter through log files | View test history on testgrid


Kubernetes e2e suite [sig-storage] HostPath should support r/w [NodeConformance] 22m4s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-storage\]\sHostPath\sshould\ssupport\sr\/w\s\[NodeConformance\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:65
wait for pod "pod-host-path-test" to disappear
Expected success, but got an error:
    <*errors.errorString | 0xc00027b8c0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:178

Find pod-host-path-test mentions in log files | View test history on testgrid


Kubernetes e2e suite [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] 20m4s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-storage\]\sProjected\sconfigMap\sshould\sbe\sconsumable\sin\smultiple\svolumes\sin\sthe\ssame\spod\s\[NodeConformance\]\s\[Conformance\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
wait for pod "pod-projected-configmaps-232698a2-a53b-4c8b-b9e8-83f1fb6f488e" to disappear
Expected success, but got an error:
    <*errors.errorString | 0xc00027b8c0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:178

Find pod-projected-configmaps-232698a2-a53b-4c8b-b9e8-83f1fb6f488e mentions in log files | View test history on testgrid


Kubernetes e2e suite [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance] 20m6s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-storage\]\sProjected\sdownwardAPI\sshould\sprovide\scontainer\'s\smemory\srequest\s\[NodeConformance\]\s\[Conformance\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
wait for pod "downwardapi-volume-fec6375a-d724-4a2a-b595-3d482afd3176" to disappear
Expected success, but got an error:
    <*errors.errorString | 0xc00027b8c0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:178

Find downwardapi-volume-fec6375a-d724-4a2a-b595-3d482afd3176 mentions in log files | View test history on testgrid


Kubernetes e2e suite [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] 24m29s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-storage\]\sProjected\sdownwardAPI\sshould\sprovide\snode\sallocatable\s\(cpu\)\sas\sdefault\scpu\slimit\sif\sthe\slimit\sis\snot\sset\s\[NodeConformance\]\s\[Conformance\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
wait for pod "downwardapi-volume-ea27af78-7eca-4dda-b188-cfad9b3baf0d" to disappear
Expected success, but got an error:
    <*errors.errorString | 0xc00027b8c0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:178