Result: FAILURE
Tests: 18 failed / 130 succeeded
Started: 2019-05-16 07:54
Elapsed: 8h50m
Builder: gke-prow-containerd-pool-99179761-zd63
pod: bb6c2016-77af-11e9-a5e8-0a580a6c0c11
resultstore: https://source.cloud.google.com/results/invocations/c150c07a-f524-4382-8f21-b22cdfb3d17e/targets/test
infra-commit: 822b6386f
job-version: v1.13.7-beta.0.4+de38c05974fd21
node_os_image: ubuntu-gke-1804-d1703-0-v20190514
revision: v1.13.7-beta.0.4+de38c05974fd21

Test Failures


Test 8h34m

error during ./hack/ginkgo-e2e.sh --ginkgo.focus=\[Serial\]|\[Disruptive\] --ginkgo.skip=\[Flaky\]|\[Feature:.+\] --minStartupPods=8 --num-nodes=3 --report-dir=/workspace/_artifacts --disable-log-dump=true: exit status 1
from junit_runner.xml

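The failing invocation can be replayed outside CI. A hedged repro sketch, assuming a kubernetes/kubernetes checkout at the job version above, a built e2e.test binary, and a kubeconfig pointing at a comparable 3-node cluster; the flags are taken verbatim from the error above, with shell quoting added so the regex alternation survives the shell:

    # Replay the same [Serial]|[Disruptive] selection, skipping [Flaky] and [Feature:*] specs.
    ./hack/ginkgo-e2e.sh \
      '--ginkgo.focus=\[Serial\]|\[Disruptive\]' \
      '--ginkgo.skip=\[Flaky\]|\[Feature:.+\]' \
      --minStartupPods=8 --num-nodes=3 \
      --report-dir=/tmp/_artifacts --disable-log-dump=true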


[k8s.io] Downward API [Serial] [Disruptive] [NodeFeature:EphemeralStorage] Downward API tests for local ephemeral storage should provide default limits.ephemeral-storage from node allocatable 3m15s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Downward\sAPI\s\[Serial\]\s\[Disruptive\]\s\[NodeFeature\:EphemeralStorage\]\sDownward\sAPI\stests\sfor\slocal\sephemeral\sstorage\sshould\sprovide\sdefault\slimits\.ephemeral\-storage\sfrom\snode\sallocatable$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 16 15:09:42.617: All nodes should be ready after test, Not ready nodes: ", gke-test-7671b0fd74-default-pool-9658d027-zs98"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:402
from junit_01.xml

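Most of the failures below trip the same teardown assertion rather than their test body: the framework requires every node to be Ready after each spec, and gke-test-7671b0fd74-default-pool-9658d027-zs98 never recovered. A hedged triage sketch, assuming kubectl access to the cluster while it still exists (the node name comes from the failure message above):

    # List readiness across the pool, then dump the stuck node's conditions.
    kubectl get nodes
    kubectl describe node gke-test-7671b0fd74-default-pool-9658d027-zs98 | grep -A8 'Conditions:'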


[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance] 3m25s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=\[sig\-apps\]\sDaemon\sset\s\[Serial\]\sshould\sretry\screating\sfailed\sdaemon\spods\s\[Conformance\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 16 15:37:26.707: All nodes should be ready after test, Not ready nodes: ", gke-test-7671b0fd74-default-pool-9658d027-zs98"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:402
from junit_01.xml



[sig-apps] Network Partition [Disruptive] [Slow] [k8s.io] [StatefulSet] should not reschedule stateful pods if there is a network partition [Slow] [Disruptive] 11m19s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=\[sig\-apps\]\sNetwork\sPartition\s\[Disruptive\]\s\[Slow\]\s\[k8s\.io\]\s\[StatefulSet\]\sshould\snot\sreschedule\sstateful\spods\sif\sthere\sis\sa\snetwork\spartition\s\[Slow\]\s\[Disruptive\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/network_partition.go:392
Pod was not deleted during network partition.
Expected
    <*errors.StatusError | 0xc000d1f3b0>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: "", Continue: ""},
            Status: "Failure",
            Message: "Unauthorized",
            Reason: "Unauthorized",
            Details: nil,
            Code: 401,
        },
    }
to equal
    <*errors.errorString | 0xc0000d1860>: {
        s: "timed out waiting for the condition",
    }
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/network_partition.go:410
from junit_01.xml

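This failure is different in kind: the wait loop surfaced a 401 Unauthorized from the apiserver instead of the expected timeout, which usually points at the test client's credentials going stale mid-run (plausible over an 8h50m job). A hedged check, assuming the same kubeconfig the suite used:

    # Confirm whether the kubeconfig can still authenticate against the apiserver.
    kubectl get --raw /version
    kubectl auth can-i list pods --all-namespaces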


[sig-autoscaling] [HPA] Horizontal pod autoscaling (scale resource: CPU) [sig-autoscaling] [Serial] [Slow] Deployment Should scale from 1 pod to 3 pods and from 3 to 5 6m22s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=\[sig\-autoscaling\]\s\[HPA\]\sHorizontal\spod\sautoscaling\s\(scale\sresource\:\sCPU\)\s\[sig\-autoscaling\]\s\[Serial\]\s\[Slow\]\sDeployment\sShould\sscale\sfrom\s1\spod\sto\s3\spods\sand\sfrom\s3\sto\s5$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 16 16:12:29.367: All nodes should be ready after test, Not ready nodes: ", gke-test-7671b0fd74-default-pool-9658d027-zs98"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:402
from junit_01.xml



[sig-autoscaling] [HPA] Horizontal pod autoscaling (scale resource: CPU) [sig-autoscaling] [Serial] [Slow] ReplicaSet Should scale from 1 pod to 3 pods and from 3 to 5 6m29s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=\[sig\-autoscaling\]\s\[HPA\]\sHorizontal\spod\sautoscaling\s\(scale\sresource\:\sCPU\)\s\[sig\-autoscaling\]\s\[Serial\]\s\[Slow\]\sReplicaSet\sShould\sscale\sfrom\s1\spod\sto\s3\spods\sand\sfrom\s3\sto\s5$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 16 16:18:57.286: All nodes should be ready after test, Not ready nodes: ", gke-test-7671b0fd74-default-pool-9658d027-zs98"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:402
from junit_01.xml



[sig-network] DNS configMap nameserver [IPv4] Forward external name lookup should forward externalname lookup to upstream nameserver [Slow][Serial] 4m21s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=\[sig\-network\]\sDNS\sconfigMap\snameserver\s\[IPv4\]\sForward\sexternal\sname\slookup\sshould\sforward\sexternalname\slookup\sto\supstream\snameserver\s\[Slow\]\[Serial\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 16 15:52:39.066: All nodes should be ready after test, Not ready nodes: ", gke-test-7671b0fd74-default-pool-9658d027-zs98"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:402
from junit_01.xml



[sig-scheduling] NoExecuteTaintManager Multiple Pods [Serial] evicts pods with minTolerationSeconds 4m50s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=\[sig\-scheduling\]\sNoExecuteTaintManager\sMultiple\sPods\s\[Serial\]\sevicts\spods\swith\sminTolerationSeconds$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 16 15:17:46.517: All nodes should be ready after test, Not ready nodes: ", gke-test-7671b0fd74-default-pool-9658d027-zs98"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:402
from junit_01.xml



[sig-scheduling] NoExecuteTaintManager Single Pod [Serial] doesn't evict pod with tolerations from tainted nodes 5m37s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=\[sig\-scheduling\]\sNoExecuteTaintManager\sSingle\sPod\s\[Serial\]\sdoesn\'t\sevict\spod\swith\stolerations\sfrom\stainted\snodes$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 16 16:27:31.920: All nodes should be ready after test, Not ready nodes: ", gke-test-7671b0fd74-default-pool-9658d027-zs98"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:402
from junit_01.xml



[sig-scheduling] SchedulerPredicates [Serial] validates MaxPods limit number of pods that are allowed to run [Slow] 10m56s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=\[sig\-scheduling\]\sSchedulerPredicates\s\[Serial\]\svalidates\sMaxPods\slimit\snumber\sof\spods\sthat\sare\sallowed\sto\srun\s\[Slow\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:102
Expected error:
    <*errors.errorString | 0xc0000d1860>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:714
from junit_01.xml

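The MaxPods spec schedules pods up to each node's allocatable pod count and then asserts that one more stays pending, so a bare wait timeout here is consistent with pods never starting on the degraded node. A hedged sketch for inspecting the per-node ceiling the test fills (the column name is illustrative):

    # Print each node's allocatable pod capacity, the limit the MaxPods spec fills.
    kubectl get nodes -o custom-columns=NAME:.metadata.name,MAXPODS:.status.allocatable.pods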


[sig-scheduling] TaintBasedEvictions [Serial] Checks that the node becomes unreachable 1m51s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=\[sig\-scheduling\]\sTaintBasedEvictions\s\[Serial\]\sChecks\sthat\sthe\snode\sbecomes\sunreachable$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/taint_based_evictions.go:75
Expected error:
    <*errors.errorString | 0xc0018b6fa0>: {
        s: "expect node gke-test-7671b0fd74-default-pool-9658d027-d52m to have taint = true within 30s: timed out waiting for the condition",
    }
    expect node gke-test-7671b0fd74-default-pool-9658d027-d52m to have taint = true within 30s: timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/taint_based_evictions.go:162
from junit_01.xml

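TaintBasedEvictions expects a taint to land on the partitioned node within 30s, and the assertion text embeds the node name, so the taint state can be checked directly. A hedged sketch, assuming the cluster is still reachable:

    # Empty output means no taints landed on the node (the test expected one within 30s).
    kubectl get node gke-test-7671b0fd74-default-pool-9658d027-d52m -o jsonpath='{.spec.taints}'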


[sig-storage] CSI Volumes [Driver: csi-hostpath-v0] [Testpattern: Dynamic PV (default fs)] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow] 5m14s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=\[sig\-storage\]\sCSI\sVolumes\s\[Driver\:\scsi\-hostpath\-v0\]\s\[Testpattern\:\sDynamic\sPV\s\(default\sfs\)\]\ssubPath\sshould\sunmount\sif\spod\sis\sgracefully\sdeleted\swhile\skubelet\sis\sdown\s\[Disruptive\]\[Slow\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 16 15:57:57.974: All nodes should be ready after test, Not ready nodes: ", gke-test-7671b0fd74-default-pool-9658d027-zs98"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:402
from junit_01.xml



[sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] subPath should support restarting containers using directory as subpath [Slow] 5m25s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=\[sig\-storage\]\sCSI\sVolumes\s\[Driver\:\spd\.csi\.storage\.gke\.io\]\[Serial\]\s\[Testpattern\:\sDynamic\sPV\s\(default\sfs\)\]\ssubPath\sshould\ssupport\srestarting\scontainers\susing\sdirectory\sas\ssubpath\s\[Slow\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 16 16:33:15.032: All nodes should be ready after test, Not ready nodes: ", gke-test-7671b0fd74-default-pool-9658d027-zs98"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:402
from junit_01.xml



[sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] volumes should be mountable 4m30s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=\[sig\-storage\]\sCSI\sVolumes\s\[Driver\:\spd\.csi\.storage\.gke\.io\]\[Serial\]\s\[Testpattern\:\sDynamic\sPV\s\(default\sfs\)\]\svolumes\sshould\sbe\smountable$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 16 16:37:41.494: All nodes should be ready after test, Not ready nodes: ", gke-test-7671b0fd74-default-pool-9658d027-zs98"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:402
from junit_01.xml



[sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ext4)] volumes should be mountable 4m31s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=\[sig\-storage\]\sCSI\sVolumes\s\[Driver\:\spd\.csi\.storage\.gke\.io\]\[Serial\]\s\[Testpattern\:\sDynamic\sPV\s\(ext4\)\]\svolumes\sshould\sbe\smountable$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 16 15:25:30.539: All nodes should be ready after test, Not ready nodes: ", gke-test-7671b0fd74-default-pool-9658d027-zs98"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:402
from junit_01.xml



[sig-storage] GenericPersistentVolume[Disruptive] When kubelet restarts Should test that a volume mounted to a pod that is force deleted while the kubelet is down unmounts when the kubelet returns. 4m55s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=\[sig\-storage\]\sGenericPersistentVolume\[Disruptive\]\sWhen\skubelet\srestarts\sShould\stest\sthat\sa\svolume\smounted\sto\sa\spod\sthat\sis\sforce\sdeleted\swhile\sthe\skubelet\sis\sdown\sunmounts\swhen\sthe\skubelet\sreturns\.$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 16 16:06:06.987: All nodes should be ready after test, Not ready nodes: ", gke-test-7671b0fd74-default-pool-9658d027-zs98"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:402