Result: FAILURE
Tests: 30 failed / 736 succeeded
Started: 2019-07-20 05:45
Elapsed: 1h3m
Builder: gke-prow-ssd-pool-1a225945-8w66
Pod: 65d164f0-aab1-11e9-b82b-365474bd0c86
Resultstore: https://source.cloud.google.com/results/invocations/5302e4fc-0e08-43e5-b4ad-0dc41dd6faad/targets/test
infra-commit: a7f2c5488
job-version: v1.14.5-beta.0.1+7936da50c68f42
node_os_image: cos-u-73-11647-239-0
revision: v1.14.5-beta.0.1+7936da50c68f42

Test Failures


Test 43m3s

error during ./hack/ginkgo-e2e.sh --ginkgo.skip=\[Slow\]|\[Serial\]|\[Disruptive\]|\[Flaky\]|\[Feature:.+\] --minStartupPods=8 --num-nodes=3 --report-dir=/workspace/_artifacts --disable-log-dump=true: exit status 1
				from junit_runner.xml



[k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] 15m3s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Probing\scontainer\sshould\s\*not\*\sbe\srestarted\swith\sa\s\/healthz\shttp\sliveness\sprobe\s\[NodeConformance\]\s\[Conformance\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
Jul 20 06:42:10.330: Couldn't delete ns: "container-probe-6209": namespace container-probe-6209 was not deleted with limit: timed out waiting for the condition, namespaced content other than pods remain (&errors.errorString{s:"namespace container-probe-6209 was not deleted with limit: timed out waiting for the condition, namespaced content other than pods remain"})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:338
				
from junit_21.xml

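The repro commands in each entry pass the full test name to `--ginkgo.focus` as an anchored regex: metacharacters are backslash-escaped and each space becomes `\s`. A minimal sketch of building such a focus pattern (the `focus_pattern` helper is hypothetical, not part of the test harness):

```python
import re

def focus_pattern(test_name: str) -> str:
    """Hypothetical helper: turn a full e2e test name into a ginkgo
    focus regex in the style of the repro commands above."""
    escaped = re.escape(test_name)
    # re.escape escapes spaces as "\ " on Python 3.7+; older versions
    # may leave them bare, so handle both before anchoring with "$".
    return escaped.replace("\\ ", r"\s").replace(" ", r"\s") + "$"

name = ("[k8s.io] Probing container should *not* be restarted with a "
        "/healthz http liveness probe [NodeConformance] [Conformance]")
pattern = focus_pattern(name)
# The generated pattern selects exactly this test name.
assert re.search(pattern, name)
```

This only reconstructs the escaping convention visible in the commands above; the actual patterns were generated by the job tooling, and equivalent patterns can differ in which characters they escape (e.g. `/`).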


[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should adopt matching orphans and release non-matching pods 11m50s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=\[sig\-apps\]\sStatefulSet\s\[k8s\.io\]\sBasic\sStatefulSet\sfunctionality\s\[StatefulSetBasic\]\sshould\sadopt\smatching\sorphans\sand\srelease\snon\-matching\spods$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:134
wait for pod "ss-0" to be readopted
Expected success, but got an error:
    <*errors.errorString | 0xc000305200>: {
        s: "Gave up after waiting 10m0s for pod \"ss-0\" to be \"adopted\"",
    }
    Gave up after waiting 10m0s for pod "ss-0" to be "adopted"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:177
				
from junit_10.xml



[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should not deadlock when a pod's predecessor fails 9m7s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=\[sig\-apps\]\sStatefulSet\s\[k8s\.io\]\sBasic\sStatefulSet\sfunctionality\s\[StatefulSetBasic\]\sshould\snot\sdeadlock\swhen\sa\spod\'s\spredecessor\sfails$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:219
Unexpected error:
    <*url.Error | 0xc001e8a810>: {
        Op: "Get",
        URL: "https://35.230.58.114/api/v1/namespaces/statefulset-8622/pods?labelSelector=baz%3Dblah%2Cfoo%3Dbar",
        Err: {LastStreamID: 5597, ErrCode: 0, DebugData: ""},
    }
    Get https://35.230.58.114/api/v1/namespaces/statefulset-8622/pods?labelSelector=baz%3Dblah%2Cfoo%3Dbar: http2: server sent GOAWAY and closed the connection; LastStreamID=5597, ErrCode=NO_ERROR, debug=""
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/statefulset_utils.go:268
				
from junit_28.xml



[sig-cli] Kubectl client [k8s.io] Simple pod should support inline execution and attach 56s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=\[sig\-cli\]\sKubectl\sclient\s\[k8s\.io\]\sSimple\spod\sshould\ssupport\sinline\sexecution\sand\sattach$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:513
Jul 20 06:18:57.105: Pod "run-test-zrhfk" of Job "run-test" should still be running
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:528
				
from junit_03.xml



[sig-network] Services should be able to up and down services 12m7s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=\[sig\-network\]\sServices\sshould\sbe\sable\sto\sup\sand\sdown\sservices$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
Jul 20 06:38:41.011: Couldn't delete ns: "services-4077": Get https://35.230.58.114/api?timeout=32s: dial tcp 35.230.58.114:443: connect: connection refused (&url.Error{Op:"Get", URL:"https://35.230.58.114/api?timeout=32s", Err:(*net.OpError)(0xc0002ff4a0)})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:338
				
from junit_17.xml



[sig-scheduling] PreemptionExecutionPath runs ReplicaSets to verify preemption running path 14m26s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=\[sig\-scheduling\]\sPreemptionExecutionPath\sruns\sReplicaSets\sto\sverify\spreemption\srunning\spath$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:453
Unexpected error:
    <*errors.errorString | 0xc002068ca0>: {
        s: "replicaset \"rs-pod4\" never had desired number of .status.availableReplicas",
    }
    replicaset "rs-pod4" never had desired number of .status.availableReplicas
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:618
				
from junit_18.xml



[sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should support existing directory 11m3s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=\[sig\-storage\]\sCSI\sVolumes\s\[Driver\:\scsi\-hostpath\]\s\[Testpattern\:\sDynamic\sPV\s\(default\sfs\)\]\ssubPath\sshould\ssupport\sexisting\sdirectory$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
Jul 20 06:38:57.278: Couldn't delete ns: "provisioning-9010": Get https://35.230.58.114/api?timeout=32s: dial tcp 35.230.58.114:443: connect: connection refused (&url.Error{Op:"Get", URL:"https://35.230.58.114/api?timeout=32s", Err:(*net.OpError)(0xc001c6cbe0)})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:338
				
from junit_13.xml



[sig-storage] CSI mock volume CSI workload information using mock driver should not be passed when podInfoOnMount=nil 13m8s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=\[sig\-storage\]\sCSI\smock\svolume\sCSI\sworkload\sinformation\susing\smock\sdriver\sshould\snot\sbe\spassed\swhen\spodInfoOnMount\=nil$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:327
Failed to get CSIDriver : gave up after waiting 4m0s for CSIDriver "csi-mock-csi-mock-volumes-5223".
Unexpected error:
    <*errors.errorString | 0xc001e7c530>: {
        s: "gave up after waiting 4m0s for CSIDriver \"csi-mock-csi-mock-volumes-5223\".",
    }
    gave up after waiting 4m0s for CSIDriver "csi-mock-csi-mock-volumes-5223".
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:127
				
from junit_22.xml



[sig-storage] Dynamic Provisioning [k8s.io] GlusterDynamicProvisioner should create and delete persistent volumes [fast] 6m39s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=\[sig\-storage\]\sDynamic\sProvisioning\s\[k8s\.io\]\sGlusterDynamicProvisioner\sshould\screate\sand\sdelete\spersistent\svolumes\s\[fast\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:820
Unexpected error:
    <*errors.errorString | 0xc002072550>: {
        s: "PersistentVolumeClaims [pvc-plv28] not all in phase Bound within 5m0s",
    }
    PersistentVolumeClaims [pvc-plv28] not all in phase Bound within 5m0s
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/provisioning.go:314
				
from junit_04.xml



[sig-storage] EmptyDir volumes when FSGroup is specified [NodeFeature:FSGroup] files with FSGroup ownership should support (root,0644,tmpfs) 10m13s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=\[sig\-storage\]\sEmptyDir\svolumes\swhen\sFSGroup\sis\sspecified\s\[NodeFeature\:FSGroup\]\sfiles\swith\sFSGroup\sownership\sshould\ssupport\s\(root\,0644\,tmpfs\)$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
Jul 20 06:39:01.899: Couldn't delete ns: "emptydir-5598": Get https://35.230.58.114/api?timeout=32s: dial tcp 35.230.58.114:443: connect: connection refused (&url.Error{Op:"Get", URL:"https://35.230.58.114/api?timeout=32s", Err:(*net.OpError)(0xc0001ad220)})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:338
				
from junit_08.xml



[sig-storage] EmptyDir volumes when FSGroup is specified [NodeFeature:FSGroup] new files should be created with FSGroup ownership when container is non-root 12m20s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=\[sig\-storage\]\sEmptyDir\svolumes\swhen\sFSGroup\sis\sspecified\s\[NodeFeature\:FSGroup\]\snew\sfiles\sshould\sbe\screated\swith\sFSGroup\sownership\swhen\scontainer\sis\snon\-root$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149
Unexpected error:
    <*errors.errorString | 0xc000279400>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:220
				
from junit_14.xml



[sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Dynamic PV (ext4)] volumes should be mountable 13m4s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=\[sig\-storage\]\sIn\-tree\sVolumes\s\[Driver\:\sgcepd\]\s\[Testpattern\:\sDynamic\sPV\s\(ext4\)\]\svolumes\sshould\sbe\smountable$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
Jul 20 06:40:46.350: Couldn't delete ns: "volume-9597": Get https://35.230.58.114/api?timeout=32s: dial tcp 35.230.58.114:443: i/o timeout (&url.Error{Op:"Get", URL:"https://35.230.58.114/api?timeout=32s", Err:(*net.OpError)(0xc002780000)})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:338
				
from junit_16.xml



[sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Dynamic PV (xfs)] volumes should allow exec of files on the volume 5m58s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=\[sig\-storage\]\sIn\-tree\sVolumes\s\[Driver\:\sgcepd\]\s\[Testpattern\:\sDynamic\sPV\s\(xfs\)\]\svolumes\sshould\sallow\sexec\sof\sfiles\son\sthe\svolume$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:167
Unexpected error:
    <*errors.errorString | 0xc0020fd7f0>: {
        s: "expected pod \"exec-volume-test-gcepd-dynamicpv-gnrg\" success: Gave up after waiting 5m0s for pod \"exec-volume-test-gcepd-dynamicpv-gnrg\" to be \"success or failure\"",
    }
    expected pod "exec-volume-test-gcepd-dynamicpv-gnrg" success: Gave up after waiting 5m0s for pod "exec-volume-test-gcepd-dynamicpv-gnrg" to be "success or failure"
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2432
				
from junit_08.xml



[sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Dynamic PV (xfs)] volumes should be mountable 6m9s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=\[sig\-storage\]\sIn\-tree\sVolumes\s\[Driver\:\sgcepd\]\s\[Testpattern\:\sDynamic\sPV\s\(xfs\)\]\svolumes\sshould\sbe\smountable$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:136
Unexpected error:
    <*errors.errorString | 0xc002b9e830>: {
        s: "Gave up after waiting 5m0s for pod \"gcepd-injector-g4nb\" to be \"success or failure\"",
    }
    Gave up after waiting 5m0s for pod "gcepd-injector-g4nb" to be "success or failure"
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/volume_util.go:545