Result: FAILURE
Tests: 4 failed / 176 succeeded
Started: 2019-07-19 04:58
Elapsed: 6h54m
Revision:
Builder: gke-prow-ssd-pool-1a225945-hwdp
pod: ad4bee9b-a9e1-11e9-a872-ea55029f82d3
resultstore: https://source.cloud.google.com/results/invocations/be40f6a1-21be-4138-986e-fe8e31d444e7/targets/test
infra-commit: 6d1f00ed0
job-version: v1.15.2-beta.0.1+92b2e906d7aa61
master_os_image: cos-73-11647-163-0
node_os_image: cos-69-10895-299-0
revision: v1.15.2-beta.0.1+92b2e906d7aa61

Test Failures


Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (xfs)] volumes should allow exec of files on the volume 5m34s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-storage\]\sCSI\sVolumes\s\[Driver\:\spd\.csi\.storage\.gke\.io\]\[Serial\]\s\[Testpattern\:\sDynamic\sPV\s\(xfs\)\]\svolumes\sshould\sallow\sexec\sof\sfiles\son\sthe\svolume$'
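The --ginkgo.focus value is a regular expression matched against the full spec name; the backslashes escape characters that are significant to the shell or the regex engine. As a sanity check before a multi-hour e2e run, the pattern can be exercised outside the harness; this is an illustrative Python sketch (ginkgo itself uses Go's RE2 regexp, but the syntax here is common to both):

```python
import re

# Focus pattern from the repro command above, as a raw Python string.
pattern = (
    r"Kubernetes\se2e\ssuite\s\[sig\-storage\]\sCSI\sVolumes\s"
    r"\[Driver\:\spd\.csi\.storage\.gke\.io\]\[Serial\]\s"
    r"\[Testpattern\:\sDynamic\sPV\s\(xfs\)\]\svolumes\s"
    r"should\sallow\sexec\sof\sfiles\son\sthe\svolume$"
)

# The full name of the failing test, as reported above.
test_name = (
    "Kubernetes e2e suite [sig-storage] CSI Volumes "
    "[Driver: pd.csi.storage.gke.io][Serial] "
    "[Testpattern: Dynamic PV (xfs)] volumes "
    "should allow exec of files on the volume"
)

# True: the focus regex selects exactly this test.
print(re.search(pattern, test_name) is not None)
```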
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:173
Unexpected error:
    <*errors.errorString | 0xc0022c4770>: {
        s: "expected pod \"exec-volume-test-pd-csi-storage-gke-io-dynamicpv-5js6\" success: Gave up after waiting 5m0s for pod \"exec-volume-test-pd-csi-storage-gke-io-dynamicpv-5js6\" to be \"success or failure\"",
    }
    expected pod "exec-volume-test-pd-csi-storage-gke-io-dynamicpv-5js6" success: Gave up after waiting 5m0s for pod "exec-volume-test-pd-csi-storage-gke-io-dynamicpv-5js6" to be "success or failure"
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2342
stdout/stderr from junit_01.xml

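Both "Gave up after waiting 5m0s" failures in this run come from the framework polling the pod until it reaches a terminal phase or a deadline expires. A minimal sketch of that pattern, in illustrative Python rather than the framework's actual Go code (get_phase is a hypothetical stand-in for an API-server query):

```python
import time

def wait_for_pod_success(get_phase, timeout=300, interval=1):
    """Poll a pod's phase until it is terminal or the timeout elapses --
    the same shape as the framework's 5m0s wait that gave up above.
    get_phase is any callable returning the current phase string."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        phase = get_phase()
        if phase in ("Succeeded", "Failed"):
            return phase
        time.sleep(interval)
    raise TimeoutError(
        f'Gave up after waiting {timeout}s for pod to be "success or failure"')

# Stub demo: a pod that succeeds on the third poll.
phases = iter(["Pending", "Running", "Succeeded"])
print(wait_for_pod_success(lambda: next(phases), timeout=5, interval=0))
```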


Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (xfs)] volumes should be mountable 5m29s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-storage\]\sCSI\sVolumes\s\[Driver\:\spd\.csi\.storage\.gke\.io\]\[Serial\]\s\[Testpattern\:\sDynamic\sPV\s\(xfs\)\]\svolumes\sshould\sbe\smountable$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:142
Unexpected error:
    <*errors.errorString | 0xc00426d500>: {
        s: "Gave up after waiting 5m0s for pod \"gcepd-injector-kn5h\" to be \"success or failure\"",
    }
    Gave up after waiting 5m0s for pod "gcepd-injector-kn5h" to be "success or failure"
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/volume/fixtures.go:570
stdout/stderr from junit_01.xml



Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (default fs)] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly] 13s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-storage\]\sIn\-tree\sVolumes\s\[Driver\:\slocal\]\[LocalVolumeType\:\sblock\]\s\[Testpattern\:\sPre\-provisioned\sPV\s\(default\sfs\)\]\ssubPath\sshould\sunmount\sif\spod\sis\sforce\sdeleted\swhile\skubelet\sis\sdown\s\[Disruptive\]\[Slow\]\[LinuxOnly\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:328
Unexpected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://35.230.28.255 --kubeconfig=/workspace/.kube/config exec --namespace=provisioning-2332 hostexec-test-c546202c9f-minion-group-zfw6 -- nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(sudo losetup | grep /tmp/local-driver-5e8c93a0-cd65-4fe9-8a37-5a49b71f4471/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] []  <nil> rpc error: code = 2 desc = oci runtime error: exec failed: container_linux.go:247: starting container process caused \"process_linux.go:87: adding pid 31237 to cgroups caused \\\"failed to write 31237 to cgroup.procs: write /sys/fs/cgroup/cpu,cpuacct/kubepods/besteffort/pod7a88a2e2-bec9-4421-9898-a013c1918591/1dbc7153c163fa73464a6951c91e5e354efee5fedd6b266823b9a15bb84f4837/cgroup.procs: invalid argument\\\"\"\n\r\n command terminated with exit code 126\n [] <nil> 0xc003d19200 exit status 126 <nil> <nil> true [0xc002c66fc8 0xc002c66fe0 0xc002c66ff8] [0xc002c66fc8 0xc002c66fe0 0xc002c66ff8] [0xc002c66fd8 0xc002c66ff0] [0x9d17b0 0x9d17b0] 0xc0027778c0 <nil>}:\nCommand stdout:\nrpc error: code = 2 desc = oci runtime error: exec failed: container_linux.go:247: starting container process caused \"process_linux.go:87: adding pid 31237 to cgroups caused \\\"failed to write 31237 to cgroup.procs: write /sys/fs/cgroup/cpu,cpuacct/kubepods/besteffort/pod7a88a2e2-bec9-4421-9898-a013c1918591/1dbc7153c163fa73464a6951c91e5e354efee5fedd6b266823b9a15bb84f4837/cgroup.procs: invalid argument\\\"\"\n\r\n\nstderr:\ncommand terminated with exit code 126\n\nerror:\nexit status 126",
        },
        Code: 126,
    }
    error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://35.230.28.255 --kubeconfig=/workspace/.kube/config exec --namespace=provisioning-2332 hostexec-test-c546202c9f-minion-group-zfw6 -- nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c E2E_LOOP_DEV=$(sudo losetup | grep /tmp/local-driver-5e8c93a0-cd65-4fe9-8a37-5a49b71f4471/file | awk '{ print $1 }') 2>&1 > /dev/null && echo ${E2E_LOOP_DEV}] []  <nil> rpc error: code = 2 desc = oci runtime error: exec failed: container_linux.go:247: starting container process caused "process_linux.go:87: adding pid 31237 to cgroups caused \"failed to write 31237 to cgroup.procs: write /sys/fs/cgroup/cpu,cpuacct/kubepods/besteffort/pod7a88a2e2-bec9-4421-9898-a013c1918591/1dbc7153c163fa73464a6951c91e5e354efee5fedd6b266823b9a15bb84f4837/cgroup.procs: invalid argument\""
    
     command terminated with exit code 126
     [] <nil> 0xc003d19200 exit status 126 <nil> <nil> true [0xc002c66fc8 0xc002c66fe0 0xc002c66ff8] [0xc002c66fc8 0xc002c66fe0 0xc002c66ff8] [0xc002c66fd8 0xc002c66ff0] [0x9d17b0 0x9d17b0] 0xc0027778c0 <nil>}:
    Command stdout:
    rpc error: code = 2 desc = oci runtime error: exec failed: container_linux.go:247: starting container process caused "process_linux.go:87: adding pid 31237 to cgroups caused \"failed to write 31237 to cgroup.procs: write /sys/fs/cgroup/cpu,cpuacct/kubepods/besteffort/pod7a88a2e2-bec9-4421-9898-a013c1918591/1dbc7153c163fa73464a6951c91e5e354efee5fedd6b266823b9a15bb84f4837/cgroup.procs: invalid argument\""
    
    
    stderr:
    command terminated with exit code 126
    
    error:
    exit status 126
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/local.go:140
stdout/stderr from junit_01.xml

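The command that exited 126 above was trying to discover which loop device backs the local driver's file via a losetup | grep | awk pipeline; the actual failure is in attaching the exec process to the container's cgroup, not in the pipeline itself. A stand-alone sketch of the lookup, run here against canned losetup output (the real command needs root and an actual loop device on the node):

```shell
# File whose backing loop device we want, as in the failed command above.
backing_file="/tmp/local-driver-5e8c93a0-cd65-4fe9-8a37-5a49b71f4471/file"

# Canned stand-in for `sudo losetup` output (hypothetical /dev/loop0 entry).
fake_losetup_output='NAME       SIZELIMIT OFFSET AUTOCLEAR RO BACK-FILE
/dev/loop0         0      0         0  0 /tmp/local-driver-5e8c93a0-cd65-4fe9-8a37-5a49b71f4471/file'

# Same pipeline shape as the e2e helper: match the backing file, keep column 1.
E2E_LOOP_DEV=$(printf '%s\n' "$fake_losetup_output" | grep "$backing_file" | awk '{ print $1 }')
echo "$E2E_LOOP_DEV"
```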


Test 6h34m

error during ./hack/ginkgo-e2e.sh --ginkgo.focus=\[Serial\]|\[Disruptive\] --ginkgo.skip=\[Flaky\]|\[Feature:.+\] --minStartupPods=8 --report-dir=/workspace/_artifacts --disable-log-dump=true: exit status 1
from junit_runner.xml

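The ginkgo-e2e.sh invocation above selects specs by regex over the full test name: run anything tagged [Serial] or [Disruptive], but skip [Flaky] and [Feature:*] specs. A rough illustration of that selection logic (Python sketch with hypothetical sample names; ginkgo's own matching is Go RE2):

```python
import re

# Focus and skip patterns from the ginkgo-e2e.sh flags above.
focus = re.compile(r"\[Serial\]|\[Disruptive\]")
skip = re.compile(r"\[Flaky\]|\[Feature:.+\]")

def selected(test_name):
    """Roughly how ginkgo decides whether a spec runs under these flags."""
    return focus.search(test_name) is not None and skip.search(test_name) is None

print(selected("[sig-storage] CSI Volumes [Serial] volumes should be mountable"))  # True
print(selected("[sig-node] kubelet [Serial] [Flaky] restarts"))                    # False
print(selected("[sig-storage] plain conformance test"))                            # False
```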


176 tests passed. 4251 tests skipped.