Result: FAILURE
Tests: 5 failed / 210 succeeded
Started: 2019-06-03 00:55
Elapsed: 48m29s

Revision
Builder: gke-prow-disk-large-pool-2-99d82bdc-zxbx
pod: 2245763d-859a-11e9-952b-0a580a2c5704
infra-commit: ba215daf8
job-version: v1.14.3-beta.0.32+051c16a0058667
repo: k8s.io/kubernetes
repo-commit: 051c16a0058667f182f895c7ad0f1bb623b65911
repos: {u'k8s.io/kubernetes': u'release-1.14'}
revision: v1.14.3-beta.0.32+051c16a0058667

Test Failures


Node Tests 45m50s

error during go run /go/src/k8s.io/kubernetes/test/e2e_node/runner/remote/run_remote.go --cleanup --logtostderr --vmodule=*=4 --ssh-env=gce --results-dir=/workspace/_artifacts --project=gke-os-images-testing-06 --zone=us-west1-b --ssh-user=prow --ssh-key=/workspace/.ssh/google_compute_engine --ginkgo-flags=--nodes=8 --skip="\[Flaky\]|\[Serial\]|\[NodeAlphaFeature:.+\]" --test_args= --test-timeout=2h0m0s --instance-metadata=user-data<test/e2e_node/jenkins/gci-init.yaml,gci-update-strategy=update_disabled --images=cos-u-73-11647-192-0 --image-project=gke-node-images-test: exit status 1
from junit_runner.xml



[k8s.io] ResourceMetricsAPI when querying /resource/metrics should report resource usage through the v1alpha1 resouce metrics api 2m54s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=ResourceMetricsAPI\swhen\squerying\s\/resource\/metrics\sshould\sreport\sresource\susage\sthrough\sthe\sv1alpha1\sresouce\smetrics\sapi$'
_output/local/go/src/k8s.io/kubernetes/test/e2e_node/resource_metrics_test.go:65
Timed out after 60.000s.
Expected
    <string>: KubeletMetrics
to match keys: {
[."container_memory_working_set_bytes":
	unexpected element kubelet-test-9080::busybox-readonly-fs33580bc6-859b-11e9-8be5-42010a8a0015::busybox-readonly-fs33580bc6-859b-11e9-8be5-42010a8a0015, ."container_memory_working_set_bytes":
	unexpected element container-probe-8544::liveness-http::liveness, ."container_memory_working_set_bytes":
	unexpected element container-probe-4096::test-webserver-c6ce2bf7-859b-11e9-8f98-42010a8a0015::test-webserver]
}

_output/local/go/src/k8s.io/kubernetes/test/e2e_node/resource_metrics_test.go:91
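The matcher output above means the kubelet's resource metrics endpoint reported `container_memory_working_set_bytes` series for containers that do not belong to this test's pods — leftovers from concurrently running suites (kubelet-test-9080, container-probe-8544, container-probe-4096). A minimal stand-in for that "no unexpected keys" check, written from scratch here rather than taken from the test's actual gstruct matchers, with hypothetical label values:

```go
package main

import (
	"fmt"
	"sort"
)

// unexpectedElements returns metric labels present in the scraped series
// but absent from the allowed set -- the condition that makes a
// gstruct-style "match all keys" assertion fail with "unexpected element".
func unexpectedElements(got, allowed map[string]bool) []string {
	var extra []string
	for label := range got {
		if !allowed[label] {
			extra = append(extra, label)
		}
	}
	sort.Strings(extra) // deterministic output for map iteration
	return extra
}

func main() {
	// Hypothetical labels in the same namespace::pod::container shape
	// as the failure message above.
	got := map[string]bool{
		"stats-test::stats-busybox-0::busybox-container":  true, // this test's pod
		"container-probe-8544::liveness-http::liveness":   true, // leftover pod
	}
	allowed := map[string]bool{
		"stats-test::stats-busybox-0::busybox-container": true,
	}
	fmt.Println(unexpectedElements(got, allowed))
	// → [container-probe-8544::liveness-http::liveness]
}
```

The 60s timeout is the matcher being retried until the foreign containers' metrics age out, which did not happen in time.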
				
				Click to see stdout/stderrfrom junit_cos-u-73-11647-192-0_04.xml



[k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should allow privilege escalation when not explicitly set and uid != 0 [LinuxOnly] [NodeConformance] 17s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Security\sContext\swhen\screating\scontainers\swith\sAllowPrivilegeEscalation\sshould\sallow\sprivilege\sescalation\swhen\snot\sexplicitly\sset\sand\suid\s\!\=\s0\s\[LinuxOnly\]\s\[NodeConformance\]$'
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:236
wait for pod "alpine-nnp-nil-5f6d71dd-859b-11e9-b02b-42010a8a0015" to success
Expected success, but got an error:
    <*errors.errorString | 0xc000e043d0>: {
        s: "pod \"alpine-nnp-nil-5f6d71dd-859b-11e9-b02b-42010a8a0015\" failed with reason: \"\", message: \"\"",
    }
    pod "alpine-nnp-nil-5f6d71dd-859b-11e9-b02b-42010a8a0015" failed with reason: "", message: ""
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:229
				
				Click to see stdout/stderrfrom junit_cos-u-73-11647-192-0_04.xml



[k8s.io] Summary API [NodeConformance] when querying /stats/summary should report resource usage through the stats api 1m29s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Summary\sAPI\s\[NodeConformance\]\swhen\squerying\s\/stats\/summary\sshould\sreport\sresource\susage\sthrough\sthe\sstats\sapi$'
_output/local/go/src/k8s.io/kubernetes/test/e2e_node/summary_test.go:52
Failed after 15.026s.
Expected
    <string>: Summary
to match fields: {
.Node.SystemContainers[pods].CPU:
	Expected
	    <string>: CPUStats
	to match fields: {
	.UsageNanoCores:
		Expected
		    <uint64>: 0
		to be >=
		    <int>: 10000
	}
	
}

_output/local/go/src/k8s.io/kubernetes/test/e2e_node/summary_test.go:334
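The failing field here is `Node.SystemContainers[pods].CPU.UsageNanoCores`: the summary endpoint reported 0 for the aggregate "pods" cgroup, while the test requires at least 10000 nanocores (the lower bound shown in the failure; the real matcher in summary_test.go may bound it above as well). A zero reading after 15s of retries suggests the cgroup CPU stats were missing or not yet populated. A simplified sketch of that bound, not the test's actual gstruct matcher:

```go
package main

import "fmt"

// checkUsageNanoCores mirrors the lower bound from the failure message:
// the "pods" system container must show nonzero CPU activity, with
// 10000 nanocores as the floor.
func checkUsageNanoCores(v uint64) error {
	const min = 10000
	if v < min {
		return fmt.Errorf("UsageNanoCores = %d, want >= %d", v, min)
	}
	return nil
}

func main() {
	// The value the kubelet actually reported in this run.
	fmt.Println(checkUsageNanoCores(0))
	// → UsageNanoCores = 0, want >= 10000
}
```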
				
				Click to see stdout/stderrfrom junit_cos-u-73-11647-192-0_06.xml



[sig-storage] EmptyDir volumes pod should support shared volumes between containers 11s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=\[sig\-storage\]\sEmptyDir\svolumes\spod\sshould\ssupport\sshared\svolumes\sbetween\scontainers$'
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:210
Unexpected error:
    <*errors.errorString | 0xc000cee3a0>: {
        s: "error starting &{ [ --server=http://127.0.0.1:8080 exec pod-sharedvolume-91441447-859b-11e9-bc12-42010a8a0015 -c busybox-main-container --namespace=emptydir-2061 -- cat /usr/share/volumeshare/shareddata.txt] []  <nil>   [] <nil> <nil> <nil> <nil> <nil> false [0xc000f26140 0xc000f26158 0xc000f26170] [0xc000f26140 0xc000f26158 0xc000f26170] [0xc000f26150 0xc000f26168] [0xed6c20 0xed6c20] <nil> <nil>}:\nCommand stdout:\n\nstderr:\n\nerror:\nfork/exec : no such file or directory\n",
    }
    error starting &{ [ --server=http://127.0.0.1:8080 exec pod-sharedvolume-91441447-859b-11e9-bc12-42010a8a0015 -c busybox-main-container --namespace=emptydir-2061 -- cat /usr/share/volumeshare/shareddata.txt] []  <nil>   [] <nil> <nil> <nil> <nil> <nil> false [0xc000f26140 0xc000f26158 0xc000f26170] [0xc000f26140 0xc000f26158 0xc000f26170] [0xc000f26150 0xc000f26168] [0xed6c20 0xed6c20] <nil> <nil>}:
    Command stdout:
    
    stderr:
    
    error:
    fork/exec : no such file or directory
    
occurred
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2306
				
				Click to see stdout/stderrfrom junit_cos-u-73-11647-192-0_03.xml



Passed: 210
Skipped: 91