PR (jiayingz): Update nvidia-gpu-device-plugin addon.
Result: FAILURE
Tests: 1 failed / 475 succeeded
Started: 2019-02-11 23:54
Elapsed: 20m2s
Revision: master:805a9e70, 73940:52e92ab4
Builder: gke-prow-containerd-pool-99179761-jlg7
pod: 50ca4a71-2e58-11e9-aa96-0a580a6c0714
infra-commit: 49d8112d8
job-version: v1.14.0-alpha.2.538+cb059fb69b122f
repo: k8s.io/kubernetes
repo-commit: cb059fb69b122f6fd0f92e3effe568d61f4fc3bd
repos: {u'k8s.io/kubernetes': u'master:805a9e703698d0a8a86f405f861f9e3fd91b29c6,73940:52e92ab4b9f4d1e868c96090c49485edfad4d72d'}

Test Failures


Node Tests 18m53s

error during go run /go/src/k8s.io/kubernetes/test/e2e_node/runner/remote/run_remote.go --cleanup --logtostderr --vmodule=*=4 --ssh-env=gce --results-dir=/workspace/_artifacts --project=k8s-jkns-pr-node-e2e --zone=us-west1-b --ssh-user=prow --ssh-key=/workspace/.ssh/google_compute_engine --ginkgo-flags=--nodes=8 --focus="\[NodeConformance\]" --skip="\[Flaky\]|\[Slow\]|\[Serial\]" --flakeAttempts=2 --test_args=--kubelet-flags="--cgroups-per-qos=true --cgroup-root=/" --test-timeout=1h5m0s --image-config-file=/workspace/test-infra/jobs/e2e_node/image-config.yaml: exit status 1
				from junit_runner.xml



475 Passed Tests
388 Skipped Tests
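The root failure in the log below is the node e2e suite's image pre-pull step: the runner retries pulling gcr.io/kubernetes-e2e-test-images/hostexec:1.1 five times with a short backoff, then aborts the entire suite in BeforeSuite with "unauthorized: authentication required". The retry-then-give-up pattern can be sketched as follows (a hypothetical Python sketch for illustration, not the actual Go code in image_list.go):

```python
import time


def pull_with_retry(pull, attempts=5, backoff=1.0):
    """Retry a pull operation up to `attempts` times with a fixed backoff.

    `pull` is a callable returning True on success, standing in for the
    real `docker pull` invocation. Mirrors the 5-attempt loop visible in
    the build log; hypothetical sketch, not the image_list.go source.
    """
    for i in range(1, attempts + 1):
        if pull():
            return True
        print(f"Failed to pull, retrying in {backoff:.0f}s ({i} of {attempts})")
        time.sleep(backoff)
    # All attempts exhausted; the caller treats this as a fatal
    # BeforeSuite error, failing every spec on the host.
    return False
```

Because the pull fails identically on every attempt, all eight parallel Ginkgo nodes fail their BeforeSuite, which is why every spec on the cos-stable-60 host is reported as failed.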

Error lines from build-log.txt

... skipping 307 lines ...
W0211 23:59:24.339] I0211 23:59:24.339181    4523 utils.go:117] Killing any existing node processes on "tmp-node-e2e-96abaafd-ubuntu-gke-1804-d1703-0-v20181113"
W0211 23:59:24.561] I0211 23:59:24.561681    4523 node_e2e.go:108] GCI/COS node and GCI/COS mounter both detected, modifying --experimental-mounter-path accordingly
W0211 23:59:24.562] I0211 23:59:24.561720    4523 node_e2e.go:164] Starting tests on "tmp-node-e2e-96abaafd-cos-stable-63-10032-71-0"
W0211 23:59:24.700] I0211 23:59:24.700149    4523 node_e2e.go:108] GCI/COS node and GCI/COS mounter both detected, modifying --experimental-mounter-path accordingly
W0211 23:59:24.700] I0211 23:59:24.700184    4523 node_e2e.go:164] Starting tests on "tmp-node-e2e-96abaafd-cos-stable-60-9592-84-0"
W0211 23:59:25.755] I0211 23:59:25.755638    4523 node_e2e.go:164] Starting tests on "tmp-node-e2e-96abaafd-ubuntu-gke-1804-d1703-0-v20181113"
W0212 00:01:06.191] I0212 00:01:06.190677    4523 remote.go:197] Test failed unexpectedly. Attempting to retrieving system logs (only works for nodes with journald)
W0212 00:01:06.878] I0212 00:01:06.878281    4523 remote.go:202] Got the system logs from journald; copying it back...
W0212 00:01:07.870] I0212 00:01:07.870450    4523 remote.go:122] Copying test artifacts from "tmp-node-e2e-96abaafd-cos-stable-60-9592-84-0"
W0212 00:01:09.735] I0212 00:01:09.735000    4523 run_remote.go:717] Deleting instance "tmp-node-e2e-96abaafd-cos-stable-60-9592-84-0"
I0212 00:01:10.460] 
I0212 00:01:10.460] >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
I0212 00:01:10.460] >                              START TEST                                >
... skipping 46 lines ...
I0212 00:01:10.465] Validating docker...
I0212 00:01:10.465] DOCKER_VERSION: 1.13.1
I0212 00:01:10.465] DOCKER_GRAPH_DRIVER: overlay2
I0212 00:01:10.465] PASS
I0212 00:01:10.465] I0211 23:59:29.664809    1290 e2e_node_suite_test.go:149] Pre-pulling images so that they are cached for the tests.
I0212 00:01:10.466] I0211 23:59:29.664836    1290 image_list.go:131] Pre-pulling images with docker [docker.io/library/busybox:1.29 docker.io/library/nginx:1.14-alpine gcr.io/kubernetes-e2e-test-images/entrypoint-tester:1.0 gcr.io/kubernetes-e2e-test-images/hostexec:1.1 gcr.io/kubernetes-e2e-test-images/ipc-utils:1.0 gcr.io/kubernetes-e2e-test-images/liveness:1.0 gcr.io/kubernetes-e2e-test-images/mounttest-user:1.0 gcr.io/kubernetes-e2e-test-images/mounttest:1.0 gcr.io/kubernetes-e2e-test-images/net:1.0 gcr.io/kubernetes-e2e-test-images/netexec:1.1 gcr.io/kubernetes-e2e-test-images/node-perf/npb-ep-amd64:1.0 gcr.io/kubernetes-e2e-test-images/node-perf/npb-is-amd64:1.0 gcr.io/kubernetes-e2e-test-images/node-perf/tf-wide-deep-amd64:1.0 gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0 gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 gcr.io/kubernetes-e2e-test-images/test-webserver:1.0 gcr.io/kubernetes-e2e-test-images/volume/gluster:1.0 gcr.io/kubernetes-e2e-test-images/volume/nfs:1.0 google/cadvisor:latest k8s.gcr.io/busybox@sha256:4bdd623e848417d96127e16037743f0cd8b528c026e9175e22a84f639eca58ff k8s.gcr.io/node-problem-detector:v0.4.1 k8s.gcr.io/nvidia-gpu-device-plugin@sha256:0842734032018be107fa2490c98156992911e3e1f2a21e059ff0105b07dd8e9e k8s.gcr.io/pause:3.1 k8s.gcr.io/stress:v1]
I0212 00:01:10.466] W0212 00:00:01.966544    1290 image_list.go:144] Failed to pull gcr.io/kubernetes-e2e-test-images/hostexec:1.1 as user "root", retrying in 1s (1 of 5): exit status 1
I0212 00:01:10.466] W0212 00:00:17.993257    1290 image_list.go:144] Failed to pull gcr.io/kubernetes-e2e-test-images/hostexec:1.1 as user "root", retrying in 1s (2 of 5): exit status 1
I0212 00:01:10.467] W0212 00:00:34.039233    1290 image_list.go:144] Failed to pull gcr.io/kubernetes-e2e-test-images/hostexec:1.1 as user "root", retrying in 1s (3 of 5): exit status 1
I0212 00:01:10.467] W0212 00:00:50.070282    1290 image_list.go:144] Failed to pull gcr.io/kubernetes-e2e-test-images/hostexec:1.1 as user "root", retrying in 1s (4 of 5): exit status 1
I0212 00:01:10.467] W0212 00:01:06.103047    1290 image_list.go:144] Failed to pull gcr.io/kubernetes-e2e-test-images/hostexec:1.1 as user "root", retrying in 1s (5 of 5): exit status 1
I0212 00:01:10.467] W0212 00:01:06.103070    1290 image_list.go:148] Could not pre-pull image gcr.io/kubernetes-e2e-test-images/hostexec:1.1 exit status 1 output: Pulling repository gcr.io/kubernetes-e2e-test-images/hostexec
I0212 00:01:10.467] unauthorized: authentication required
I0212 00:01:10.467] 
I0212 00:01:10.467] 
I0212 00:01:10.467] Failure [96.853 seconds]
I0212 00:01:10.467] [BeforeSuite] BeforeSuite 
I0212 00:01:10.467] _output/local/go/src/k8s.io/kubernetes/test/e2e_node/e2e_node_suite_test.go:142
I0212 00:01:10.468] 
I0212 00:01:10.468]   Expected error:
I0212 00:01:10.468]       <*exec.ExitError | 0xc000b5e780>: {
I0212 00:01:10.468]           ProcessState: {
I0212 00:01:10.468]               pid: 1451,
I0212 00:01:10.468]               status: 256,
I0212 00:01:10.468]               rusage: {
I0212 00:01:10.468]                   Utime: {Sec: 0, Usec: 4000},
... skipping 22 lines ...
I0212 00:01:10.470]   _output/local/go/src/k8s.io/kubernetes/test/e2e_node/e2e_node_suite_test.go:151
I0212 00:01:10.470] ------------------------------
I0212 00:01:10.470] Failure [96.875 seconds]
I0212 00:01:10.470] [BeforeSuite] BeforeSuite 
I0212 00:01:10.470] _output/local/go/src/k8s.io/kubernetes/test/e2e_node/e2e_node_suite_test.go:142
I0212 00:01:10.470] 
I0212 00:01:10.470]   BeforeSuite on Node 1 failed
I0212 00:01:10.470] 
I0212 00:01:10.470]   _output/local/go/src/k8s.io/kubernetes/test/e2e_node/e2e_node_suite_test.go:142
I0212 00:01:10.471] ------------------------------
I0212 00:01:10.471] Failure [96.769 seconds]
I0212 00:01:10.471] [BeforeSuite] BeforeSuite 
I0212 00:01:10.471] _output/local/go/src/k8s.io/kubernetes/test/e2e_node/e2e_node_suite_test.go:142
I0212 00:01:10.471] 
I0212 00:01:10.471]   BeforeSuite on Node 1 failed
I0212 00:01:10.471] 
I0212 00:01:10.471]   _output/local/go/src/k8s.io/kubernetes/test/e2e_node/e2e_node_suite_test.go:142
I0212 00:01:10.471] ------------------------------
I0212 00:01:10.471] Failure [96.814 seconds]
I0212 00:01:10.471] [BeforeSuite] BeforeSuite 
I0212 00:01:10.471] _output/local/go/src/k8s.io/kubernetes/test/e2e_node/e2e_node_suite_test.go:142
I0212 00:01:10.471] 
I0212 00:01:10.472]   BeforeSuite on Node 1 failed
I0212 00:01:10.472] 
I0212 00:01:10.472]   _output/local/go/src/k8s.io/kubernetes/test/e2e_node/e2e_node_suite_test.go:142
I0212 00:01:10.472] ------------------------------
I0212 00:01:10.472] Failure [96.754 seconds]
I0212 00:01:10.472] [BeforeSuite] BeforeSuite 
I0212 00:01:10.472] _output/local/go/src/k8s.io/kubernetes/test/e2e_node/e2e_node_suite_test.go:142
I0212 00:01:10.472] 
I0212 00:01:10.472]   BeforeSuite on Node 1 failed
I0212 00:01:10.472] 
I0212 00:01:10.472]   _output/local/go/src/k8s.io/kubernetes/test/e2e_node/e2e_node_suite_test.go:142
I0212 00:01:10.472] ------------------------------
I0212 00:01:10.473] Failure [96.813 seconds]
I0212 00:01:10.473] [BeforeSuite] BeforeSuite 
I0212 00:01:10.473] _output/local/go/src/k8s.io/kubernetes/test/e2e_node/e2e_node_suite_test.go:142
I0212 00:01:10.473] 
I0212 00:01:10.473]   BeforeSuite on Node 1 failed
I0212 00:01:10.473] 
I0212 00:01:10.473]   _output/local/go/src/k8s.io/kubernetes/test/e2e_node/e2e_node_suite_test.go:142
I0212 00:01:10.473] ------------------------------
I0212 00:01:10.473] Failure [96.783 seconds]
I0212 00:01:10.473] [BeforeSuite] BeforeSuite 
I0212 00:01:10.473] _output/local/go/src/k8s.io/kubernetes/test/e2e_node/e2e_node_suite_test.go:142
I0212 00:01:10.473] 
I0212 00:01:10.473]   BeforeSuite on Node 1 failed
I0212 00:01:10.474] 
I0212 00:01:10.474]   _output/local/go/src/k8s.io/kubernetes/test/e2e_node/e2e_node_suite_test.go:142
I0212 00:01:10.474] ------------------------------
I0212 00:01:10.474] Failure [96.800 seconds]
I0212 00:01:10.474] [BeforeSuite] BeforeSuite 
I0212 00:01:10.474] _output/local/go/src/k8s.io/kubernetes/test/e2e_node/e2e_node_suite_test.go:142
I0212 00:01:10.474] 
I0212 00:01:10.474]   BeforeSuite on Node 1 failed
I0212 00:01:10.474] 
I0212 00:01:10.474]   _output/local/go/src/k8s.io/kubernetes/test/e2e_node/e2e_node_suite_test.go:142
I0212 00:01:10.474] ------------------------------
I0212 00:01:10.474] I0212 00:01:06.156199    1290 e2e_node_suite_test.go:190] Tests Finished
I0212 00:01:10.475] 
I0212 00:01:10.475] 
I0212 00:01:10.475] Ran 2288 of 0 Specs in 96.919 seconds
I0212 00:01:10.475] FAIL! -- 0 Passed | 2288 Failed | 0 Flaked | 0 Pending | 0 Skipped 
I0212 00:01:10.475] 
I0212 00:01:10.475] Ginkgo ran 1 suite in 1m40.847464018s
I0212 00:01:10.475] Test Suite Failed
I0212 00:01:10.475] 
I0212 00:01:10.475] Failure Finished Test Suite on Host tmp-node-e2e-96abaafd-cos-stable-60-9592-84-0
I0212 00:01:10.476] [command [ssh -o UserKnownHostsFile=/dev/null -o IdentitiesOnly=yes -o CheckHostIP=no -o StrictHostKeyChecking=no -o ServerAliveInterval=30 -o LogLevel=ERROR -i /workspace/.ssh/google_compute_engine prow@35.230.2.171 -- sudo sh -c 'cd /tmp/node-e2e-20190211T235912 && timeout -k 30s 3900.000000s ./ginkgo --nodes=8 --focus="\[NodeConformance\]" --skip="\[Flaky\]|\[Slow\]|\[Serial\]" --flakeAttempts=2 ./e2e_node.test -- --system-spec-name= --system-spec-file= --logtostderr --v 4 --node-name=tmp-node-e2e-96abaafd-cos-stable-60-9592-84-0 --report-dir=/tmp/node-e2e-20190211T235912/results --report-prefix=cos-stable2 --image-description="cos-stable-60-9592-84-0" --kubelet-flags=--experimental-mounter-path=/tmp/node-e2e-20190211T235912/mounter --kubelet-flags=--experimental-kernel-memcg-notification=true --kubelet-flags="--cgroups-per-qos=true --cgroup-root=/"'] failed with error: exit status 1, command [scp -o UserKnownHostsFile=/dev/null -o IdentitiesOnly=yes -o CheckHostIP=no -o StrictHostKeyChecking=no -o ServerAliveInterval=30 -o LogLevel=ERROR -i /workspace/.ssh/google_compute_engine -r prow@35.230.2.171:/tmp/node-e2e-20190211T235912/results/*.log /workspace/_artifacts/tmp-node-e2e-96abaafd-cos-stable-60-9592-84-0] failed with error: exit status 1]
I0212 00:01:10.476] <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<
I0212 00:01:10.476] <                              FINISH TEST                               <
I0212 00:01:10.476] <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<
I0212 00:01:10.476] 
W0212 00:11:44.575] I0212 00:11:44.575123    4523 remote.go:122] Copying test artifacts from "tmp-node-e2e-96abaafd-coreos-beta-1883-1-0-v20180911"
W0212 00:11:50.280] I0212 00:11:50.280552    4523 run_remote.go:717] Deleting instance "tmp-node-e2e-96abaafd-coreos-beta-1883-1-0-v20180911"
... skipping 355 lines ...
I0212 00:11:51.123]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:35
I0212 00:11:51.124] [It] should run the container with readonly rootfs when readOnlyRootFilesystem=true [LinuxOnly] [NodeConformance]
I0212 00:11:51.124]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:138
I0212 00:11:51.124] Feb 12 00:01:04.876: INFO: Waiting up to 5m0s for pod "busybox-readonly-true-4a7a8eb0-2e59-11e9-9864-42010a8a0020" in namespace "security-context-test-5275" to be "success or failure"
I0212 00:11:51.124] Feb 12 00:01:04.887: INFO: Pod "busybox-readonly-true-4a7a8eb0-2e59-11e9-9864-42010a8a0020": Phase="Pending", Reason="", readiness=false. Elapsed: 11.260929ms
I0212 00:11:51.124] Feb 12 00:01:06.920: INFO: Pod "busybox-readonly-true-4a7a8eb0-2e59-11e9-9864-42010a8a0020": Phase="Pending", Reason="", readiness=false. Elapsed: 2.044313238s
I0212 00:11:51.124] Feb 12 00:01:08.956: INFO: Pod "busybox-readonly-true-4a7a8eb0-2e59-11e9-9864-42010a8a0020": Phase="Failed", Reason="", readiness=false. Elapsed: 4.079886655s
I0212 00:11:51.124] Feb 12 00:01:08.956: INFO: Pod "busybox-readonly-true-4a7a8eb0-2e59-11e9-9864-42010a8a0020" satisfied condition "success or failure"
I0212 00:11:51.124] [AfterEach] [k8s.io] Security Context
I0212 00:11:51.125]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:156
I0212 00:11:51.125] Feb 12 00:01:08.956: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0212 00:11:51.125] STEP: Destroying namespace "security-context-test-5275" for this suite.
I0212 00:11:51.125] Feb 12 00:01:15.026: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
... skipping 1465 lines ...
I0212 00:11:51.297] STEP: Creating a kubernetes client
I0212 00:11:51.297] STEP: Building a namespace api object, basename container-runtime
I0212 00:11:51.297] Feb 12 00:03:08.065: INFO: Skipping waiting for service account
I0212 00:11:51.297] [It] should report termination message from log output if TerminationMessagePolicy FallbackToLogOnError is set [NodeConformance] [LinuxOnly]
I0212 00:11:51.297]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:205
I0212 00:11:51.297] STEP: create the container
I0212 00:11:51.298] STEP: wait for the container to reach Failed
I0212 00:11:51.298] STEP: get the container status
I0212 00:11:51.298] STEP: the container should be terminated
I0212 00:11:51.298] STEP: the termination message should be set
I0212 00:11:51.298] STEP: delete the container
I0212 00:11:51.298] [AfterEach] [k8s.io] Container Runtime
I0212 00:11:51.298]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:156
... skipping 852 lines ...
I0212 00:11:51.410] STEP: verifying the pod is in kubernetes
I0212 00:11:51.410] STEP: updating the pod
I0212 00:11:51.410] Feb 12 00:04:15.213: INFO: Successfully updated pod "pod-update-activedeadlineseconds-ba6ef62b-2e59-11e9-9864-42010a8a0020"
I0212 00:11:51.410] Feb 12 00:04:15.213: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-ba6ef62b-2e59-11e9-9864-42010a8a0020" in namespace "pods-9001" to be "terminated due to deadline exceeded"
I0212 00:11:51.411] Feb 12 00:04:15.216: INFO: Pod "pod-update-activedeadlineseconds-ba6ef62b-2e59-11e9-9864-42010a8a0020": Phase="Running", Reason="", readiness=true. Elapsed: 2.205354ms
I0212 00:11:51.411] Feb 12 00:04:17.224: INFO: Pod "pod-update-activedeadlineseconds-ba6ef62b-2e59-11e9-9864-42010a8a0020": Phase="Running", Reason="", readiness=true. Elapsed: 2.010281234s
I0212 00:11:51.411] Feb 12 00:04:19.230: INFO: Pod "pod-update-activedeadlineseconds-ba6ef62b-2e59-11e9-9864-42010a8a0020": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 4.01619593s
I0212 00:11:51.411] Feb 12 00:04:19.230: INFO: Pod "pod-update-activedeadlineseconds-ba6ef62b-2e59-11e9-9864-42010a8a0020" satisfied condition "terminated due to deadline exceeded"
I0212 00:11:51.411] [AfterEach] [k8s.io] Pods
I0212 00:11:51.411]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:156
I0212 00:11:51.411] Feb 12 00:04:19.230: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0212 00:11:51.412] STEP: Destroying namespace "pods-9001" for this suite.
I0212 00:11:51.412] Feb 12 00:04:25.247: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
... skipping 428 lines ...
I0212 00:11:51.461]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:155
I0212 00:11:51.461] STEP: Creating a kubernetes client
I0212 00:11:51.461] STEP: Building a namespace api object, basename init-container
I0212 00:11:51.461] Feb 12 00:05:07.631: INFO: Skipping waiting for service account
I0212 00:11:51.462] [BeforeEach] [k8s.io] InitContainer [NodeConformance]
I0212 00:11:51.462]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43
I0212 00:11:51.462] [It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
I0212 00:11:51.462]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:704
I0212 00:11:51.462] STEP: creating the pod
I0212 00:11:51.462] Feb 12 00:05:07.631: INFO: PodSpec: initContainers in spec.initContainers
I0212 00:11:51.462] [AfterEach] [k8s.io] InitContainer [NodeConformance]
I0212 00:11:51.463]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:156
I0212 00:11:51.463] Feb 12 00:05:09.749: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 2 lines ...
I0212 00:11:51.463] Feb 12 00:05:17.902: INFO: namespace init-container-5403 deletion completed in 8.137089834s
I0212 00:11:51.463] 
I0212 00:11:51.463] 
I0212 00:11:51.463] • [SLOW TEST:10.275 seconds]
I0212 00:11:51.464] [k8s.io] InitContainer [NodeConformance]
I0212 00:11:51.464] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
I0212 00:11:51.464]   should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
I0212 00:11:51.464]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:704
I0212 00:11:51.464] ------------------------------
I0212 00:11:51.464] SSSSS
I0212 00:11:51.464] ------------------------------
I0212 00:11:51.465] [BeforeEach] [sig-node] Downward API
I0212 00:11:51.465]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:155
... skipping 496 lines ...
I0212 00:11:51.521]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:155
I0212 00:11:51.521] STEP: Creating a kubernetes client
I0212 00:11:51.521] STEP: Building a namespace api object, basename init-container
I0212 00:11:51.521] Feb 12 00:05:13.163: INFO: Skipping waiting for service account
I0212 00:11:51.521] [BeforeEach] [k8s.io] InitContainer [NodeConformance]
I0212 00:11:51.521]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43
I0212 00:11:51.521] [It] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
I0212 00:11:51.522]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:704
I0212 00:11:51.522] STEP: creating the pod
I0212 00:11:51.522] Feb 12 00:05:13.163: INFO: PodSpec: initContainers in spec.initContainers
I0212 00:11:51.526] Feb 12 00:05:57.548: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-de8365bd-2e59-11e9-9f42-42010a8a0020", GenerateName:"", Namespace:"init-container-4350", SelfLink:"/api/v1/namespaces/init-container-4350/pods/pod-init-de8365bd-2e59-11e9-9f42-42010a8a0020", UID:"de8b6af7-2e59-11e9-8bcc-42010a8a0020", ResourceVersion:"2399", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63685526713, loc:(*time.Location)(0xa2319e0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"163107978"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume(nil), InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount(nil), VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount(nil), 
VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount(nil), VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"Never", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc000a4dab0), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"Default", NodeSelector:map[string]string(nil), ServiceAccountName:"", DeprecatedServiceAccount:"", AutomountServiceAccountToken:(*bool)(nil), NodeName:"tmp-node-e2e-96abaafd-coreos-beta-1883-1-0-v20180911", HostNetwork:false, HostPID:false, HostIPC:false, 
ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc001011260), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc000a4db20)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc000a4db40)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc000a4db50), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc000a4db54)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63685526713, loc:(*time.Location)(0xa2319e0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63685526713, loc:(*time.Location)(0xa2319e0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63685526713, loc:(*time.Location)(0xa2319e0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63685526713, loc:(*time.Location)(0xa2319e0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"10.138.0.32", PodIP:"10.100.0.114", StartTime:(*v1.Time)(0xc00088dea0), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc000215650)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc0002156c0)}, Ready:false, RestartCount:3, Image:"busybox:1.29", ImageID:"docker-pullable://busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"docker://773db09628c1a8a32dbf77ff2e5c954fef2993ca83ef9f2ad3967e746bbea7f5"}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc00088df00), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:""}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc00088df40), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:""}}, QOSClass:"Guaranteed"}}
I0212 00:11:51.526] [AfterEach] [k8s.io] InitContainer [NodeConformance]
I0212 00:11:51.526]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:156
I0212 00:11:51.526] Feb 12 00:05:57.550: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0212 00:11:51.526] STEP: Destroying namespace "init-container-4350" for this suite.
I0212 00:11:51.526] Feb 12 00:06:19.575: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
I0212 00:11:51.527] Feb 12 00:06:19.632: INFO: namespace init-container-4350 deletion completed in 22.07194507s
I0212 00:11:51.527] 
I0212 00:11:51.527] 
I0212 00:11:51.527] • [SLOW TEST:66.472 seconds]
I0212 00:11:51.527] [k8s.io] InitContainer [NodeConformance]
I0212 00:11:51.527] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
I0212 00:11:51.527]   should not start app containers if init containers fail on a RestartAlways pod [Conformance]
I0212 00:11:51.527]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:704
I0212 00:11:51.527] ------------------------------
I0212 00:11:51.527] [BeforeEach] [k8s.io] Docker Containers
I0212 00:11:51.528]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:155
I0212 00:11:51.528] STEP: Creating a kubernetes client
I0212 00:11:51.528] STEP: Building a namespace api object, basename containers
... skipping 1439 lines ...
I0212 00:11:51.673] Feb 12 00:02:49.975: INFO: Skipping waiting for service account
I0212 00:11:51.673] [It] should be able to pull from private registry with credential provider [NodeConformance]
I0212 00:11:51.673]   _output/local/go/src/k8s.io/kubernetes/test/e2e_node/runtime_conformance_test.go:69
I0212 00:11:51.673] STEP: create the container
I0212 00:11:51.673] STEP: check the container status
I0212 00:11:51.673] STEP: delete the container
I0212 00:11:51.673] Feb 12 00:07:50.152: INFO: No.1 attempt failed: expected container state: Running, got: "Waiting", retrying...
I0212 00:11:51.673] STEP: create the container
I0212 00:11:51.674] STEP: check the container status
I0212 00:11:51.674] STEP: delete the container
I0212 00:11:51.674] [AfterEach] [k8s.io] Container Runtime Conformance Test
I0212 00:11:51.674]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:156
I0212 00:11:51.674] Feb 12 00:07:53.201: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 105 lines ...
I0212 00:11:51.684] I0212 00:11:43.668237    1306 server.go:295] Killing process 2060 (services) with -TERM
I0212 00:11:51.684] I0212 00:11:43.842356    1306 server.go:258] Kill server "kubelet"
I0212 00:11:51.684] I0212 00:11:43.850784    1306 services.go:146] Fetching log files...
I0212 00:11:51.684] I0212 00:11:43.850862    1306 services.go:155] Get log file "kubelet.log" with journalctl command [-u kubelet-20190211T235912.service].
I0212 00:11:51.685] I0212 00:11:44.039179    1306 services.go:155] Get log file "kern.log" with journalctl command [-k].
I0212 00:11:51.685] I0212 00:11:44.061902    1306 services.go:155] Get log file "cloud-init.log" with journalctl command [-u cloud*].
I0212 00:11:51.685] E0212 00:11:44.066134    1306 services.go:158] failed to get "cloud-init.log" from journald: Failed to add filter for units: No data available
I0212 00:11:51.685] , exit status 1
I0212 00:11:51.685] I0212 00:11:44.066163    1306 services.go:155] Get log file "docker.log" with journalctl command [-u docker].
I0212 00:11:51.685] I0212 00:11:44.077200    1306 e2e_node_suite_test.go:190] Tests Finished
I0212 00:11:51.685] 
I0212 00:11:51.685] 
I0212 00:11:51.685] Ran 156 of 284 Specs in 736.825 seconds
I0212 00:11:51.685] SUCCESS! -- 156 Passed | 0 Failed | 0 Flaked | 0 Pending | 128 Skipped 
I0212 00:11:51.686] 
I0212 00:11:51.686] Ginkgo ran 1 suite in 12m19.118178816s
I0212 00:11:51.686] Test Suite Passed
I0212 00:11:51.686] 
I0212 00:11:51.686] Success Finished Test Suite on Host tmp-node-e2e-96abaafd-coreos-beta-1883-1-0-v20180911
I0212 00:11:51.686] <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<
... skipping 534 lines ...
I0212 00:13:34.388]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:155
I0212 00:13:34.388] STEP: Creating a kubernetes client
I0212 00:13:34.388] STEP: Building a namespace api object, basename init-container
I0212 00:13:34.388] Feb 12 00:01:16.025: INFO: Skipping waiting for service account
I0212 00:13:34.389] [BeforeEach] [k8s.io] InitContainer [NodeConformance]
I0212 00:13:34.389]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43
I0212 00:13:34.389] [It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
I0212 00:13:34.389]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:704
I0212 00:13:34.389] STEP: creating the pod
I0212 00:13:34.389] Feb 12 00:01:16.025: INFO: PodSpec: initContainers in spec.initContainers
I0212 00:13:34.389] [AfterEach] [k8s.io] InitContainer [NodeConformance]
I0212 00:13:34.389]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:156
I0212 00:13:34.389] Feb 12 00:01:18.867: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 2 lines ...
I0212 00:13:34.390] Feb 12 00:01:25.079: INFO: namespace init-container-2193 deletion completed in 6.209977201s
I0212 00:13:34.390] 
I0212 00:13:34.390] 
I0212 00:13:34.390] • [SLOW TEST:9.073 seconds]
I0212 00:13:34.390] [k8s.io] InitContainer [NodeConformance]
I0212 00:13:34.390] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
I0212 00:13:34.390]   should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
I0212 00:13:34.390]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:704
I0212 00:13:34.390] ------------------------------
I0212 00:13:34.391] [BeforeEach] [sig-storage] Downward API volume
I0212 00:13:34.391]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:155
I0212 00:13:34.391] STEP: Creating a kubernetes client
I0212 00:13:34.391] STEP: Building a namespace api object, basename downward-api
... skipping 411 lines ...
I0212 00:13:34.433] STEP: Creating a kubernetes client
I0212 00:13:34.433] STEP: Building a namespace api object, basename init-container
I0212 00:13:34.433] Feb 12 00:00:43.324: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
I0212 00:13:34.433] Feb 12 00:00:43.324: INFO: Skipping waiting for service account
I0212 00:13:34.433] [BeforeEach] [k8s.io] InitContainer [NodeConformance]
I0212 00:13:34.433]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43
I0212 00:13:34.434] [It] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
I0212 00:13:34.434]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:704
I0212 00:13:34.434] STEP: creating the pod
I0212 00:13:34.434] Feb 12 00:00:43.324: INFO: PodSpec: initContainers in spec.initContainers
I0212 00:13:34.438] Feb 12 00:01:35.542: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-3dad5982-2e59-11e9-a645-42010a8a0021", GenerateName:"", Namespace:"init-container-5551", SelfLink:"/api/v1/namespaces/init-container-5551/pods/pod-init-3dad5982-2e59-11e9-a645-42010a8a0021", UID:"3db1bebf-2e59-11e9-8bf2-42010a8a0021", ResourceVersion:"461", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63685526443, loc:(*time.Location)(0xa2319e0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"time":"324867215", "name":"foo"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume(nil), InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount(nil), VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount(nil), 
VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount(nil), VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"Never", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc0001e0750), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"Default", NodeSelector:map[string]string(nil), ServiceAccountName:"", DeprecatedServiceAccount:"", AutomountServiceAccountToken:(*bool)(nil), NodeName:"tmp-node-e2e-96abaafd-cos-stable-63-10032-71-0", HostNetwork:false, HostPID:false, HostIPC:false, 
ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc000ffc660), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0001e07d0)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0001e07f0)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc0001e0800), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc0001e0804)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63685526443, loc:(*time.Location)(0xa2319e0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63685526443, loc:(*time.Location)(0xa2319e0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63685526443, loc:(*time.Location)(0xa2319e0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63685526443, loc:(*time.Location)(0xa2319e0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"10.138.0.33", PodIP:"10.100.0.2", StartTime:(*v1.Time)(0xc00063f240), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc0010a7570)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc0010a75e0)}, Ready:false, RestartCount:3, Image:"busybox:1.29", ImageID:"docker-pullable://busybox@sha256:e004c2cc521c95383aebb1fb5893719aa7a8eae2e7a71f316a4410784edb00a9", ContainerID:"docker://9b4d65538f9d1137950022edaf1e7a33ef0e7fd59c88922ec4681c10e297b147"}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc00063f2a0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:""}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc00063f2e0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:""}}, QOSClass:"Guaranteed"}}
I0212 00:13:34.438] [AfterEach] [k8s.io] InitContainer [NodeConformance]
I0212 00:13:34.438]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:156
I0212 00:13:34.438] Feb 12 00:01:35.543: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0212 00:13:34.438] STEP: Destroying namespace "init-container-5551" for this suite.
I0212 00:13:34.439] Feb 12 00:01:57.561: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
I0212 00:13:34.439] Feb 12 00:01:57.614: INFO: namespace init-container-5551 deletion completed in 22.058589408s
I0212 00:13:34.439] 
I0212 00:13:34.439] 
I0212 00:13:34.439] • [SLOW TEST:74.357 seconds]
I0212 00:13:34.439] [k8s.io] InitContainer [NodeConformance]
I0212 00:13:34.439] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
I0212 00:13:34.440]   should not start app containers if init containers fail on a RestartAlways pod [Conformance]
I0212 00:13:34.440]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:704
I0212 00:13:34.440] ------------------------------
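For readability, here is the essence of the `init container has failed twice` struct dump above, rendered as the approximate pod manifest the test submits. This is a reconstruction from the dump's field values, not the test's source; fields the dump leaves at zero values are omitted:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-init-3dad5982-2e59-11e9-a645-42010a8a0021
  namespace: init-container-5551
  labels:
    name: foo
spec:
  restartPolicy: Always          # kubelet keeps restarting the failing init container
  initContainers:
  - name: init1
    image: docker.io/library/busybox:1.29
    command: ["/bin/false"]      # always exits 1, so initialization never completes
  - name: init2
    image: docker.io/library/busybox:1.29
    command: ["/bin/true"]       # never started while init1 keeps failing
  containers:
  - name: run1
    image: k8s.gcr.io/pause:3.1
    resources:                   # identical limits/requests => QOSClass "Guaranteed"
      limits:   {cpu: 100m, memory: "52428800"}
      requests: {cpu: 100m, memory: "52428800"}
```

This matches the status in the dump: `init1` has `RestartCount:3` with a terminated last state, while `init2` and `run1` remain waiting with empty `ContainerID`s, which is exactly what the spec asserts.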
I0212 00:13:34.440] [BeforeEach] [sig-storage] ConfigMap
I0212 00:13:34.440]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:155
I0212 00:13:34.440] STEP: Creating a kubernetes client
I0212 00:13:34.440] STEP: Building a namespace api object, basename configmap
... skipping 830 lines ...
I0212 00:13:34.523] STEP: Creating a kubernetes client
I0212 00:13:34.523] STEP: Building a namespace api object, basename container-runtime
I0212 00:13:34.523] Feb 12 00:03:31.425: INFO: Skipping waiting for service account
I0212 00:13:34.523] [It] should report termination message from log output if TerminationMessagePolicy FallbackToLogOnError is set [NodeConformance] [LinuxOnly]
I0212 00:13:34.523]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:205
I0212 00:13:34.523] STEP: create the container
I0212 00:13:34.523] STEP: wait for the container to reach Failed
I0212 00:13:34.523] STEP: get the container status
I0212 00:13:34.523] STEP: the container should be terminated
I0212 00:13:34.524] STEP: the termination message should be set
I0212 00:13:34.524] STEP: delete the container
I0212 00:13:34.524] [AfterEach] [k8s.io] Container Runtime
I0212 00:13:34.524]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:156
... skipping 208 lines ...
I0212 00:13:34.544] [BeforeEach] [k8s.io] Security Context
I0212 00:13:34.544]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:35
I0212 00:13:34.544] [It] should run the container with readonly rootfs when readOnlyRootFilesystem=true [LinuxOnly] [NodeConformance]
I0212 00:13:34.544]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:138
I0212 00:13:34.545] Feb 12 00:03:41.582: INFO: Waiting up to 5m0s for pod "busybox-readonly-true-a7e4ab67-2e59-11e9-9c6e-42010a8a0021" in namespace "security-context-test-6784" to be "success or failure"
I0212 00:13:34.545] Feb 12 00:03:41.590: INFO: Pod "busybox-readonly-true-a7e4ab67-2e59-11e9-9c6e-42010a8a0021": Phase="Pending", Reason="", readiness=false. Elapsed: 7.911211ms
I0212 00:13:34.545] Feb 12 00:03:43.594: INFO: Pod "busybox-readonly-true-a7e4ab67-2e59-11e9-9c6e-42010a8a0021": Phase="Failed", Reason="", readiness=false. Elapsed: 2.012157773s
I0212 00:13:34.545] Feb 12 00:03:43.594: INFO: Pod "busybox-readonly-true-a7e4ab67-2e59-11e9-9c6e-42010a8a0021" satisfied condition "success or failure"
I0212 00:13:34.545] [AfterEach] [k8s.io] Security Context
I0212 00:13:34.545]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:156
I0212 00:13:34.545] Feb 12 00:03:43.594: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0212 00:13:34.545] STEP: Destroying namespace "security-context-test-6784" for this suite.
I0212 00:13:34.546] Feb 12 00:03:49.600: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
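The `Phase="Failed"` line above is the expected outcome, not a test failure: the pod's container attempts a write to its read-only root filesystem, exits non-zero, and that terminal failure is one of the states the "success or failure" condition accepts. A rough reconstruction of the pod under test (the container name and write command are assumptions, not taken from this log):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: busybox-readonly-true-a7e4ab67-2e59-11e9-9c6e-42010a8a0021
  namespace: security-context-test-6784
spec:
  restartPolicy: Never
  containers:
  - name: busybox-readonly-true            # assumed container name
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "touch /file"]   # assumed command; any rootfs write must fail
    securityContext:
      readOnlyRootFilesystem: true
```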
... skipping 2977 lines ...
I0212 00:13:34.911] STEP: submitting the pod to kubernetes
I0212 00:13:34.912] STEP: verifying the pod is in kubernetes
I0212 00:13:34.912] STEP: updating the pod
I0212 00:13:34.912] Feb 12 00:08:36.669: INFO: Successfully updated pod "pod-update-activedeadlineseconds-5515ad3c-2e5a-11e9-8003-42010a8a0021"
I0212 00:13:34.912] Feb 12 00:08:36.669: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-5515ad3c-2e5a-11e9-8003-42010a8a0021" in namespace "pods-4467" to be "terminated due to deadline exceeded"
I0212 00:13:34.912] Feb 12 00:08:36.671: INFO: Pod "pod-update-activedeadlineseconds-5515ad3c-2e5a-11e9-8003-42010a8a0021": Phase="Running", Reason="", readiness=true. Elapsed: 1.74166ms
I0212 00:13:34.913] Feb 12 00:08:38.672: INFO: Pod "pod-update-activedeadlineseconds-5515ad3c-2e5a-11e9-8003-42010a8a0021": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.003665974s
I0212 00:13:34.913] Feb 12 00:08:38.672: INFO: Pod "pod-update-activedeadlineseconds-5515ad3c-2e5a-11e9-8003-42010a8a0021" satisfied condition "terminated due to deadline exceeded"
I0212 00:13:34.913] [AfterEach] [k8s.io] Pods
I0212 00:13:34.913]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:156
I0212 00:13:34.913] Feb 12 00:08:38.672: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0212 00:13:34.913] STEP: Destroying namespace "pods-4467" for this suite.
I0212 00:13:34.913] Feb 12 00:08:44.696: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
... skipping 133 lines ...
I0212 00:13:34.933] Feb 12 00:04:06.321: INFO: Skipping waiting for service account
I0212 00:13:34.933] [It] should be able to pull from private registry with credential provider [NodeConformance]
I0212 00:13:34.933]   _output/local/go/src/k8s.io/kubernetes/test/e2e_node/runtime_conformance_test.go:69
I0212 00:13:34.933] STEP: create the container
I0212 00:13:34.933] STEP: check the container status
I0212 00:13:34.934] STEP: delete the container
I0212 00:13:34.934] Feb 12 00:09:06.471: INFO: No.1 attempt failed: expected container state: Running, got: "Waiting", retrying...
I0212 00:13:34.934] STEP: create the container
I0212 00:13:34.934] STEP: check the container status
I0212 00:13:34.934] STEP: delete the container
I0212 00:13:34.934] [AfterEach] [k8s.io] Container Runtime Conformance Test
I0212 00:13:34.934]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:156
I0212 00:13:34.935] Feb 12 00:09:09.497: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 19 lines ...
I0212 00:13:34.938] Feb 12 00:08:17.664: INFO: Skipping waiting for service account
I0212 00:13:34.938] [It] should not be able to pull from private registry without secret [NodeConformance]
I0212 00:13:34.938]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:302
I0212 00:13:34.938] STEP: create the container
I0212 00:13:34.938] STEP: check the container status
I0212 00:13:34.938] STEP: delete the container
I0212 00:13:34.938] Feb 12 00:13:18.357: INFO: No.1 attempt failed: expected container state: Waiting, got: "Running", retrying...
I0212 00:13:34.939] STEP: create the container
I0212 00:13:34.939] STEP: check the container status
I0212 00:13:34.939] STEP: delete the container
I0212 00:13:34.939] [AfterEach] [k8s.io] Container Runtime
I0212 00:13:34.939]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:156
I0212 00:13:34.939] Feb 12 00:13:20.387: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 12 lines ...
I0212 00:13:34.941]       should not be able to pull from private registry without secret [NodeConformance]
I0212 00:13:34.941]       /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:302
I0212 00:13:34.942] ------------------------------
I0212 00:13:34.942] I0212 00:13:26.452068    1274 e2e_node_suite_test.go:185] Stopping node services...
I0212 00:13:34.942] I0212 00:13:26.452091    1274 server.go:258] Kill server "services"
I0212 00:13:34.942] I0212 00:13:26.452102    1274 server.go:295] Killing process 1786 (services) with -TERM
I0212 00:13:34.942] E0212 00:13:26.542635    1274 services.go:89] Failed to stop services: error stopping "services": waitid: no child processes
I0212 00:13:34.942] I0212 00:13:26.542655    1274 server.go:258] Kill server "kubelet"
I0212 00:13:34.943] I0212 00:13:26.552664    1274 services.go:146] Fetching log files...
I0212 00:13:34.943] I0212 00:13:26.552715    1274 services.go:155] Get log file "kern.log" with journalctl command [-k].
I0212 00:13:34.943] I0212 00:13:26.691712    1274 services.go:155] Get log file "cloud-init.log" with journalctl command [-u cloud*].
I0212 00:13:34.943] I0212 00:13:27.302023    1274 services.go:155] Get log file "docker.log" with journalctl command [-u docker].
I0212 00:13:34.943] I0212 00:13:27.339371    1274 services.go:155] Get log file "kubelet.log" with journalctl command [-u kubelet-20190211T235912.service].
I0212 00:13:34.943] I0212 00:13:28.371051    1274 e2e_node_suite_test.go:190] Tests Finished
I0212 00:13:34.944] 
I0212 00:13:34.944] 
I0212 00:13:34.944] Ran 156 of 286 Specs in 838.896 seconds
I0212 00:13:34.944] SUCCESS! -- 156 Passed | 0 Failed | 0 Flaked | 0 Pending | 130 Skipped 
I0212 00:13:34.944] 
I0212 00:13:34.944] Ginkgo ran 1 suite in 14m3.19566218s
I0212 00:13:34.944] Test Suite Passed
I0212 00:13:34.944] 
I0212 00:13:34.945] Success Finished Test Suite on Host tmp-node-e2e-96abaafd-cos-stable-63-10032-71-0
I0212 00:13:34.945] <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<
... skipping 54 lines ...
I0212 00:14:30.442] Validating docker...
I0212 00:14:30.442] DOCKER_VERSION: 17.03.2-ce
I0212 00:14:30.442] DOCKER_GRAPH_DRIVER: overlay2
I0212 00:14:30.442] PASS
I0212 00:14:30.443] I0211 23:59:29.135447    2694 e2e_node_suite_test.go:149] Pre-pulling images so that they are cached for the tests.
I0212 00:14:30.443] I0211 23:59:29.135474    2694 image_list.go:131] Pre-pulling images with docker [docker.io/library/busybox:1.29 docker.io/library/nginx:1.14-alpine gcr.io/kubernetes-e2e-test-images/entrypoint-tester:1.0 gcr.io/kubernetes-e2e-test-images/hostexec:1.1 gcr.io/kubernetes-e2e-test-images/ipc-utils:1.0 gcr.io/kubernetes-e2e-test-images/liveness:1.0 gcr.io/kubernetes-e2e-test-images/mounttest-user:1.0 gcr.io/kubernetes-e2e-test-images/mounttest:1.0 gcr.io/kubernetes-e2e-test-images/net:1.0 gcr.io/kubernetes-e2e-test-images/netexec:1.1 gcr.io/kubernetes-e2e-test-images/node-perf/npb-ep-amd64:1.0 gcr.io/kubernetes-e2e-test-images/node-perf/npb-is-amd64:1.0 gcr.io/kubernetes-e2e-test-images/node-perf/tf-wide-deep-amd64:1.0 gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0 gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 gcr.io/kubernetes-e2e-test-images/test-webserver:1.0 gcr.io/kubernetes-e2e-test-images/volume/gluster:1.0 gcr.io/kubernetes-e2e-test-images/volume/nfs:1.0 google/cadvisor:latest k8s.gcr.io/busybox@sha256:4bdd623e848417d96127e16037743f0cd8b528c026e9175e22a84f639eca58ff k8s.gcr.io/node-problem-detector:v0.4.1 k8s.gcr.io/nvidia-gpu-device-plugin@sha256:0842734032018be107fa2490c98156992911e3e1f2a21e059ff0105b07dd8e9e k8s.gcr.io/pause:3.1 k8s.gcr.io/stress:v1]
I0212 00:14:30.444] W0212 00:00:04.192905    2694 image_list.go:144] Failed to pull gcr.io/kubernetes-e2e-test-images/hostexec:1.1 as user "root", retrying in 1s (1 of 5): exit status 1
I0212 00:14:30.444] W0212 00:00:21.157328    2694 image_list.go:144] Failed to pull gcr.io/kubernetes-e2e-test-images/ipc-utils:1.0 as user "root", retrying in 1s (1 of 5): exit status 1
I0212 00:14:30.444] W0212 00:00:37.186830    2694 image_list.go:144] Failed to pull gcr.io/kubernetes-e2e-test-images/ipc-utils:1.0 as user "root", retrying in 1s (2 of 5): exit status 1
I0212 00:14:30.444] W0212 00:01:10.991041    2694 image_list.go:144] Failed to pull gcr.io/kubernetes-e2e-test-images/net:1.0 as user "root", retrying in 1s (1 of 5): exit status 1
I0212 00:14:30.445] W0212 00:01:42.041400    2694 image_list.go:144] Failed to pull gcr.io/kubernetes-e2e-test-images/net:1.0 as user "root", retrying in 1s (2 of 5): exit status 1
I0212 00:14:30.445] I0212 00:04:09.950605    2694 kubelet.go:108] Starting kubelet
I0212 00:14:30.445] I0212 00:04:09.950722    2694 feature_gate.go:226] feature gates: &{map[]}
I0212 00:14:30.445] I0212 00:04:09.952582    2694 server.go:102] Starting server "kubelet" with command "/usr/bin/systemd-run --unit=kubelet-20190211T235912.service --slice=runtime.slice --remain-after-exit /tmp/node-e2e-20190211T235912/kubelet --kubeconfig /tmp/node-e2e-20190211T235912/kubeconfig --root-dir /var/lib/kubelet --v 4 --logtostderr --allow-privileged=true --dynamic-config-dir /tmp/node-e2e-20190211T235912/dynamic-kubelet-config --network-plugin=kubenet --cni-bin-dir /tmp/node-e2e-20190211T235912/cni/bin --cni-conf-dir /tmp/node-e2e-20190211T235912/cni/net.d --hostname-override tmp-node-e2e-96abaafd-ubuntu-gke-1804-d1703-0-v20181113 --container-runtime docker --container-runtime-endpoint unix:///var/run/dockershim.sock --config /tmp/node-e2e-20190211T235912/kubelet-config --experimental-kernel-memcg-notification=true --cgroups-per-qos=true --cgroup-root=/"
I0212 00:14:30.446] I0212 00:04:09.952613    2694 util.go:44] Running readiness check for service "kubelet"
I0212 00:14:30.446] I0212 00:04:09.952664    2694 server.go:130] Output file for server "kubelet": /tmp/node-e2e-20190211T235912/results/kubelet.log
I0212 00:14:30.446] I0212 00:04:09.964823    2694 server.go:172] Running health check for service "kubelet"
... skipping 1374 lines ...
I0212 00:14:30.625] STEP: Creating a kubernetes client
I0212 00:14:30.625] STEP: Building a namespace api object, basename container-runtime
I0212 00:14:30.625] Feb 12 00:05:38.385: INFO: Skipping waiting for service account
I0212 00:14:30.625] [It] should report termination message from log output if TerminationMessagePolicy FallbackToLogOnError is set [NodeConformance] [LinuxOnly]
I0212 00:14:30.625]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:205
I0212 00:14:30.626] STEP: create the container
I0212 00:14:30.626] STEP: wait for the container to reach Failed
I0212 00:14:30.626] STEP: get the container status
I0212 00:14:30.626] STEP: the container should be terminated
I0212 00:14:30.626] STEP: the termination message should be set
I0212 00:14:30.626] STEP: delete the container
I0212 00:14:30.626] [AfterEach] [k8s.io] Container Runtime
I0212 00:14:30.626]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:156
... skipping 372 lines ...
I0212 00:14:30.666]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:155
I0212 00:14:30.666] STEP: Creating a kubernetes client
I0212 00:14:30.666] STEP: Building a namespace api object, basename init-container
I0212 00:14:30.667] Feb 12 00:06:06.165: INFO: Skipping waiting for service account
I0212 00:14:30.667] [BeforeEach] [k8s.io] InitContainer [NodeConformance]
I0212 00:14:30.667]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43
I0212 00:14:30.667] [It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
I0212 00:14:30.667]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:704
I0212 00:14:30.667] STEP: creating the pod
I0212 00:14:30.667] Feb 12 00:06:06.165: INFO: PodSpec: initContainers in spec.initContainers
I0212 00:14:30.667] [AfterEach] [k8s.io] InitContainer [NodeConformance]
I0212 00:14:30.667]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:156
I0212 00:14:30.667] Feb 12 00:06:09.759: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 2 lines ...
I0212 00:14:30.668] Feb 12 00:06:15.833: INFO: namespace init-container-7580 deletion completed in 6.069138694s
I0212 00:14:30.668] 
I0212 00:14:30.668] 
I0212 00:14:30.668] • [SLOW TEST:9.672 seconds]
I0212 00:14:30.668] [k8s.io] InitContainer [NodeConformance]
I0212 00:14:30.668] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
I0212 00:14:30.668]   should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
I0212 00:14:30.668]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:704
I0212 00:14:30.668] ------------------------------
I0212 00:14:30.668] [BeforeEach] [k8s.io] Container Runtime
I0212 00:14:30.669]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:155
I0212 00:14:30.669] STEP: Creating a kubernetes client
I0212 00:14:30.669] STEP: Building a namespace api object, basename container-runtime
... skipping 236 lines ...
I0212 00:14:30.692] STEP: verifying the pod is in kubernetes
I0212 00:14:30.692] STEP: updating the pod
I0212 00:14:30.693] Feb 12 00:06:33.042: INFO: Successfully updated pod "pod-update-activedeadlineseconds-0c96945f-2e5a-11e9-89f7-42010a8a0023"
I0212 00:14:30.693] Feb 12 00:06:33.042: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-0c96945f-2e5a-11e9-89f7-42010a8a0023" in namespace "pods-6943" to be "terminated due to deadline exceeded"
I0212 00:14:30.693] Feb 12 00:06:33.047: INFO: Pod "pod-update-activedeadlineseconds-0c96945f-2e5a-11e9-89f7-42010a8a0023": Phase="Running", Reason="", readiness=true. Elapsed: 4.796149ms
I0212 00:14:30.693] Feb 12 00:06:35.049: INFO: Pod "pod-update-activedeadlineseconds-0c96945f-2e5a-11e9-89f7-42010a8a0023": Phase="Running", Reason="", readiness=true. Elapsed: 2.006974092s
I0212 00:14:30.693] Feb 12 00:06:37.051: INFO: Pod "pod-update-activedeadlineseconds-0c96945f-2e5a-11e9-89f7-42010a8a0023": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 4.009453601s
I0212 00:14:30.693] Feb 12 00:06:37.051: INFO: Pod "pod-update-activedeadlineseconds-0c96945f-2e5a-11e9-89f7-42010a8a0023" satisfied condition "terminated due to deadline exceeded"
I0212 00:14:30.693] [AfterEach] [k8s.io] Pods
I0212 00:14:30.694]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:156
I0212 00:14:30.694] Feb 12 00:06:37.051: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0212 00:14:30.694] STEP: Destroying namespace "pods-6943" for this suite.
I0212 00:14:30.694] Feb 12 00:06:45.060: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
... skipping 695 lines ...
I0212 00:14:30.763] [BeforeEach] [k8s.io] Security Context
I0212 00:14:30.763]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:35
I0212 00:14:30.763] [It] should run the container with readonly rootfs when readOnlyRootFilesystem=true [LinuxOnly] [NodeConformance]
I0212 00:14:30.763]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:138
I0212 00:14:30.763] Feb 12 00:07:22.160: INFO: Waiting up to 5m0s for pod "busybox-readonly-true-2b65a912-2e5a-11e9-a2b5-42010a8a0023" in namespace "security-context-test-9239" to be "success or failure"
I0212 00:14:30.764] Feb 12 00:07:22.167: INFO: Pod "busybox-readonly-true-2b65a912-2e5a-11e9-a2b5-42010a8a0023": Phase="Pending", Reason="", readiness=false. Elapsed: 6.802158ms
I0212 00:14:30.764] Feb 12 00:07:24.169: INFO: Pod "busybox-readonly-true-2b65a912-2e5a-11e9-a2b5-42010a8a0023": Phase="Failed", Reason="", readiness=false. Elapsed: 2.008767173s
I0212 00:14:30.764] Feb 12 00:07:24.169: INFO: Pod "busybox-readonly-true-2b65a912-2e5a-11e9-a2b5-42010a8a0023" satisfied condition "success or failure"
I0212 00:14:30.764] [AfterEach] [k8s.io] Security Context
I0212 00:14:30.764]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:156
I0212 00:14:30.764] Feb 12 00:07:24.169: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0212 00:14:30.764] STEP: Destroying namespace "security-context-test-9239" for this suite.
I0212 00:14:30.764] Feb 12 00:07:30.176: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
... skipping 2333 lines ...
I0212 00:14:31.060]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:155
I0212 00:14:31.060] STEP: Creating a kubernetes client
I0212 00:14:31.060] STEP: Building a namespace api object, basename init-container
I0212 00:14:31.060] Feb 12 00:10:45.365: INFO: Skipping waiting for service account
I0212 00:14:31.060] [BeforeEach] [k8s.io] InitContainer [NodeConformance]
I0212 00:14:31.061]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43
I0212 00:14:31.061] [It] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
I0212 00:14:31.061]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:704
I0212 00:14:31.061] STEP: creating the pod
I0212 00:14:31.061] Feb 12 00:10:45.365: INFO: PodSpec: initContainers in spec.initContainers
I0212 00:14:31.065] Feb 12 00:11:31.714: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-a4857047-2e5a-11e9-86f9-42010a8a0023", GenerateName:"", Namespace:"init-container-6018", SelfLink:"/api/v1/namespaces/init-container-6018/pods/pod-init-a4857047-2e5a-11e9-86f9-42010a8a0023", UID:"a4905577-2e5a-11e9-9647-42010a8a0023", ResourceVersion:"3284", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63685527045, loc:(*time.Location)(0xa2319e0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"365362054"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume(nil), InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount(nil), VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount(nil), VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount(nil), VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"Never", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc000d6b5e0), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"Default", NodeSelector:map[string]string(nil), ServiceAccountName:"", DeprecatedServiceAccount:"", AutomountServiceAccountToken:(*bool)(nil), NodeName:"tmp-node-e2e-96abaafd-ubuntu-gke-1804-d1703-0-v20181113", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc000b740c0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc000d6b660)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc000d6b680)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc000d6b690), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc000d6b694)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63685527045, loc:(*time.Location)(0xa2319e0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63685527045, loc:(*time.Location)(0xa2319e0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63685527045, loc:(*time.Location)(0xa2319e0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63685527045, loc:(*time.Location)(0xa2319e0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"10.138.0.35", PodIP:"10.100.0.172", StartTime:(*v1.Time)(0xc0010c46a0), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc0000ee770)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc0000ee7e0)}, Ready:false, RestartCount:3, Image:"busybox:1.29", ImageID:"docker-pullable://busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"docker://ee25f7b84d454f41470eab26c0a8013721177e28f94af239cb7c11f035f6f825"}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc0010c4700), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:""}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc0010c4740), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:""}}, QOSClass:"Guaranteed"}}
I0212 00:14:31.065] [AfterEach] [k8s.io] InitContainer [NodeConformance]
I0212 00:14:31.065]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:156
I0212 00:14:31.065] Feb 12 00:11:31.718: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0212 00:14:31.065] STEP: Destroying namespace "init-container-6018" for this suite.
I0212 00:14:31.066] Feb 12 00:11:53.738: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
I0212 00:14:31.066] Feb 12 00:11:53.778: INFO: namespace init-container-6018 deletion completed in 22.054349635s
I0212 00:14:31.066] 
I0212 00:14:31.066] 
I0212 00:14:31.066] • [SLOW TEST:68.424 seconds]
I0212 00:14:31.066] [k8s.io] InitContainer [NodeConformance]
I0212 00:14:31.066] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
I0212 00:14:31.066]   should not start app containers if init containers fail on a RestartAlways pod [Conformance]
I0212 00:14:31.066]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:704
I0212 00:14:31.066] ------------------------------
I0212 00:14:31.066] [BeforeEach] [k8s.io] Probing container
I0212 00:14:31.067]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:155
I0212 00:14:31.067] STEP: Creating a kubernetes client
I0212 00:14:31.067] STEP: Building a namespace api object, basename container-probe
... skipping 82 lines ...
I0212 00:14:31.075] I0212 00:14:23.298563    2694 services.go:155] Get log file "docker.log" with journalctl command [-u docker].
I0212 00:14:31.075] I0212 00:14:23.318372    2694 services.go:155] Get log file "kubelet.log" with journalctl command [-u kubelet-20190211T235912.service].
I0212 00:14:31.075] I0212 00:14:23.867302    2694 e2e_node_suite_test.go:190] Tests Finished
I0212 00:14:31.075] 
I0212 00:14:31.075] 
I0212 00:14:31.075] Ran 156 of 286 Specs in 895.191 seconds
I0212 00:14:31.075] SUCCESS! -- 156 Passed | 0 Failed | 0 Flaked | 0 Pending | 130 Skipped 
I0212 00:14:31.076] 
I0212 00:14:31.076] Ginkgo ran 1 suite in 14m57.436772083s
I0212 00:14:31.076] Test Suite Passed
I0212 00:14:31.076] 
I0212 00:14:31.076] Success Finished Test Suite on Host tmp-node-e2e-96abaafd-ubuntu-gke-1804-d1703-0-v20181113
I0212 00:14:31.076] <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<
... skipping 5 lines ...
W0212 00:14:31.177] 2019/02/12 00:14:31 process.go:155: Step 'go run /go/src/k8s.io/kubernetes/test/e2e_node/runner/remote/run_remote.go --cleanup --logtostderr --vmodule=*=4 --ssh-env=gce --results-dir=/workspace/_artifacts --project=k8s-jkns-pr-node-e2e --zone=us-west1-b --ssh-user=prow --ssh-key=/workspace/.ssh/google_compute_engine --ginkgo-flags=--nodes=8 --focus="\[NodeConformance\]" --skip="\[Flaky\]|\[Slow\]|\[Serial\]" --flakeAttempts=2 --test_args=--kubelet-flags="--cgroups-per-qos=true --cgroup-root=/" --test-timeout=1h5m0s --image-config-file=/workspace/test-infra/jobs/e2e_node/image-config.yaml' finished in 18m53.325334649s
W0212 00:14:31.177] 2019/02/12 00:14:31 node.go:42: Noop - Node DumpClusterLogs() - /workspace/_artifacts: 
W0212 00:14:31.178] 2019/02/12 00:14:31 node.go:52: Noop - Node Down()
W0212 00:14:31.178] 2019/02/12 00:14:31 process.go:96: Saved XML output to /workspace/_artifacts/junit_runner.xml.
W0212 00:14:31.178] 2019/02/12 00:14:31 process.go:153: Running: bash -c . hack/lib/version.sh && KUBE_ROOT=. kube::version::get_version_vars && echo "${KUBE_GIT_VERSION-}"
W0212 00:14:31.552] 2019/02/12 00:14:31 process.go:155: Step 'bash -c . hack/lib/version.sh && KUBE_ROOT=. kube::version::get_version_vars && echo "${KUBE_GIT_VERSION-}"' finished in 390.25434ms
W0212 00:14:31.552] 2019/02/12 00:14:31 main.go:297: Something went wrong: encountered 1 errors: [error during go run /go/src/k8s.io/kubernetes/test/e2e_node/runner/remote/run_remote.go --cleanup --logtostderr --vmodule=*=4 --ssh-env=gce --results-dir=/workspace/_artifacts --project=k8s-jkns-pr-node-e2e --zone=us-west1-b --ssh-user=prow --ssh-key=/workspace/.ssh/google_compute_engine --ginkgo-flags=--nodes=8 --focus="\[NodeConformance\]" --skip="\[Flaky\]|\[Slow\]|\[Serial\]" --flakeAttempts=2 --test_args=--kubelet-flags="--cgroups-per-qos=true --cgroup-root=/" --test-timeout=1h5m0s --image-config-file=/workspace/test-infra/jobs/e2e_node/image-config.yaml: exit status 1]
W0212 00:14:31.554] Traceback (most recent call last):
W0212 00:14:31.555]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_e2e.py", line 764, in <module>
W0212 00:14:31.555]     main(parse_args())
W0212 00:14:31.555]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_e2e.py", line 615, in main
W0212 00:14:31.555]     mode.start(runner_args)
W0212 00:14:31.555]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_e2e.py", line 262, in start
W0212 00:14:31.555]     check_env(env, self.command, *args)
W0212 00:14:31.555]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_e2e.py", line 111, in check_env
W0212 00:14:31.555]     subprocess.check_call(cmd, env=env)
W0212 00:14:31.555]   File "/usr/lib/python2.7/subprocess.py", line 186, in check_call
W0212 00:14:31.556]     raise CalledProcessError(retcode, cmd)
W0212 00:14:31.556] subprocess.CalledProcessError: Command '('kubetest', '--dump=/workspace/_artifacts', '--gcp-service-account=/etc/service-account/service-account.json', '--up', '--down', '--test', '--deployment=node', '--provider=gce', '--cluster=bootstrap-e2e', '--gcp-network=bootstrap-e2e', '--gcp-project=k8s-jkns-pr-node-e2e', '--gcp-zone=us-west1-b', '--node-test-args=--kubelet-flags="--cgroups-per-qos=true --cgroup-root=/"', '--node-tests=true', '--test_args=--nodes=8 --focus="\\[NodeConformance\\]" --skip="\\[Flaky\\]|\\[Slow\\]|\\[Serial\\]" --flakeAttempts=2', '--timeout=65m', '--node-args=--image-config-file=/workspace/test-infra/jobs/e2e_node/image-config.yaml')' returned non-zero exit status 1
E0212 00:14:31.564] Command failed
I0212 00:14:31.564] process 491 exited with code 1 after 18.9m
E0212 00:14:31.565] FAIL: pull-kubernetes-node-e2e
I0212 00:14:31.565] Call:  gcloud auth activate-service-account --key-file=/etc/service-account/service-account.json
W0212 00:14:32.096] Activated service account credentials for: [pr-kubekins@kubernetes-jenkins-pull.iam.gserviceaccount.com]
I0212 00:14:32.153] process 44947 exited with code 0 after 0.0m
I0212 00:14:32.153] Call:  gcloud config get-value account
I0212 00:14:32.452] process 44959 exited with code 0 after 0.0m
I0212 00:14:32.452] Will upload results to gs://kubernetes-jenkins/pr-logs using pr-kubekins@kubernetes-jenkins-pull.iam.gserviceaccount.com
I0212 00:14:32.452] Upload result and artifacts...
I0212 00:14:32.453] Gubernator results at https://gubernator.k8s.io/build/kubernetes-jenkins/pr-logs/pull/73940/pull-kubernetes-node-e2e/119399
I0212 00:14:32.453] Call:  gsutil ls gs://kubernetes-jenkins/pr-logs/pull/73940/pull-kubernetes-node-e2e/119399/artifacts
W0212 00:14:33.521] CommandException: One or more URLs matched no objects.
E0212 00:14:33.652] Command failed
I0212 00:14:33.653] process 44971 exited with code 1 after 0.0m
W0212 00:14:33.653] Remote dir gs://kubernetes-jenkins/pr-logs/pull/73940/pull-kubernetes-node-e2e/119399/artifacts not exist yet
I0212 00:14:33.653] Call:  gsutil -m -q -o GSUtil:use_magicfile=True cp -r -c -z log,txt,xml /workspace/_artifacts gs://kubernetes-jenkins/pr-logs/pull/73940/pull-kubernetes-node-e2e/119399/artifacts
I0212 00:14:36.521] process 45113 exited with code 0 after 0.0m
I0212 00:14:36.522] Call:  git rev-parse HEAD
I0212 00:14:36.526] process 45756 exited with code 0 after 0.0m
... skipping 20 lines ...