PR ixdy: bazel: initial support for cross-compilation
Result: FAILURE
Tests: 1 failed / 475 succeeded
Started: 2019-02-11 22:28
Elapsed: 21m53s
Revision
Builder: gke-prow-containerd-pool-99179761-s4k7
Refs: master:805a9e70, 73930:1bb8c244
pod: 35616c79-2e4c-11e9-a65a-0a580a6c0819
infra-commit: 89e68fa6f
job-version: v1.14.0-alpha.2.544+68694d2dfb3764
repo: k8s.io/kubernetes
repo-commit: 68694d2dfb3764aec936131def537a92f7c19212
repos: k8s.io/kubernetes: master:805a9e703698d0a8a86f405f861f9e3fd91b29c6, 73930:1bb8c244bf07197254c367c0c1327dd66f7048d0
revision: v1.14.0-alpha.2.544+68694d2dfb3764

Test Failures


Node Tests (20m40s)

error during go run /go/src/k8s.io/kubernetes/test/e2e_node/runner/remote/run_remote.go --cleanup --logtostderr --vmodule=*=4 --ssh-env=gce --results-dir=/workspace/_artifacts --project=k8s-jkns-pr-node-e2e --zone=us-west1-b --ssh-user=prow --ssh-key=/workspace/.ssh/google_compute_engine --ginkgo-flags=--nodes=8 --focus="\[NodeConformance\]" --skip="\[Flaky\]|\[Slow\]|\[Serial\]" --flakeAttempts=2 --test_args=--kubelet-flags="--cgroups-per-qos=true --cgroup-root=/" --test-timeout=1h5m0s --image-config-file=/workspace/test-infra/jobs/e2e_node/image-config.yaml: exit status 1
				from junit_runner.xml
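The failure above is recorded in junit_runner.xml. A minimal way to list the failing cases from such a file is sketched below; the flat `testsuite`/`testcase`/`failure` layout is an assumption about what the runner writes, not a documented contract.

```python
import xml.etree.ElementTree as ET

def failed_tests(junit_xml: str):
    """Return (name, message) pairs for every <testcase> with a <failure> child."""
    root = ET.fromstring(junit_xml)
    out = []
    for case in root.iter("testcase"):
        failure = case.find("failure")
        if failure is not None:
            # Prefer the message attribute; fall back to the element body.
            msg = failure.get("message") or (failure.text or "")
            out.append((case.get("name"), msg.strip()))
    return out
```

Running this against the junit_runner.xml from this job would surface the single "Node Tests" failure alongside its exit-status message.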



475 passed tests (collapsed)

388 skipped tests (collapsed)

Error lines from build-log.txt

... skipping 341 lines ...
W0211 22:33:24.316] I0211 22:33:24.315911    4570 node_e2e.go:108] GCI/COS node and GCI/COS mounter both detected, modifying --experimental-mounter-path accordingly
W0211 22:33:24.316] I0211 22:33:24.315953    4570 node_e2e.go:164] Starting tests on "tmp-node-e2e-85027fa4-cos-stable-63-10032-71-0"
W0211 22:33:24.388] I0211 22:33:24.387824    4570 node_e2e.go:108] GCI/COS node and GCI/COS mounter both detected, modifying --experimental-mounter-path accordingly
W0211 22:33:24.388] I0211 22:33:24.387875    4570 node_e2e.go:164] Starting tests on "tmp-node-e2e-85027fa4-cos-stable-60-9592-84-0"
W0211 22:33:25.485] I0211 22:33:25.484868    4570 node_e2e.go:164] Starting tests on "tmp-node-e2e-85027fa4-coreos-beta-1883-1-0-v20180911"
W0211 22:33:25.656] I0211 22:33:25.656287    4570 node_e2e.go:164] Starting tests on "tmp-node-e2e-85027fa4-ubuntu-gke-1804-d1703-0-v20181113"
W0211 22:36:08.865] I0211 22:36:08.865257    4570 remote.go:197] Test failed unexpectedly. Attempting to retrieve system logs (only works for nodes with journald)
W0211 22:36:09.563] I0211 22:36:09.563060    4570 remote.go:202] Got the system logs from journald; copying it back...
W0211 22:36:10.564] I0211 22:36:10.564435    4570 remote.go:122] Copying test artifacts from "tmp-node-e2e-85027fa4-cos-stable-60-9592-84-0"
W0211 22:36:12.070] I0211 22:36:12.069925    4570 run_remote.go:717] Deleting instance "tmp-node-e2e-85027fa4-cos-stable-60-9592-84-0"
I0211 22:36:12.702] 
I0211 22:36:12.702] >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
I0211 22:36:12.702] >                              START TEST                                >
... skipping 46 lines ...
I0211 22:36:12.707] Validating docker...
I0211 22:36:12.707] DOCKER_VERSION: 1.13.1
I0211 22:36:12.707] DOCKER_GRAPH_DRIVER: overlay2
I0211 22:36:12.707] PASS
I0211 22:36:12.707] I0211 22:33:29.720520    1280 e2e_node_suite_test.go:149] Pre-pulling images so that they are cached for the tests.
I0211 22:36:12.708] I0211 22:33:29.720547    1280 image_list.go:131] Pre-pulling images with docker [docker.io/library/busybox:1.29 docker.io/library/nginx:1.14-alpine gcr.io/kubernetes-e2e-test-images/entrypoint-tester:1.0 gcr.io/kubernetes-e2e-test-images/hostexec:1.1 gcr.io/kubernetes-e2e-test-images/ipc-utils:1.0 gcr.io/kubernetes-e2e-test-images/liveness:1.0 gcr.io/kubernetes-e2e-test-images/mounttest-user:1.0 gcr.io/kubernetes-e2e-test-images/mounttest:1.0 gcr.io/kubernetes-e2e-test-images/net:1.0 gcr.io/kubernetes-e2e-test-images/netexec:1.1 gcr.io/kubernetes-e2e-test-images/node-perf/npb-ep-amd64:1.0 gcr.io/kubernetes-e2e-test-images/node-perf/npb-is-amd64:1.0 gcr.io/kubernetes-e2e-test-images/node-perf/tf-wide-deep-amd64:1.0 gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0 gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 gcr.io/kubernetes-e2e-test-images/test-webserver:1.0 gcr.io/kubernetes-e2e-test-images/volume/gluster:1.0 gcr.io/kubernetes-e2e-test-images/volume/nfs:1.0 google/cadvisor:latest k8s.gcr.io/busybox@sha256:4bdd623e848417d96127e16037743f0cd8b528c026e9175e22a84f639eca58ff k8s.gcr.io/node-problem-detector:v0.4.1 k8s.gcr.io/nvidia-gpu-device-plugin@sha256:0842734032018be107fa2490c98156992911e3e1f2a21e059ff0105b07dd8e9e k8s.gcr.io/pause:3.1 k8s.gcr.io/stress:v1]
I0211 22:36:12.708] W0211 22:34:19.290084    1280 image_list.go:144] Failed to pull gcr.io/kubernetes-e2e-test-images/entrypoint-tester:1.0 as user "root", retrying in 1s (1 of 5): exit status 1
I0211 22:36:12.708] W0211 22:34:50.486874    1280 image_list.go:144] Failed to pull gcr.io/kubernetes-e2e-test-images/entrypoint-tester:1.0 as user "root", retrying in 1s (2 of 5): exit status 1
I0211 22:36:12.708] W0211 22:35:06.520314    1280 image_list.go:144] Failed to pull gcr.io/kubernetes-e2e-test-images/entrypoint-tester:1.0 as user "root", retrying in 1s (3 of 5): exit status 1
I0211 22:36:12.708] W0211 22:35:37.557454    1280 image_list.go:144] Failed to pull gcr.io/kubernetes-e2e-test-images/entrypoint-tester:1.0 as user "root", retrying in 1s (4 of 5): exit status 1
I0211 22:36:12.709] W0211 22:36:08.770880    1280 image_list.go:144] Failed to pull gcr.io/kubernetes-e2e-test-images/entrypoint-tester:1.0 as user "root", retrying in 1s (5 of 5): exit status 1
I0211 22:36:12.709] W0211 22:36:08.770920    1280 image_list.go:148] Could not pre-pull image gcr.io/kubernetes-e2e-test-images/entrypoint-tester:1.0 exit status 1 output: 1.0: Pulling from kubernetes-e2e-test-images/entrypoint-tester
I0211 22:36:12.709] Get https://gcr.io/v2/kubernetes-e2e-test-images/entrypoint-tester/manifests/sha256:ef1e5bf4aa80f899f51d173dfcc3106e8daf4c78c28be135b1d421c97f4c9354: dial tcp 74.125.142.82:443: i/o timeout
I0211 22:36:12.709] 
I0211 22:36:12.709] 
I0211 22:36:12.709] Failure [159.419 seconds]
I0211 22:36:12.709] [BeforeSuite] BeforeSuite 
I0211 22:36:12.709] _output/local/go/src/k8s.io/kubernetes/test/e2e_node/e2e_node_suite_test.go:142
I0211 22:36:12.710] 
I0211 22:36:12.710]   Expected error:
I0211 22:36:12.710]       <*exec.ExitError | 0xc0001dc220>: {
I0211 22:36:12.710]           ProcessState: {
I0211 22:36:12.710]               pid: 1427,
I0211 22:36:12.710]               status: 256,
I0211 22:36:12.710]               rusage: {
I0211 22:36:12.710]                   Utime: {Sec: 0, Usec: 6000},
... skipping 22 lines ...
I0211 22:36:12.712]   _output/local/go/src/k8s.io/kubernetes/test/e2e_node/e2e_node_suite_test.go:151
I0211 22:36:12.712] ------------------------------
I0211 22:36:12.712] Failure [159.429 seconds]
I0211 22:36:12.712] [BeforeSuite] BeforeSuite 
I0211 22:36:12.712] _output/local/go/src/k8s.io/kubernetes/test/e2e_node/e2e_node_suite_test.go:142
I0211 22:36:12.712] 
I0211 22:36:12.712]   BeforeSuite on Node 1 failed
I0211 22:36:12.712] 
I0211 22:36:12.712]   _output/local/go/src/k8s.io/kubernetes/test/e2e_node/e2e_node_suite_test.go:142
I0211 22:36:12.713] ------------------------------
... skipping 48 lines (six repeats of the same BeforeSuite failure, 159.4-159.5 seconds each, one per remaining parallel Ginkgo node) ...
I0211 22:36:12.717] I0211 22:36:08.824179    1280 e2e_node_suite_test.go:190] Tests Finished
I0211 22:36:12.717] 
I0211 22:36:12.717] 
I0211 22:36:12.717] Ran 2288 of 0 Specs in 159.587 seconds
I0211 22:36:12.717] FAIL! -- 0 Passed | 2288 Failed | 0 Flaked | 0 Pending | 0 Skipped 
I0211 22:36:12.717] 
I0211 22:36:12.717] Ginkgo ran 1 suite in 2m43.819355077s
I0211 22:36:12.717] Test Suite Failed
I0211 22:36:12.717] 
I0211 22:36:12.717] Failure Finished Test Suite on Host tmp-node-e2e-85027fa4-cos-stable-60-9592-84-0
I0211 22:36:12.718] [command [ssh -o UserKnownHostsFile=/dev/null -o IdentitiesOnly=yes -o CheckHostIP=no -o StrictHostKeyChecking=no -o ServerAliveInterval=30 -o LogLevel=ERROR -i /workspace/.ssh/google_compute_engine prow@35.203.181.53 -- sudo sh -c 'cd /tmp/node-e2e-20190211T223312 && timeout -k 30s 3900.000000s ./ginkgo --nodes=8 --focus="\[NodeConformance\]" --skip="\[Flaky\]|\[Slow\]|\[Serial\]" --flakeAttempts=2 ./e2e_node.test -- --system-spec-name= --system-spec-file= --logtostderr --v 4 --node-name=tmp-node-e2e-85027fa4-cos-stable-60-9592-84-0 --report-dir=/tmp/node-e2e-20190211T223312/results --report-prefix=cos-stable2 --image-description="cos-stable-60-9592-84-0" --kubelet-flags=--experimental-mounter-path=/tmp/node-e2e-20190211T223312/mounter --kubelet-flags=--experimental-kernel-memcg-notification=true --kubelet-flags="--cgroups-per-qos=true --cgroup-root=/"'] failed with error: exit status 1, command [scp -o UserKnownHostsFile=/dev/null -o IdentitiesOnly=yes -o CheckHostIP=no -o StrictHostKeyChecking=no -o ServerAliveInterval=30 -o LogLevel=ERROR -i /workspace/.ssh/google_compute_engine -r prow@35.203.181.53:/tmp/node-e2e-20190211T223312/results/*.log /workspace/_artifacts/tmp-node-e2e-85027fa4-cos-stable-60-9592-84-0] failed with error: exit status 1]
I0211 22:36:12.718] <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<
I0211 22:36:12.718] <                              FINISH TEST                               <
I0211 22:36:12.718] <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<
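The root cause in the block above is an i/o timeout pre-pulling gcr.io/kubernetes-e2e-test-images/entrypoint-tester:1.0, retried five times before BeforeSuite gave up. A small parser for those image_list.go retry warnings can summarize how far each image got; this is a sketch, and the line format is inferred from this log rather than guaranteed stable.

```python
import re

# Matches lines like:
#   Failed to pull <image> as user "root", retrying in 1s (N of M): exit status 1
RETRY_RE = re.compile(r"Failed to pull (\S+) as user .*?\((\d+) of (\d+)\)")

def prepull_attempts(lines):
    """Map each image to the highest retry attempt observed in the log."""
    attempts = {}
    for line in lines:
        m = RETRY_RE.search(line)
        if m:
            image, n = m.group(1), int(m.group(2))
            attempts[image] = max(attempts.get(image, 0), n)
    return attempts
```

Fed the COS host's log above, this would report the entrypoint-tester image exhausting all 5 attempts, while the Ubuntu host (later in the log) recovered after 1.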
I0211 22:36:12.719] 
W0211 22:46:10.670] I0211 22:46:10.670384    4570 remote.go:122] Copying test artifacts from "tmp-node-e2e-85027fa4-ubuntu-gke-1804-d1703-0-v20181113"
W0211 22:46:16.609] I0211 22:46:16.608795    4570 run_remote.go:717] Deleting instance "tmp-node-e2e-85027fa4-ubuntu-gke-1804-d1703-0-v20181113"
... skipping 49 lines ...
I0211 22:46:17.235] Validating docker...
I0211 22:46:17.235] DOCKER_VERSION: 17.03.2-ce
I0211 22:46:17.235] DOCKER_GRAPH_DRIVER: overlay2
I0211 22:46:17.235] PASS
I0211 22:46:17.236] I0211 22:33:29.191012    2697 e2e_node_suite_test.go:149] Pre-pulling images so that they are cached for the tests.
I0211 22:46:17.236] I0211 22:33:29.191041    2697 image_list.go:131] Pre-pulling images with docker [docker.io/library/busybox:1.29 docker.io/library/nginx:1.14-alpine gcr.io/kubernetes-e2e-test-images/entrypoint-tester:1.0 gcr.io/kubernetes-e2e-test-images/hostexec:1.1 gcr.io/kubernetes-e2e-test-images/ipc-utils:1.0 gcr.io/kubernetes-e2e-test-images/liveness:1.0 gcr.io/kubernetes-e2e-test-images/mounttest-user:1.0 gcr.io/kubernetes-e2e-test-images/mounttest:1.0 gcr.io/kubernetes-e2e-test-images/net:1.0 gcr.io/kubernetes-e2e-test-images/netexec:1.1 gcr.io/kubernetes-e2e-test-images/node-perf/npb-ep-amd64:1.0 gcr.io/kubernetes-e2e-test-images/node-perf/npb-is-amd64:1.0 gcr.io/kubernetes-e2e-test-images/node-perf/tf-wide-deep-amd64:1.0 gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0 gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 gcr.io/kubernetes-e2e-test-images/test-webserver:1.0 gcr.io/kubernetes-e2e-test-images/volume/gluster:1.0 gcr.io/kubernetes-e2e-test-images/volume/nfs:1.0 google/cadvisor:latest k8s.gcr.io/busybox@sha256:4bdd623e848417d96127e16037743f0cd8b528c026e9175e22a84f639eca58ff k8s.gcr.io/node-problem-detector:v0.4.1 k8s.gcr.io/nvidia-gpu-device-plugin@sha256:0842734032018be107fa2490c98156992911e3e1f2a21e059ff0105b07dd8e9e k8s.gcr.io/pause:3.1 k8s.gcr.io/stress:v1]
I0211 22:46:17.236] W0211 22:34:04.060894    2697 image_list.go:144] Failed to pull gcr.io/kubernetes-e2e-test-images/entrypoint-tester:1.0 as user "root", retrying in 1s (1 of 5): exit status 1
I0211 22:46:17.237] I0211 22:35:46.038955    2697 kubelet.go:108] Starting kubelet
I0211 22:46:17.237] I0211 22:35:46.039041    2697 feature_gate.go:226] feature gates: &{map[]}
I0211 22:46:17.237] I0211 22:35:46.042281    2697 server.go:102] Starting server "kubelet" with command "/usr/bin/systemd-run --unit=kubelet-20190211T223312.service --slice=runtime.slice --remain-after-exit /tmp/node-e2e-20190211T223312/kubelet --kubeconfig /tmp/node-e2e-20190211T223312/kubeconfig --root-dir /var/lib/kubelet --v 4 --logtostderr --allow-privileged=true --dynamic-config-dir /tmp/node-e2e-20190211T223312/dynamic-kubelet-config --network-plugin=kubenet --cni-bin-dir /tmp/node-e2e-20190211T223312/cni/bin --cni-conf-dir /tmp/node-e2e-20190211T223312/cni/net.d --hostname-override tmp-node-e2e-85027fa4-ubuntu-gke-1804-d1703-0-v20181113 --container-runtime docker --container-runtime-endpoint unix:///var/run/dockershim.sock --config /tmp/node-e2e-20190211T223312/kubelet-config --experimental-kernel-memcg-notification=true --cgroups-per-qos=true --cgroup-root=/"
I0211 22:46:17.237] I0211 22:35:46.042339    2697 util.go:44] Running readiness check for service "kubelet"
I0211 22:46:17.237] I0211 22:35:46.042463    2697 server.go:130] Output file for server "kubelet": /tmp/node-e2e-20190211T223312/results/kubelet.log
I0211 22:46:17.238] I0211 22:35:46.063254    2697 server.go:172] Running health check for service "kubelet"
... skipping 1455 lines ...
I0211 22:46:17.389]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:35
I0211 22:46:17.389] [It] should run the container with readonly rootfs when readOnlyRootFilesystem=true [LinuxOnly] [NodeConformance]
I0211 22:46:17.389]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:138
I0211 22:46:17.389] Feb 11 22:38:16.615: INFO: Waiting up to 5m0s for pod "busybox-readonly-true-b933e595-2e4d-11e9-b91d-42010a8a0050" in namespace "security-context-test-8088" to be "success or failure"
I0211 22:46:17.390] Feb 11 22:38:16.623: INFO: Pod "busybox-readonly-true-b933e595-2e4d-11e9-b91d-42010a8a0050": Phase="Pending", Reason="", readiness=false. Elapsed: 8.308077ms
I0211 22:46:17.390] Feb 11 22:38:18.625: INFO: Pod "busybox-readonly-true-b933e595-2e4d-11e9-b91d-42010a8a0050": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010202413s
I0211 22:46:17.390] Feb 11 22:38:20.627: INFO: Pod "busybox-readonly-true-b933e595-2e4d-11e9-b91d-42010a8a0050": Phase="Failed", Reason="", readiness=false. Elapsed: 4.012074649s
I0211 22:46:17.390] Feb 11 22:38:20.627: INFO: Pod "busybox-readonly-true-b933e595-2e4d-11e9-b91d-42010a8a0050" satisfied condition "success or failure"
I0211 22:46:17.390] [AfterEach] [k8s.io] Security Context
I0211 22:46:17.390]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:156
I0211 22:46:17.390] Feb 11 22:38:20.627: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0211 22:46:17.390] STEP: Destroying namespace "security-context-test-8088" for this suite.
I0211 22:46:17.391] Feb 11 22:38:26.634: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
... skipping 851 lines ...
I0211 22:46:17.503]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:155
I0211 22:46:17.503] STEP: Creating a kubernetes client
I0211 22:46:17.503] STEP: Building a namespace api object, basename init-container
I0211 22:46:17.503] Feb 11 22:39:28.713: INFO: Skipping waiting for service account
I0211 22:46:17.503] [BeforeEach] [k8s.io] InitContainer [NodeConformance]
I0211 22:46:17.503]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43
I0211 22:46:17.503] [It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
I0211 22:46:17.504]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:704
I0211 22:46:17.504] STEP: creating the pod
I0211 22:46:17.504] Feb 11 22:39:28.713: INFO: PodSpec: initContainers in spec.initContainers
I0211 22:46:17.504] [AfterEach] [k8s.io] InitContainer [NodeConformance]
I0211 22:46:17.504]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:156
I0211 22:46:17.504] Feb 11 22:39:31.599: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 2 lines ...
I0211 22:46:17.504] Feb 11 22:39:37.661: INFO: namespace init-container-1844 deletion completed in 6.053252829s
I0211 22:46:17.504] 
I0211 22:46:17.505] 
I0211 22:46:17.505] • [SLOW TEST:8.953 seconds]
I0211 22:46:17.505] [k8s.io] InitContainer [NodeConformance]
I0211 22:46:17.505] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
I0211 22:46:17.505]   should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
I0211 22:46:17.505]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:704
I0211 22:46:17.505] ------------------------------
I0211 22:46:17.505] SSS
I0211 22:46:17.505] ------------------------------
I0211 22:46:17.505] [BeforeEach] [sig-storage] Projected downwardAPI
I0211 22:46:17.505]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:155
... skipping 1251 lines ...
I0211 22:46:17.635] STEP: verifying the pod is in kubernetes
I0211 22:46:17.635] STEP: updating the pod
I0211 22:46:17.635] Feb 11 22:40:55.345: INFO: Successfully updated pod "pod-update-activedeadlineseconds-16431be1-2e4e-11e9-b91d-42010a8a0050"
I0211 22:46:17.635] Feb 11 22:40:55.345: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-16431be1-2e4e-11e9-b91d-42010a8a0050" in namespace "pods-7630" to be "terminated due to deadline exceeded"
I0211 22:46:17.636] Feb 11 22:40:55.348: INFO: Pod "pod-update-activedeadlineseconds-16431be1-2e4e-11e9-b91d-42010a8a0050": Phase="Running", Reason="", readiness=true. Elapsed: 2.230369ms
I0211 22:46:17.636] Feb 11 22:40:57.360: INFO: Pod "pod-update-activedeadlineseconds-16431be1-2e4e-11e9-b91d-42010a8a0050": Phase="Running", Reason="", readiness=true. Elapsed: 2.014124815s
I0211 22:46:17.636] Feb 11 22:40:59.361: INFO: Pod "pod-update-activedeadlineseconds-16431be1-2e4e-11e9-b91d-42010a8a0050": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 4.016024879s
I0211 22:46:17.636] Feb 11 22:40:59.361: INFO: Pod "pod-update-activedeadlineseconds-16431be1-2e4e-11e9-b91d-42010a8a0050" satisfied condition "terminated due to deadline exceeded"
I0211 22:46:17.636] [AfterEach] [k8s.io] Pods
I0211 22:46:17.636]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:156
I0211 22:46:17.636] Feb 11 22:40:59.362: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0211 22:46:17.636] STEP: Destroying namespace "pods-7630" for this suite.
I0211 22:46:17.637] Feb 11 22:41:05.373: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
... skipping 328 lines ...
I0211 22:46:17.671]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:155
I0211 22:46:17.671] STEP: Creating a kubernetes client
I0211 22:46:17.671] STEP: Building a namespace api object, basename init-container
I0211 22:46:17.671] Feb 11 22:40:33.568: INFO: Skipping waiting for service account
I0211 22:46:17.671] [BeforeEach] [k8s.io] InitContainer [NodeConformance]
I0211 22:46:17.671]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43
I0211 22:46:17.672] [It] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
I0211 22:46:17.672]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:704
I0211 22:46:17.672] STEP: creating the pod
I0211 22:46:17.672] Feb 11 22:40:33.568: INFO: PodSpec: initContainers in spec.initContainers
I0211 22:46:17.676] Feb 11 22:41:19.554: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-0ad6d4d6-2e4e-11e9-96c5-42010a8a0050", GenerateName:"", Namespace:"init-container-5429", SelfLink:"/api/v1/namespaces/init-container-5429/pods/pod-init-0ad6d4d6-2e4e-11e9-96c5-42010a8a0050", UID:"0ad795e7-2e4e-11e9-adeb-42010a8a0050", ResourceVersion:"2459", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63685521633, loc:(*time.Location)(0xa2319e0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"568895372"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume(nil), InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount(nil), VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount(nil), 
VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}, "cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount(nil), VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"Never", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc0006be3b0), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"Default", NodeSelector:map[string]string(nil), ServiceAccountName:"", DeprecatedServiceAccount:"", AutomountServiceAccountToken:(*bool)(nil), NodeName:"tmp-node-e2e-85027fa4-ubuntu-gke-1804-d1703-0-v20181113", HostNetwork:false, HostPID:false, HostIPC:false, 
ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc001330120), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0006be470)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0006be4a0)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc0006be4e0), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc0006be4e4)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63685521633, loc:(*time.Location)(0xa2319e0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63685521633, loc:(*time.Location)(0xa2319e0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63685521633, loc:(*time.Location)(0xa2319e0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63685521633, loc:(*time.Location)(0xa2319e0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"10.138.0.80", PodIP:"10.100.0.117", StartTime:(*v1.Time)(0xc000b7c700), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc00179ad90)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc00179ae00)}, Ready:false, RestartCount:3, Image:"busybox:1.29", ImageID:"docker-pullable://busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"docker://5f96414fe2df1b40818059ebdb5f9d7b7b9e8de317d3c47be06b39ea7fda6304"}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc000b7c7a0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:""}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc000b7c7e0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:""}}, QOSClass:"Guaranteed"}}
I0211 22:46:17.676] [AfterEach] [k8s.io] InitContainer [NodeConformance]
I0211 22:46:17.676]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:156
I0211 22:46:17.676] Feb 11 22:41:19.554: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0211 22:46:17.677] STEP: Destroying namespace "init-container-5429" for this suite.
I0211 22:46:17.677] Feb 11 22:41:43.582: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
I0211 22:46:17.677] Feb 11 22:41:43.681: INFO: namespace init-container-5429 deletion completed in 24.119060424s
I0211 22:46:17.677] 
I0211 22:46:17.677] 
I0211 22:46:17.677] • [SLOW TEST:70.117 seconds]
I0211 22:46:17.677] [k8s.io] InitContainer [NodeConformance]
I0211 22:46:17.677] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
I0211 22:46:17.677]   should not start app containers if init containers fail on a RestartAlways pod [Conformance]
I0211 22:46:17.677]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:704
I0211 22:46:17.678] ------------------------------
I0211 22:46:17.678] [BeforeEach] [k8s.io] Kubelet
I0211 22:46:17.678]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:155
I0211 22:46:17.678] STEP: Creating a kubernetes client
I0211 22:46:17.678] STEP: Building a namespace api object, basename kubelet-test
... skipping 288 lines ...
I0211 22:46:17.708] Feb 11 22:36:52.436: INFO: Skipping waiting for service account
I0211 22:46:17.708] [It] should be able to pull from private registry with credential provider [NodeConformance]
I0211 22:46:17.708]   _output/local/go/src/k8s.io/kubernetes/test/e2e_node/runtime_conformance_test.go:69
I0211 22:46:17.708] STEP: create the container
I0211 22:46:17.708] STEP: check the container status
I0211 22:46:17.708] STEP: delete the container
I0211 22:46:17.708] Feb 11 22:41:52.624: INFO: No.1 attempt failed: expected container state: Running, got: "Waiting", retrying...
I0211 22:46:17.709] STEP: create the container
I0211 22:46:17.709] STEP: check the container status
I0211 22:46:17.709] STEP: delete the container
I0211 22:46:17.709] [AfterEach] [k8s.io] Container Runtime Conformance Test
I0211 22:46:17.709]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:156
I0211 22:46:17.709] Feb 11 22:41:55.678: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 638 lines ...
I0211 22:46:17.788] STEP: Creating a kubernetes client
I0211 22:46:17.788] STEP: Building a namespace api object, basename container-runtime
I0211 22:46:17.788] Feb 11 22:42:40.375: INFO: Skipping waiting for service account
I0211 22:46:17.788] [It] should report termination message from log output if TerminationMessagePolicy FallbackToLogOnError is set [NodeConformance] [LinuxOnly]
I0211 22:46:17.788]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:205
I0211 22:46:17.788] STEP: create the container
I0211 22:46:17.788] STEP: wait for the container to reach Failed
I0211 22:46:17.788] STEP: get the container status
I0211 22:46:17.789] STEP: the container should be terminated
I0211 22:46:17.789] STEP: the termination message should be set
I0211 22:46:17.789] STEP: delete the container
I0211 22:46:17.789] [AfterEach] [k8s.io] Container Runtime
I0211 22:46:17.789]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:156
... skipping 294 lines ...
I0211 22:46:17.824] I0211 22:46:09.914107    2697 services.go:155] Get log file "docker.log" with journalctl command [-u docker].
I0211 22:46:17.824] I0211 22:46:09.936817    2697 services.go:155] Get log file "kubelet.log" with journalctl command [-u kubelet-20190211T223312.service].
I0211 22:46:17.824] I0211 22:46:10.600372    2697 e2e_node_suite_test.go:190] Tests Finished
I0211 22:46:17.824] 
I0211 22:46:17.824] 
I0211 22:46:17.825] Ran 156 of 286 Specs in 761.869 seconds
I0211 22:46:17.825] SUCCESS! -- 156 Passed | 0 Failed | 0 Flaked | 0 Pending | 130 Skipped 
I0211 22:46:17.825] 
I0211 22:46:17.825] Ginkgo ran 1 suite in 12m44.256783426s
I0211 22:46:17.825] Test Suite Passed
I0211 22:46:17.825] 
I0211 22:46:17.825] Success Finished Test Suite on Host tmp-node-e2e-85027fa4-ubuntu-gke-1804-d1703-0-v20181113
I0211 22:46:17.825] <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<
... skipping 54 lines ...
I0211 22:46:32.382] Validating docker...
I0211 22:46:32.382] DOCKER_VERSION: 18.06.1-ce
I0211 22:46:32.383] DOCKER_GRAPH_DRIVER: overlay2
I0211 22:46:32.383] PASS
I0211 22:46:32.383] I0211 22:33:28.884576    1293 e2e_node_suite_test.go:149] Pre-pulling images so that they are cached for the tests.
I0211 22:46:32.384] I0211 22:33:28.884600    1293 image_list.go:131] Pre-pulling images with docker [docker.io/library/busybox:1.29 docker.io/library/nginx:1.14-alpine gcr.io/kubernetes-e2e-test-images/entrypoint-tester:1.0 gcr.io/kubernetes-e2e-test-images/hostexec:1.1 gcr.io/kubernetes-e2e-test-images/ipc-utils:1.0 gcr.io/kubernetes-e2e-test-images/liveness:1.0 gcr.io/kubernetes-e2e-test-images/mounttest-user:1.0 gcr.io/kubernetes-e2e-test-images/mounttest:1.0 gcr.io/kubernetes-e2e-test-images/net:1.0 gcr.io/kubernetes-e2e-test-images/netexec:1.1 gcr.io/kubernetes-e2e-test-images/node-perf/npb-ep-amd64:1.0 gcr.io/kubernetes-e2e-test-images/node-perf/npb-is-amd64:1.0 gcr.io/kubernetes-e2e-test-images/node-perf/tf-wide-deep-amd64:1.0 gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0 gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 gcr.io/kubernetes-e2e-test-images/test-webserver:1.0 gcr.io/kubernetes-e2e-test-images/volume/gluster:1.0 gcr.io/kubernetes-e2e-test-images/volume/nfs:1.0 google/cadvisor:latest k8s.gcr.io/busybox@sha256:4bdd623e848417d96127e16037743f0cd8b528c026e9175e22a84f639eca58ff k8s.gcr.io/node-problem-detector:v0.4.1 k8s.gcr.io/nvidia-gpu-device-plugin@sha256:0842734032018be107fa2490c98156992911e3e1f2a21e059ff0105b07dd8e9e k8s.gcr.io/pause:3.1 k8s.gcr.io/stress:v1]
I0211 22:46:32.384] W0211 22:33:50.297567    1293 image_list.go:144] Failed to pull gcr.io/kubernetes-e2e-test-images/ipc-utils:1.0 as user "root", retrying in 1s (1 of 5): exit status 1
I0211 22:46:32.384] W0211 22:34:08.612472    1293 image_list.go:144] Failed to pull gcr.io/kubernetes-e2e-test-images/mounttest:1.0 as user "root", retrying in 1s (1 of 5): exit status 1
I0211 22:46:32.384] I0211 22:37:18.918574    1293 e2e_node_suite_test.go:219] Locksmithd is masked successfully
I0211 22:46:32.384] I0211 22:37:18.918596    1293 kubelet.go:108] Starting kubelet
I0211 22:46:32.384] I0211 22:37:18.918663    1293 feature_gate.go:226] feature gates: &{map[]}
I0211 22:46:32.385] I0211 22:37:18.941950    1293 server.go:102] Starting server "kubelet" with command "/bin/systemd-run --unit=kubelet-20190211T223312.service --slice=runtime.slice --remain-after-exit /tmp/node-e2e-20190211T223312/kubelet --kubeconfig /tmp/node-e2e-20190211T223312/kubeconfig --root-dir /var/lib/kubelet --v 4 --logtostderr --allow-privileged=true --dynamic-config-dir /tmp/node-e2e-20190211T223312/dynamic-kubelet-config --network-plugin=kubenet --cni-bin-dir /tmp/node-e2e-20190211T223312/cni/bin --cni-conf-dir /tmp/node-e2e-20190211T223312/cni/net.d --hostname-override tmp-node-e2e-85027fa4-coreos-beta-1883-1-0-v20180911 --container-runtime docker --container-runtime-endpoint unix:///var/run/dockershim.sock --config /tmp/node-e2e-20190211T223312/kubelet-config --cgroups-per-qos=true --cgroup-root=/"
I0211 22:46:32.385] I0211 22:37:18.941998    1293 util.go:44] Running readiness check for service "kubelet"
I0211 22:46:32.385] I0211 22:37:18.942046    1293 server.go:130] Output file for server "kubelet": /tmp/node-e2e-20190211T223312/results/kubelet.log
... skipping 72 lines ...
I0211 22:46:32.395]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:138
I0211 22:46:32.395] Feb 11 22:37:26.410: INFO: Waiting up to 5m0s for pod "busybox-readonly-true-9b348463-2e4d-11e9-9566-42010a8a0053" in namespace "security-context-test-3746" to be "success or failure"
I0211 22:46:32.395] Feb 11 22:37:26.451: INFO: Pod "busybox-readonly-true-9b348463-2e4d-11e9-9566-42010a8a0053": Phase="Pending", Reason="", readiness=false. Elapsed: 40.576005ms
I0211 22:46:32.395] Feb 11 22:37:28.457: INFO: Pod "busybox-readonly-true-9b348463-2e4d-11e9-9566-42010a8a0053": Phase="Pending", Reason="", readiness=false. Elapsed: 2.046763944s
I0211 22:46:32.396] Feb 11 22:37:30.461: INFO: Pod "busybox-readonly-true-9b348463-2e4d-11e9-9566-42010a8a0053": Phase="Pending", Reason="", readiness=false. Elapsed: 4.051032351s
I0211 22:46:32.396] Feb 11 22:37:32.463: INFO: Pod "busybox-readonly-true-9b348463-2e4d-11e9-9566-42010a8a0053": Phase="Pending", Reason="", readiness=false. Elapsed: 6.053005457s
I0211 22:46:32.396] Feb 11 22:37:34.465: INFO: Pod "busybox-readonly-true-9b348463-2e4d-11e9-9566-42010a8a0053": Phase="Failed", Reason="", readiness=false. Elapsed: 8.054683565s
I0211 22:46:32.396] Feb 11 22:37:34.465: INFO: Pod "busybox-readonly-true-9b348463-2e4d-11e9-9566-42010a8a0053" satisfied condition "success or failure"
I0211 22:46:32.396] [AfterEach] [k8s.io] Security Context
I0211 22:46:32.396]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:156
I0211 22:46:32.397] Feb 11 22:37:34.465: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0211 22:46:32.397] STEP: Destroying namespace "security-context-test-3746" for this suite.
I0211 22:46:32.397] Feb 11 22:37:40.498: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
... skipping 1091 lines ...
I0211 22:46:32.538] STEP: Creating a kubernetes client
I0211 22:46:32.538] STEP: Building a namespace api object, basename container-runtime
I0211 22:46:32.538] Feb 11 22:38:51.968: INFO: Skipping waiting for service account
I0211 22:46:32.538] [It] should report termination message from log output if TerminationMessagePolicy FallbackToLogOnError is set [NodeConformance] [LinuxOnly]
I0211 22:46:32.538]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:205
I0211 22:46:32.538] STEP: create the container
I0211 22:46:32.538] STEP: wait for the container to reach Failed
I0211 22:46:32.538] STEP: get the container status
I0211 22:46:32.538] STEP: the container should be terminated
I0211 22:46:32.538] STEP: the termination message should be set
I0211 22:46:32.539] STEP: delete the container
I0211 22:46:32.539] [AfterEach] [k8s.io] Container Runtime
I0211 22:46:32.539]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:156
... skipping 846 lines ...
I0211 22:46:32.658] STEP: verifying the pod is in kubernetes
I0211 22:46:32.658] STEP: updating the pod
I0211 22:46:32.658] Feb 11 22:40:19.122: INFO: Successfully updated pod "pod-update-activedeadlineseconds-00ba3a96-2e4e-11e9-add0-42010a8a0053"
I0211 22:46:32.658] Feb 11 22:40:19.122: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-00ba3a96-2e4e-11e9-add0-42010a8a0053" in namespace "pods-3908" to be "terminated due to deadline exceeded"
I0211 22:46:32.658] Feb 11 22:40:19.123: INFO: Pod "pod-update-activedeadlineseconds-00ba3a96-2e4e-11e9-add0-42010a8a0053": Phase="Running", Reason="", readiness=true. Elapsed: 1.643019ms
I0211 22:46:32.659] Feb 11 22:40:21.125: INFO: Pod "pod-update-activedeadlineseconds-00ba3a96-2e4e-11e9-add0-42010a8a0053": Phase="Running", Reason="", readiness=true. Elapsed: 2.003605899s
I0211 22:46:32.659] Feb 11 22:40:23.128: INFO: Pod "pod-update-activedeadlineseconds-00ba3a96-2e4e-11e9-add0-42010a8a0053": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 4.00599151s
I0211 22:46:32.659] Feb 11 22:40:23.128: INFO: Pod "pod-update-activedeadlineseconds-00ba3a96-2e4e-11e9-add0-42010a8a0053" satisfied condition "terminated due to deadline exceeded"
I0211 22:46:32.659] [AfterEach] [k8s.io] Pods
I0211 22:46:32.659]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:156
I0211 22:46:32.659] Feb 11 22:40:23.128: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0211 22:46:32.659] STEP: Destroying namespace "pods-3908" for this suite.
I0211 22:46:32.660] Feb 11 22:40:29.139: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
... skipping 1603 lines ...
I0211 22:46:32.871]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:155
I0211 22:46:32.871] STEP: Creating a kubernetes client
I0211 22:46:32.871] STEP: Building a namespace api object, basename init-container
I0211 22:46:32.871] Feb 11 22:42:45.369: INFO: Skipping waiting for service account
I0211 22:46:32.872] [BeforeEach] [k8s.io] InitContainer [NodeConformance]
I0211 22:46:32.872]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43
I0211 22:46:32.872] [It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
I0211 22:46:32.872]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:704
I0211 22:46:32.872] STEP: creating the pod
I0211 22:46:32.872] Feb 11 22:42:45.369: INFO: PodSpec: initContainers in spec.initContainers
I0211 22:46:32.872] [AfterEach] [k8s.io] InitContainer [NodeConformance]
I0211 22:46:32.872]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:156
I0211 22:46:32.872] Feb 11 22:42:48.310: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 2 lines ...
I0211 22:46:32.873] Feb 11 22:42:54.459: INFO: namespace init-container-4746 deletion completed in 6.141225598s
I0211 22:46:32.873] 
I0211 22:46:32.873] 
I0211 22:46:32.873] • [SLOW TEST:9.093 seconds]
I0211 22:46:32.873] [k8s.io] InitContainer [NodeConformance]
I0211 22:46:32.873] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
I0211 22:46:32.873]   should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
I0211 22:46:32.873]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:704
I0211 22:46:32.873] ------------------------------
I0211 22:46:32.874] S
I0211 22:46:32.874] ------------------------------
I0211 22:46:32.874] [BeforeEach] [k8s.io] Container Lifecycle Hook
I0211 22:46:32.874]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:155
... skipping 540 lines ...
I0211 22:46:32.929]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:155
I0211 22:46:32.929] STEP: Creating a kubernetes client
I0211 22:46:32.930] STEP: Building a namespace api object, basename init-container
I0211 22:46:32.930] Feb 11 22:42:52.146: INFO: Skipping waiting for service account
I0211 22:46:32.930] [BeforeEach] [k8s.io] InitContainer [NodeConformance]
I0211 22:46:32.930]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43
I0211 22:46:32.930] [It] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
I0211 22:46:32.930]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:704
I0211 22:46:32.930] STEP: creating the pod
I0211 22:46:32.930] Feb 11 22:42:52.146: INFO: PodSpec: initContainers in spec.initContainers
I0211 22:46:32.934] Feb 11 22:43:34.964: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-5d701be6-2e4e-11e9-8d67-42010a8a0053", GenerateName:"", Namespace:"init-container-1960", SelfLink:"/api/v1/namespaces/init-container-1960/pods/pod-init-5d701be6-2e4e-11e9-8d67-42010a8a0053", UID:"5d782317-2e4e-11e9-9b80-42010a8a0053", ResourceVersion:"2672", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63685521772, loc:(*time.Location)(0xa2319e0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"146586248"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume(nil), InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount(nil), VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount(nil), 
VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount(nil), VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"Never", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc000ac70a0), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"Default", NodeSelector:map[string]string(nil), ServiceAccountName:"", DeprecatedServiceAccount:"", AutomountServiceAccountToken:(*bool)(nil), NodeName:"tmp-node-e2e-85027fa4-coreos-beta-1883-1-0-v20180911", HostNetwork:false, HostPID:false, HostIPC:false, 
ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc000c58de0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc000ac7120)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc000ac7160)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc000ac7170), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc000ac7174)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63685521772, loc:(*time.Location)(0xa2319e0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63685521772, loc:(*time.Location)(0xa2319e0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63685521772, loc:(*time.Location)(0xa2319e0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63685521772, loc:(*time.Location)(0xa2319e0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"10.138.0.83", PodIP:"10.100.0.139", StartTime:(*v1.Time)(0xc00099f940), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc000a27490)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc000a27500)}, Ready:false, RestartCount:3, Image:"busybox:1.29", ImageID:"docker-pullable://busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"docker://8d65fd8ac687f5f7394b30a0e6aa63ae0c5a89a7e1808868b17dd9cf4b8271cc"}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc00099f9e0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:""}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc00099fa40), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:""}}, QOSClass:"Guaranteed"}}
I0211 22:46:32.935] [AfterEach] [k8s.io] InitContainer [NodeConformance]
I0211 22:46:32.935]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:156
I0211 22:46:32.935] Feb 11 22:43:34.965: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0211 22:46:32.935] STEP: Destroying namespace "init-container-1960" for this suite.
I0211 22:46:32.935] Feb 11 22:44:00.986: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
I0211 22:46:32.935] Feb 11 22:44:01.101: INFO: namespace init-container-1960 deletion completed in 26.125960528s
I0211 22:46:32.935] 
I0211 22:46:32.935] 
I0211 22:46:32.935] • [SLOW TEST:68.958 seconds]
I0211 22:46:32.935] [k8s.io] InitContainer [NodeConformance]
I0211 22:46:32.936] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
I0211 22:46:32.936]   should not start app containers if init containers fail on a RestartAlways pod [Conformance]
I0211 22:46:32.936]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:704
I0211 22:46:32.936] ------------------------------
I0211 22:46:32.936] S
I0211 22:46:32.936] ------------------------------
I0211 22:46:32.936] [BeforeEach] [sig-storage] Downward API volume
I0211 22:46:32.936]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:155
... skipping 938 lines ...
I0211 22:46:33.045] I0211 22:46:24.630616    1293 server.go:258] Kill server "services"
I0211 22:46:33.045] I0211 22:46:24.630628    1293 server.go:295] Killing process 2041 (services) with -TERM
I0211 22:46:33.045] I0211 22:46:24.768730    1293 server.go:258] Kill server "kubelet"
I0211 22:46:33.045] I0211 22:46:24.777161    1293 services.go:146] Fetching log files...
I0211 22:46:33.045] I0211 22:46:24.777406    1293 services.go:155] Get log file "kern.log" with journalctl command [-k].
I0211 22:46:33.046] I0211 22:46:24.966960    1293 services.go:155] Get log file "cloud-init.log" with journalctl command [-u cloud*].
I0211 22:46:33.046] E0211 22:46:24.971587    1293 services.go:158] failed to get "cloud-init.log" from journald: Failed to add filter for units: No data available
I0211 22:46:33.046] , exit status 1
I0211 22:46:33.046] I0211 22:46:24.971627    1293 services.go:155] Get log file "docker.log" with journalctl command [-u docker].
I0211 22:46:33.046] I0211 22:46:24.982746    1293 services.go:155] Get log file "kubelet.log" with journalctl command [-u kubelet-20190211T223312.service].
I0211 22:46:33.046] I0211 22:46:24.998993    1293 e2e_node_suite_test.go:190] Tests Finished
I0211 22:46:33.047] 
I0211 22:46:33.047] 
I0211 22:46:33.047] Ran 156 of 284 Specs in 776.469 seconds
I0211 22:46:33.047] SUCCESS! -- 156 Passed | 0 Failed | 0 Flaked | 0 Pending | 128 Skipped 
I0211 22:46:33.047] 
I0211 22:46:33.047] Ginkgo ran 1 suite in 12m58.908756429s
I0211 22:46:33.047] Test Suite Passed
I0211 22:46:33.047] 
I0211 22:46:33.048] Success Finished Test Suite on Host tmp-node-e2e-85027fa4-coreos-beta-1883-1-0-v20180911
I0211 22:46:33.048] <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<
... skipping 54 lines ...
I0211 22:49:51.954] Validating docker...
I0211 22:49:51.954] DOCKER_VERSION: 17.03.2-ce
I0211 22:49:51.954] DOCKER_GRAPH_DRIVER: overlay2
I0211 22:49:51.954] PASS
I0211 22:49:51.955] I0211 22:33:30.064773    1306 e2e_node_suite_test.go:149] Pre-pulling images so that they are cached for the tests.
I0211 22:49:51.956] I0211 22:33:30.064807    1306 image_list.go:131] Pre-pulling images with docker [docker.io/library/busybox:1.29 docker.io/library/nginx:1.14-alpine gcr.io/kubernetes-e2e-test-images/entrypoint-tester:1.0 gcr.io/kubernetes-e2e-test-images/hostexec:1.1 gcr.io/kubernetes-e2e-test-images/ipc-utils:1.0 gcr.io/kubernetes-e2e-test-images/liveness:1.0 gcr.io/kubernetes-e2e-test-images/mounttest-user:1.0 gcr.io/kubernetes-e2e-test-images/mounttest:1.0 gcr.io/kubernetes-e2e-test-images/net:1.0 gcr.io/kubernetes-e2e-test-images/netexec:1.1 gcr.io/kubernetes-e2e-test-images/node-perf/npb-ep-amd64:1.0 gcr.io/kubernetes-e2e-test-images/node-perf/npb-is-amd64:1.0 gcr.io/kubernetes-e2e-test-images/node-perf/tf-wide-deep-amd64:1.0 gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0 gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 gcr.io/kubernetes-e2e-test-images/test-webserver:1.0 gcr.io/kubernetes-e2e-test-images/volume/gluster:1.0 gcr.io/kubernetes-e2e-test-images/volume/nfs:1.0 google/cadvisor:latest k8s.gcr.io/busybox@sha256:4bdd623e848417d96127e16037743f0cd8b528c026e9175e22a84f639eca58ff k8s.gcr.io/node-problem-detector:v0.4.1 k8s.gcr.io/nvidia-gpu-device-plugin@sha256:0842734032018be107fa2490c98156992911e3e1f2a21e059ff0105b07dd8e9e k8s.gcr.io/pause:3.1 k8s.gcr.io/stress:v1]
I0211 22:49:51.956] W0211 22:34:32.394810    1306 image_list.go:144] Failed to pull gcr.io/kubernetes-e2e-test-images/entrypoint-tester:1.0 as user "root", retrying in 1s (1 of 5): exit status 1
I0211 22:49:51.956] W0211 22:35:03.773806    1306 image_list.go:144] Failed to pull gcr.io/kubernetes-e2e-test-images/entrypoint-tester:1.0 as user "root", retrying in 1s (2 of 5): exit status 1
I0211 22:49:51.956] W0211 22:35:35.146162    1306 image_list.go:144] Failed to pull gcr.io/kubernetes-e2e-test-images/entrypoint-tester:1.0 as user "root", retrying in 1s (3 of 5): exit status 1
I0211 22:49:51.956] W0211 22:35:51.188324    1306 image_list.go:144] Failed to pull gcr.io/kubernetes-e2e-test-images/entrypoint-tester:1.0 as user "root", retrying in 1s (4 of 5): exit status 1
I0211 22:49:51.957] I0211 22:37:29.683887    1306 kubelet.go:108] Starting kubelet
I0211 22:49:51.957] I0211 22:37:29.683968    1306 feature_gate.go:226] feature gates: &{map[]}
I0211 22:49:51.957] I0211 22:37:29.686137    1306 server.go:102] Starting server "kubelet" with command "/usr/bin/systemd-run --unit=kubelet-20190211T223312.service --slice=runtime.slice --remain-after-exit /tmp/node-e2e-20190211T223312/kubelet --kubeconfig /tmp/node-e2e-20190211T223312/kubeconfig --root-dir /var/lib/kubelet --v 4 --logtostderr --allow-privileged=true --dynamic-config-dir /tmp/node-e2e-20190211T223312/dynamic-kubelet-config --network-plugin=kubenet --cni-bin-dir /tmp/node-e2e-20190211T223312/cni/bin --cni-conf-dir /tmp/node-e2e-20190211T223312/cni/net.d --hostname-override tmp-node-e2e-85027fa4-cos-stable-63-10032-71-0 --container-runtime docker --container-runtime-endpoint unix:///var/run/dockershim.sock --config /tmp/node-e2e-20190211T223312/kubelet-config --experimental-mounter-path=/tmp/node-e2e-20190211T223312/mounter --experimental-kernel-memcg-notification=true --cgroups-per-qos=true --cgroup-root=/"
I0211 22:49:51.958] I0211 22:37:29.686173    1306 util.go:44] Running readiness check for service "kubelet"
I0211 22:49:51.958] I0211 22:37:29.686234    1306 server.go:130] Output file for server "kubelet": /tmp/node-e2e-20190211T223312/results/kubelet.log
I0211 22:49:51.958] I0211 22:37:29.720753    1306 server.go:172] Running health check for service "kubelet"
... skipping 123 lines ...
I0211 22:49:51.978] STEP: verifying the pod is in kubernetes
I0211 22:49:51.978] STEP: updating the pod
I0211 22:49:51.978] Feb 11 22:37:44.024: INFO: Successfully updated pod "pod-update-activedeadlineseconds-a1b33472-2e4d-11e9-8dc8-42010a8a0052"
I0211 22:49:51.978] Feb 11 22:37:44.024: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-a1b33472-2e4d-11e9-8dc8-42010a8a0052" in namespace "pods-345" to be "terminated due to deadline exceeded"
I0211 22:49:51.978] Feb 11 22:37:44.026: INFO: Pod "pod-update-activedeadlineseconds-a1b33472-2e4d-11e9-8dc8-42010a8a0052": Phase="Running", Reason="", readiness=true. Elapsed: 1.960165ms
I0211 22:49:51.978] Feb 11 22:37:46.042: INFO: Pod "pod-update-activedeadlineseconds-a1b33472-2e4d-11e9-8dc8-42010a8a0052": Phase="Running", Reason="", readiness=true. Elapsed: 2.017951165s
I0211 22:49:51.979] Feb 11 22:37:48.044: INFO: Pod "pod-update-activedeadlineseconds-a1b33472-2e4d-11e9-8dc8-42010a8a0052": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 4.019849037s
I0211 22:49:51.979] Feb 11 22:37:48.044: INFO: Pod "pod-update-activedeadlineseconds-a1b33472-2e4d-11e9-8dc8-42010a8a0052" satisfied condition "terminated due to deadline exceeded"
I0211 22:49:51.979] [AfterEach] [k8s.io] Pods
I0211 22:49:51.979]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:156
I0211 22:49:51.979] Feb 11 22:37:48.044: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0211 22:49:51.980] STEP: Destroying namespace "pods-345" for this suite.
I0211 22:49:51.980] Feb 11 22:37:54.052: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
... skipping 811 lines ...
I0211 22:49:52.099] [BeforeEach] [k8s.io] Security Context
I0211 22:49:52.099]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:35
I0211 22:49:52.099] [It] should run the container with readonly rootfs when readOnlyRootFilesystem=true [LinuxOnly] [NodeConformance]
I0211 22:49:52.100]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:138
I0211 22:49:52.100] Feb 11 22:38:50.000: INFO: Waiting up to 5m0s for pod "busybox-readonly-true-cd1a9265-2e4d-11e9-a971-42010a8a0052" in namespace "security-context-test-6484" to be "success or failure"
I0211 22:49:52.100] Feb 11 22:38:50.002: INFO: Pod "busybox-readonly-true-cd1a9265-2e4d-11e9-a971-42010a8a0052": Phase="Pending", Reason="", readiness=false. Elapsed: 1.790402ms
I0211 22:49:52.100] Feb 11 22:38:52.004: INFO: Pod "busybox-readonly-true-cd1a9265-2e4d-11e9-a971-42010a8a0052": Phase="Failed", Reason="", readiness=false. Elapsed: 2.003906286s
I0211 22:49:52.100] Feb 11 22:38:52.004: INFO: Pod "busybox-readonly-true-cd1a9265-2e4d-11e9-a971-42010a8a0052" satisfied condition "success or failure"
I0211 22:49:52.100] [AfterEach] [k8s.io] Security Context
I0211 22:49:52.101]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:156
I0211 22:49:52.101] Feb 11 22:38:52.004: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0211 22:49:52.101] STEP: Destroying namespace "security-context-test-6484" for this suite.
I0211 22:49:52.102] Feb 11 22:38:58.011: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
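The `readOnlyRootFilesystem=true` case above works by running a container that tries to write to its root filesystem: with the read-only rootfs enforced, the write fails and the pod ends in `Phase=Failed`, which the framework's "success or failure" wait treats as satisfied. A sketch of the container spec shape and that wait condition (the command and field values are illustrative assumptions, not the test's actual source):

```python
# Illustrative container spec for the readOnlyRootFilesystem=true case.
# The write attempt is hypothetical; the point is that any write to the
# rootfs exits non-zero when the securityContext flag is set.
container = {
    "name": "busybox-readonly-true",
    "image": "docker.io/library/busybox:1.29",
    "command": ["sh", "-c", "echo hi > /should-fail"],  # hypothetical
    "securityContext": {"readOnlyRootFilesystem": True},
}

def wait_satisfied(phase):
    # The framework's "success or failure" condition accepts either
    # terminal phase -- which is why the Failed pod above still
    # "satisfied condition" in the log.
    return phase in ("Succeeded", "Failed")
```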
... skipping 551 lines ...
I0211 22:49:52.171] STEP: Creating a kubernetes client
I0211 22:49:52.171] STEP: Building a namespace api object, basename container-runtime
I0211 22:49:52.171] Feb 11 22:39:31.320: INFO: Skipping waiting for service account
I0211 22:49:52.171] [It] should report termination message from log output if TerminationMessagePolicy FallbackToLogOnError is set [NodeConformance] [LinuxOnly]
I0211 22:49:52.171]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:205
I0211 22:49:52.171] STEP: create the container
I0211 22:49:52.172] STEP: wait for the container to reach Failed
I0211 22:49:52.172] STEP: get the container status
I0211 22:49:52.172] STEP: the container should be terminated
I0211 22:49:52.172] STEP: the termination message should be set
I0211 22:49:52.172] STEP: delete the container
I0211 22:49:52.172] [AfterEach] [k8s.io] Container Runtime
I0211 22:49:52.172]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:156
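The `FallbackToLogOnError` steps above check a kubelet behavior: when a container fails and its termination-message file is empty, the message is backfilled from the tail of the container log. A hypothetical helper sketching that rule (this is not the real kubelet code, just the semantics the test asserts):

```python
def termination_message(policy, message_file, log_tail, exit_code):
    """Sketch of kubelet termination-message selection.

    message_file: contents of /dev/termination-log (may be empty)
    log_tail:     tail of the container's log output
    """
    if message_file:
        # An explicit termination message always wins.
        return message_file
    if policy == "FallbackToLogOnError" and exit_code != 0:
        # Fallback case under test: failed container, empty file.
        return log_tail
    return ""
```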
... skipping 2515 lines ...
I0211 22:49:52.467]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:155
I0211 22:49:52.467] STEP: Creating a kubernetes client
I0211 22:49:52.467] STEP: Building a namespace api object, basename init-container
I0211 22:49:52.467] Feb 11 22:42:44.692: INFO: Skipping waiting for service account
I0211 22:49:52.467] [BeforeEach] [k8s.io] InitContainer [NodeConformance]
I0211 22:49:52.467]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43
I0211 22:49:52.468] [It] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
I0211 22:49:52.468]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:704
I0211 22:49:52.468] STEP: creating the pod
I0211 22:49:52.468] Feb 11 22:42:44.692: INFO: PodSpec: initContainers in spec.initContainers
I0211 22:49:52.472] Feb 11 22:43:23.630: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-58feae0c-2e4e-11e9-a424-42010a8a0052", GenerateName:"", Namespace:"init-container-4625", SelfLink:"/api/v1/namespaces/init-container-4625/pods/pod-init-58feae0c-2e4e-11e9-a424-42010a8a0052", UID:"59069aa1-2e4e-11e9-a114-42010a8a0052", ResourceVersion:"2580", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63685521764, loc:(*time.Location)(0xa2319e0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"692331797"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume(nil), InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount(nil), VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount(nil), 
VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount(nil), VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"Never", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc000eed3e0), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"Default", NodeSelector:map[string]string(nil), ServiceAccountName:"", DeprecatedServiceAccount:"", AutomountServiceAccountToken:(*bool)(nil), NodeName:"tmp-node-e2e-85027fa4-cos-stable-63-10032-71-0", HostNetwork:false, HostPID:false, HostIPC:false, 
ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc00088d080), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc000eed450)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc000eed470)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc000eed480), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc000eed484)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63685521764, loc:(*time.Location)(0xa2319e0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63685521764, loc:(*time.Location)(0xa2319e0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63685521764, loc:(*time.Location)(0xa2319e0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63685521764, loc:(*time.Location)(0xa2319e0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"10.138.0.82", PodIP:"10.100.0.131", StartTime:(*v1.Time)(0xc000849e80), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc001509dc0)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc001509e30)}, Ready:false, RestartCount:3, Image:"busybox:1.29", ImageID:"docker-pullable://busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"docker://9a6ed58317345f5f7e47cd4c0cf7b72b3f4a34858a5f2a367ac17eecb4c1ca74"}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc000849ee0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:""}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc000849f20), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:""}}, QOSClass:"Guaranteed"}}
I0211 22:49:52.472] [AfterEach] [k8s.io] InitContainer [NodeConformance]
I0211 22:49:52.472]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:156
I0211 22:49:52.472] Feb 11 22:43:23.633: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0211 22:49:52.473] STEP: Destroying namespace "init-container-4625" for this suite.
I0211 22:49:52.473] Feb 11 22:43:45.655: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
I0211 22:49:52.473] Feb 11 22:43:45.702: INFO: namespace init-container-4625 deletion completed in 22.06098348s
I0211 22:49:52.473] 
I0211 22:49:52.473] 
I0211 22:49:52.473] • [SLOW TEST:61.014 seconds]
I0211 22:49:52.473] [k8s.io] InitContainer [NodeConformance]
I0211 22:49:52.473] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
I0211 22:49:52.473]   should not start app containers if init containers fail on a RestartAlways pod [Conformance]
I0211 22:49:52.473]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:704
I0211 22:49:52.473] ------------------------------
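The pod dump above shows the RestartAlways semantics under test: init containers run in order, `init1` (`/bin/false`) keeps failing and being restarted (`RestartCount:3` in the dump), so `init2` and the app container `run1` stay `Waiting` and the pod stays `Pending`. A hypothetical simulation of that ordering rule (not the kubelet's actual code):

```python
def init_progress(init_results):
    """Simulate sequential init-container execution under
    restartPolicy=Always: a failing init container is retried
    forever, so nothing after it ever starts.

    init_results: list of (name, succeeded) pairs in spec order.
    """
    started = []
    for name, ok in init_results:
        started.append(name)
        if not ok:
            # Stuck retrying this init container; app never starts.
            return {"init_started": started, "app_started": False,
                    "phase": "Pending"}
    return {"init_started": started, "app_started": True,
            "phase": "Running"}
```

With `[("init1", False), ("init2", True)]` this reproduces the dumped state: only `init1` has run, the app container has not started.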
I0211 22:49:52.474] [BeforeEach] [sig-storage] EmptyDir volumes
I0211 22:49:52.474]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:155
I0211 22:49:52.474] STEP: Creating a kubernetes client
I0211 22:49:52.474] STEP: Building a namespace api object, basename emptydir
... skipping 193 lines ...
I0211 22:49:52.494]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:155
I0211 22:49:52.494] STEP: Creating a kubernetes client
I0211 22:49:52.494] STEP: Building a namespace api object, basename init-container
I0211 22:49:52.494] Feb 11 22:43:55.390: INFO: Skipping waiting for service account
I0211 22:49:52.494] [BeforeEach] [k8s.io] InitContainer [NodeConformance]
I0211 22:49:52.494]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43
I0211 22:49:52.495] [It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
I0211 22:49:52.495]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:704
I0211 22:49:52.495] STEP: creating the pod
I0211 22:49:52.495] Feb 11 22:43:55.390: INFO: PodSpec: initContainers in spec.initContainers
I0211 22:49:52.496] [AfterEach] [k8s.io] InitContainer [NodeConformance]
I0211 22:49:52.496]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:156
I0211 22:49:52.497] Feb 11 22:43:58.259: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 2 lines ...
I0211 22:49:52.497] Feb 11 22:44:04.457: INFO: namespace init-container-4023 deletion completed in 6.189431619s
I0211 22:49:52.497] 
I0211 22:49:52.497] 
I0211 22:49:52.497] • [SLOW TEST:9.070 seconds]
I0211 22:49:52.497] [k8s.io] InitContainer [NodeConformance]
I0211 22:49:52.497] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
I0211 22:49:52.498]   should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
I0211 22:49:52.498]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:704
I0211 22:49:52.498] ------------------------------
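The RestartNever variant above differs from the RestartAlways case in the terminal pod phase: with `restartPolicy: Never` a failed init container fails the whole pod instead of being retried, which is why this spec completes in seconds rather than looping. A one-line sketch of that distinction (hypothetical helper, not kubelet source):

```python
def pod_phase_after_init_failure(restart_policy):
    # Never: the first init-container failure is terminal for the pod.
    # Always / OnFailure: the kubelet retries the init container and
    # the pod remains Pending (as in the RestartAlways case above).
    return "Failed" if restart_policy == "Never" else "Pending"
```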
I0211 22:49:52.498] [BeforeEach] [k8s.io] Variable Expansion
I0211 22:49:52.498]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:155
I0211 22:49:52.498] STEP: Creating a kubernetes client
I0211 22:49:52.498] STEP: Building a namespace api object, basename var-expansion
... skipping 762 lines ...
I0211 22:49:52.580] Feb 11 22:40:31.476: INFO: Skipping waiting for service account
I0211 22:49:52.581] [It] should be able to pull from private registry with credential provider [NodeConformance]
I0211 22:49:52.581]   _output/local/go/src/k8s.io/kubernetes/test/e2e_node/runtime_conformance_test.go:69
I0211 22:49:52.581] STEP: create the container
I0211 22:49:52.581] STEP: check the container status
I0211 22:49:52.581] STEP: delete the container
I0211 22:49:52.581] Feb 11 22:45:31.875: INFO: No.1 attempt failed: expected container state: Running, got: "Waiting", retrying...
I0211 22:49:52.581] STEP: create the container
I0211 22:49:52.581] STEP: check the container status
I0211 22:49:52.581] STEP: delete the container
I0211 22:49:52.581] [AfterEach] [k8s.io] Container Runtime Conformance Test
I0211 22:49:52.581]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:156
I0211 22:49:52.582] Feb 11 22:45:34.907: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 75 lines ...
I0211 22:49:52.589] Feb 11 22:44:34.690: INFO: Skipping waiting for service account
I0211 22:49:52.589] [It] should not be able to pull from private registry without secret [NodeConformance]
I0211 22:49:52.589]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:302
I0211 22:49:52.590] STEP: create the container
I0211 22:49:52.590] STEP: check the container status
I0211 22:49:52.590] STEP: delete the container
I0211 22:49:52.590] Feb 11 22:49:35.387: INFO: No.1 attempt failed: expected container state: Waiting, got: "Running", retrying...
I0211 22:49:52.590] STEP: create the container
I0211 22:49:52.590] STEP: check the container status
I0211 22:49:52.590] STEP: delete the container
I0211 22:49:52.590] [AfterEach] [k8s.io] Container Runtime
I0211 22:49:52.590]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:156
I0211 22:49:52.591] Feb 11 22:49:37.416: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 12 lines ...
I0211 22:49:52.592]       should not be able to pull from private registry without secret [NodeConformance]
I0211 22:49:52.592]       /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:302
I0211 22:49:52.592] ------------------------------
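The two private-registry cases above assert opposite container states: with working credentials the container should reach `Running`; without a secret the pull should fail and leave it `Waiting` (e.g. in an image-pull error state). The `No.1 attempt failed ... retrying` lines are the suite's `--flakeAttempts=2` retry of a state that had not settled on the first check. A sketch of the expectation matrix (hypothetical helper for illustration):

```python
def expected_state(private_image, has_credentials):
    """Expected container state after an image pull attempt."""
    if private_image and not has_credentials:
        # Pull is rejected; container stays Waiting
        # (ErrImagePull / ImagePullBackOff territory).
        return "Waiting"
    return "Running"
```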
I0211 22:49:52.592] I0211 22:49:43.534428    1306 e2e_node_suite_test.go:185] Stopping node services...
I0211 22:49:52.592] I0211 22:49:43.534455    1306 server.go:258] Kill server "services"
I0211 22:49:52.592] I0211 22:49:43.534469    1306 server.go:295] Killing process 1838 (services) with -TERM
I0211 22:49:52.592] E0211 22:49:43.706670    1306 services.go:89] Failed to stop services: error stopping "services": waitid: no child processes
I0211 22:49:52.593] I0211 22:49:43.706710    1306 server.go:258] Kill server "kubelet"
I0211 22:49:52.593] I0211 22:49:43.716557    1306 services.go:146] Fetching log files...
I0211 22:49:52.593] I0211 22:49:43.716648    1306 services.go:155] Get log file "kern.log" with journalctl command [-k].
I0211 22:49:52.593] I0211 22:49:43.844933    1306 services.go:155] Get log file "cloud-init.log" with journalctl command [-u cloud*].
I0211 22:49:52.593] I0211 22:49:44.443571    1306 services.go:155] Get log file "docker.log" with journalctl command [-u docker].
I0211 22:49:52.593] I0211 22:49:44.476731    1306 services.go:155] Get log file "kubelet.log" with journalctl command [-u kubelet-20190211T223312.service].
I0211 22:49:52.593] I0211 22:49:45.481413    1306 e2e_node_suite_test.go:190] Tests Finished
I0211 22:49:52.593] 
I0211 22:49:52.593] 
I0211 22:49:52.593] Ran 156 of 286 Specs in 975.890 seconds
I0211 22:49:52.594] SUCCESS! -- 156 Passed | 0 Failed | 0 Flaked | 0 Pending | 130 Skipped 
I0211 22:49:52.594] 
I0211 22:49:52.594] Ginkgo ran 1 suite in 16m20.542829231s
I0211 22:49:52.594] Test Suite Passed
I0211 22:49:52.594] 
I0211 22:49:52.594] Success Finished Test Suite on Host tmp-node-e2e-85027fa4-cos-stable-63-10032-71-0
I0211 22:49:52.594] <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<
... skipping 5 lines ...
W0211 22:49:52.724] 2019/02/11 22:49:52 process.go:155: Step 'go run /go/src/k8s.io/kubernetes/test/e2e_node/runner/remote/run_remote.go --cleanup --logtostderr --vmodule=*=4 --ssh-env=gce --results-dir=/workspace/_artifacts --project=k8s-jkns-pr-node-e2e --zone=us-west1-b --ssh-user=prow --ssh-key=/workspace/.ssh/google_compute_engine --ginkgo-flags=--nodes=8 --focus="\[NodeConformance\]" --skip="\[Flaky\]|\[Slow\]|\[Serial\]" --flakeAttempts=2 --test_args=--kubelet-flags="--cgroups-per-qos=true --cgroup-root=/" --test-timeout=1h5m0s --image-config-file=/workspace/test-infra/jobs/e2e_node/image-config.yaml' finished in 20m40.236759143s
W0211 22:49:52.724] 2019/02/11 22:49:52 node.go:42: Noop - Node DumpClusterLogs() - /workspace/_artifacts: 
W0211 22:49:52.724] 2019/02/11 22:49:52 node.go:52: Noop - Node Down()
W0211 22:49:52.725] 2019/02/11 22:49:52 process.go:96: Saved XML output to /workspace/_artifacts/junit_runner.xml.
W0211 22:49:52.725] 2019/02/11 22:49:52 process.go:153: Running: bash -c . hack/lib/version.sh && KUBE_ROOT=. kube::version::get_version_vars && echo "${KUBE_GIT_VERSION-}"
W0211 22:49:53.139] 2019/02/11 22:49:53 process.go:155: Step 'bash -c . hack/lib/version.sh && KUBE_ROOT=. kube::version::get_version_vars && echo "${KUBE_GIT_VERSION-}"' finished in 414.51315ms
W0211 22:49:53.140] 2019/02/11 22:49:53 main.go:297: Something went wrong: encountered 1 errors: [error during go run /go/src/k8s.io/kubernetes/test/e2e_node/runner/remote/run_remote.go --cleanup --logtostderr --vmodule=*=4 --ssh-env=gce --results-dir=/workspace/_artifacts --project=k8s-jkns-pr-node-e2e --zone=us-west1-b --ssh-user=prow --ssh-key=/workspace/.ssh/google_compute_engine --ginkgo-flags=--nodes=8 --focus="\[NodeConformance\]" --skip="\[Flaky\]|\[Slow\]|\[Serial\]" --flakeAttempts=2 --test_args=--kubelet-flags="--cgroups-per-qos=true --cgroup-root=/" --test-timeout=1h5m0s --image-config-file=/workspace/test-infra/jobs/e2e_node/image-config.yaml: exit status 1]
W0211 22:49:53.143] Traceback (most recent call last):
W0211 22:49:53.144]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_e2e.py", line 764, in <module>
W0211 22:49:53.144]     main(parse_args())
W0211 22:49:53.144]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_e2e.py", line 615, in main
W0211 22:49:53.144]     mode.start(runner_args)
W0211 22:49:53.144]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_e2e.py", line 262, in start
W0211 22:49:53.145]     check_env(env, self.command, *args)
W0211 22:49:53.145]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_e2e.py", line 111, in check_env
W0211 22:49:53.145]     subprocess.check_call(cmd, env=env)
W0211 22:49:53.145]   File "/usr/lib/python2.7/subprocess.py", line 186, in check_call
W0211 22:49:53.145]     raise CalledProcessError(retcode, cmd)
W0211 22:49:53.146] subprocess.CalledProcessError: Command '('kubetest', '--dump=/workspace/_artifacts', '--gcp-service-account=/etc/service-account/service-account.json', '--up', '--down', '--test', '--deployment=node', '--provider=gce', '--cluster=bootstrap-e2e', '--gcp-network=bootstrap-e2e', '--gcp-project=k8s-jkns-pr-node-e2e', '--gcp-zone=us-west1-b', '--node-test-args=--kubelet-flags="--cgroups-per-qos=true --cgroup-root=/"', '--node-tests=true', '--test_args=--nodes=8 --focus="\\[NodeConformance\\]" --skip="\\[Flaky\\]|\\[Slow\\]|\\[Serial\\]" --flakeAttempts=2', '--timeout=65m', '--node-args=--image-config-file=/workspace/test-infra/jobs/e2e_node/image-config.yaml')' returned non-zero exit status 1
E0211 22:49:53.156] Command failed
I0211 22:49:53.156] process 492 exited with code 1 after 20.7m
E0211 22:49:53.157] FAIL: pull-kubernetes-node-e2e
I0211 22:49:53.158] Call:  gcloud auth activate-service-account --key-file=/etc/service-account/service-account.json
W0211 22:49:53.747] Activated service account credentials for: [pr-kubekins@kubernetes-jenkins-pull.iam.gserviceaccount.com]
I0211 22:49:53.808] process 44975 exited with code 0 after 0.0m
I0211 22:49:53.809] Call:  gcloud config get-value account
I0211 22:49:54.155] process 44987 exited with code 0 after 0.0m
I0211 22:49:54.156] Will upload results to gs://kubernetes-jenkins/pr-logs using pr-kubekins@kubernetes-jenkins-pull.iam.gserviceaccount.com
I0211 22:49:54.156] Upload result and artifacts...
I0211 22:49:54.156] Gubernator results at https://gubernator.k8s.io/build/kubernetes-jenkins/pr-logs/pull/73930/pull-kubernetes-node-e2e/119386
I0211 22:49:54.156] Call:  gsutil ls gs://kubernetes-jenkins/pr-logs/pull/73930/pull-kubernetes-node-e2e/119386/artifacts
W0211 22:49:55.242] CommandException: One or more URLs matched no objects.
E0211 22:49:55.368] Command failed
I0211 22:49:55.368] process 44999 exited with code 1 after 0.0m
W0211 22:49:55.368] Remote dir gs://kubernetes-jenkins/pr-logs/pull/73930/pull-kubernetes-node-e2e/119386/artifacts not exist yet
I0211 22:49:55.368] Call:  gsutil -m -q -o GSUtil:use_magicfile=True cp -r -c -z log,txt,xml /workspace/_artifacts gs://kubernetes-jenkins/pr-logs/pull/73930/pull-kubernetes-node-e2e/119386/artifacts
I0211 22:49:58.188] process 45141 exited with code 0 after 0.0m
I0211 22:49:58.189] Call:  git rev-parse HEAD
I0211 22:49:58.193] process 45784 exited with code 0 after 0.0m
... skipping 21 lines ...