PR: Liujingfang1: add kustomize as a subcommand in kubectl
Result: FAILURE
Tests: 1 failed / 475 succeeded
Started: 2019-02-11 22:28
Elapsed: 22m59s
Builder: gke-prow-containerd-pool-99179761-mhfv
Refs: master:805a9e70, 73033:8bda5322
pod: 3822ca1c-2e4c-11e9-bb21-0a580a6c061e
infra-commit: 89e68fa6f
job-version: v1.14.0-alpha.2.541+988dbd57872f37
repo: k8s.io/kubernetes
repo-commit: 988dbd57872f37dab6a77436f79ff25538907664
repos: k8s.io/kubernetes: master:805a9e703698d0a8a86f405f861f9e3fd91b29c6, 73033:8bda5322125f3544505ff346f0b288c266f1d3e2
revision: v1.14.0-alpha.2.541+988dbd57872f37

Test Failures


Node Tests 21m46s

error during go run /go/src/k8s.io/kubernetes/test/e2e_node/runner/remote/run_remote.go --cleanup --logtostderr --vmodule=*=4 --ssh-env=gce --results-dir=/workspace/_artifacts --project=k8s-jkns-pr-node-e2e --zone=us-west1-b --ssh-user=prow --ssh-key=/workspace/.ssh/google_compute_engine --ginkgo-flags=--nodes=8 --focus="\[NodeConformance\]" --skip="\[Flaky\]|\[Slow\]|\[Serial\]" --flakeAttempts=2 --test_args=--kubelet-flags="--cgroups-per-qos=true --cgroup-root=/" --test-timeout=1h5m0s --image-config-file=/workspace/test-infra/jobs/e2e_node/image-config.yaml: exit status 1
				from junit_runner.xml



475 passed tests and 388 skipped tests omitted.

Error lines from build-log.txt

... skipping 106 lines ...
I0211 22:29:27.115]  vendor/sigs.k8s.io/kustomize/pkg/ifc/ifc.go        |   73 +
I0211 22:29:27.115]  .../kustomize/pkg/ifc/transformer/BUILD            |   27 +
I0211 22:29:27.115]  .../kustomize/pkg/ifc/transformer/factory.go       |   29 +
I0211 22:29:27.115]  vendor/sigs.k8s.io/kustomize/pkg/image/BUILD       |   26 +
I0211 22:29:27.115]  .../kustomize/pkg/image/deprecatedimage.go         |   32 +
I0211 22:29:27.115]  vendor/sigs.k8s.io/kustomize/pkg/image/image.go    |   36 +
I0211 22:29:27.115]  .../sigs.k8s.io/kustomize/pkg/internal/error/BUILD |   30 +
I0211 22:29:27.115]  .../kustomize/pkg/internal/error/configmaperror.go |   30 +
I0211 22:29:27.115]  .../pkg/internal/error/kustomizationerror.go       |   61 +
I0211 22:29:27.115]  .../kustomize/pkg/internal/error/patcherror.go     |   32 +
I0211 22:29:27.115]  .../kustomize/pkg/internal/error/resourceerror.go  |   30 +
I0211 22:29:27.116]  .../kustomize/pkg/internal/error/secreterror.go    |   30 +
I0211 22:29:27.116]  .../pkg/internal/error/yamlformaterror.go          |   48 +
I0211 22:29:27.116]  vendor/sigs.k8s.io/kustomize/pkg/loader/BUILD      |   31 +
I0211 22:29:27.116]  .../sigs.k8s.io/kustomize/pkg/loader/fileloader.go |  312 ++
I0211 22:29:27.116]  vendor/sigs.k8s.io/kustomize/pkg/loader/loader.go  |   39 +
I0211 22:29:27.116]  vendor/sigs.k8s.io/kustomize/pkg/patch/BUILD       |   30 +
I0211 22:29:27.116]  vendor/sigs.k8s.io/kustomize/pkg/patch/json6902.go |   40 +
I0211 22:29:27.117]  .../kustomize/pkg/patch/strategicmerge.go          |   40 +
... skipping 108 lines ...
I0211 22:29:27.129]  create mode 100644 vendor/sigs.k8s.io/kustomize/pkg/ifc/ifc.go
I0211 22:29:27.129]  create mode 100644 vendor/sigs.k8s.io/kustomize/pkg/ifc/transformer/BUILD
I0211 22:29:27.129]  create mode 100644 vendor/sigs.k8s.io/kustomize/pkg/ifc/transformer/factory.go
I0211 22:29:27.129]  create mode 100644 vendor/sigs.k8s.io/kustomize/pkg/image/BUILD
I0211 22:29:27.129]  create mode 100644 vendor/sigs.k8s.io/kustomize/pkg/image/deprecatedimage.go
I0211 22:29:27.130]  create mode 100644 vendor/sigs.k8s.io/kustomize/pkg/image/image.go
I0211 22:29:27.130]  create mode 100644 vendor/sigs.k8s.io/kustomize/pkg/internal/error/BUILD
I0211 22:29:27.130]  create mode 100644 vendor/sigs.k8s.io/kustomize/pkg/internal/error/configmaperror.go
I0211 22:29:27.130]  create mode 100644 vendor/sigs.k8s.io/kustomize/pkg/internal/error/kustomizationerror.go
I0211 22:29:27.130]  create mode 100644 vendor/sigs.k8s.io/kustomize/pkg/internal/error/patcherror.go
I0211 22:29:27.130]  create mode 100644 vendor/sigs.k8s.io/kustomize/pkg/internal/error/resourceerror.go
I0211 22:29:27.130]  create mode 100644 vendor/sigs.k8s.io/kustomize/pkg/internal/error/secreterror.go
I0211 22:29:27.130]  create mode 100644 vendor/sigs.k8s.io/kustomize/pkg/internal/error/yamlformaterror.go
I0211 22:29:27.130]  create mode 100644 vendor/sigs.k8s.io/kustomize/pkg/loader/BUILD
I0211 22:29:27.131]  create mode 100644 vendor/sigs.k8s.io/kustomize/pkg/loader/fileloader.go
I0211 22:29:27.131]  create mode 100644 vendor/sigs.k8s.io/kustomize/pkg/loader/loader.go
I0211 22:29:27.131]  create mode 100644 vendor/sigs.k8s.io/kustomize/pkg/patch/BUILD
I0211 22:29:27.131]  create mode 100644 vendor/sigs.k8s.io/kustomize/pkg/patch/json6902.go
I0211 22:29:27.131]  create mode 100644 vendor/sigs.k8s.io/kustomize/pkg/patch/strategicmerge.go
... skipping 318 lines ...
W0211 22:33:36.457] I0211 22:33:36.457604    4549 utils.go:117] Killing any existing node processes on "tmp-node-e2e-91a835cb-ubuntu-gke-1804-d1703-0-v20181113"
W0211 22:33:36.863] I0211 22:33:36.863215    4549 node_e2e.go:108] GCI/COS node and GCI/COS mounter both detected, modifying --experimental-mounter-path accordingly
W0211 22:33:36.863] I0211 22:33:36.863264    4549 node_e2e.go:164] Starting tests on "tmp-node-e2e-91a835cb-cos-stable-63-10032-71-0"
W0211 22:33:36.918] I0211 22:33:36.918376    4549 node_e2e.go:108] GCI/COS node and GCI/COS mounter both detected, modifying --experimental-mounter-path accordingly
W0211 22:33:36.919] I0211 22:33:36.918434    4549 node_e2e.go:164] Starting tests on "tmp-node-e2e-91a835cb-cos-stable-60-9592-84-0"
W0211 22:33:37.876] I0211 22:33:37.876136    4549 node_e2e.go:164] Starting tests on "tmp-node-e2e-91a835cb-ubuntu-gke-1804-d1703-0-v20181113"
W0211 22:36:04.396] I0211 22:36:04.396111    4549 remote.go:197] Test failed unexpectedly. Attempting to retrieve system logs (only works for nodes with journald)
W0211 22:36:05.096] I0211 22:36:05.095736    4549 remote.go:202] Got the system logs from journald; copying it back...
W0211 22:36:06.058] I0211 22:36:06.058605    4549 remote.go:122] Copying test artifacts from "tmp-node-e2e-91a835cb-cos-stable-60-9592-84-0"
W0211 22:36:07.572] I0211 22:36:07.572276    4549 run_remote.go:717] Deleting instance "tmp-node-e2e-91a835cb-cos-stable-60-9592-84-0"
I0211 22:36:08.241] 
I0211 22:36:08.241] >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
I0211 22:36:08.242] >                              START TEST                                >
... skipping 46 lines ...
I0211 22:36:08.246] Validating docker...
I0211 22:36:08.246] DOCKER_VERSION: 1.13.1
I0211 22:36:08.246] DOCKER_GRAPH_DRIVER: overlay2
I0211 22:36:08.246] PASS
I0211 22:36:08.246] I0211 22:33:41.647609    1283 e2e_node_suite_test.go:149] Pre-pulling images so that they are cached for the tests.
I0211 22:36:08.247] I0211 22:33:41.647633    1283 image_list.go:131] Pre-pulling images with docker [docker.io/library/busybox:1.29 docker.io/library/nginx:1.14-alpine gcr.io/kubernetes-e2e-test-images/entrypoint-tester:1.0 gcr.io/kubernetes-e2e-test-images/hostexec:1.1 gcr.io/kubernetes-e2e-test-images/ipc-utils:1.0 gcr.io/kubernetes-e2e-test-images/liveness:1.0 gcr.io/kubernetes-e2e-test-images/mounttest-user:1.0 gcr.io/kubernetes-e2e-test-images/mounttest:1.0 gcr.io/kubernetes-e2e-test-images/net:1.0 gcr.io/kubernetes-e2e-test-images/netexec:1.1 gcr.io/kubernetes-e2e-test-images/node-perf/npb-ep-amd64:1.0 gcr.io/kubernetes-e2e-test-images/node-perf/npb-is-amd64:1.0 gcr.io/kubernetes-e2e-test-images/node-perf/tf-wide-deep-amd64:1.0 gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0 gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 gcr.io/kubernetes-e2e-test-images/test-webserver:1.0 gcr.io/kubernetes-e2e-test-images/volume/gluster:1.0 gcr.io/kubernetes-e2e-test-images/volume/nfs:1.0 google/cadvisor:latest k8s.gcr.io/busybox@sha256:4bdd623e848417d96127e16037743f0cd8b528c026e9175e22a84f639eca58ff k8s.gcr.io/node-problem-detector:v0.4.1 k8s.gcr.io/nvidia-gpu-device-plugin@sha256:0842734032018be107fa2490c98156992911e3e1f2a21e059ff0105b07dd8e9e k8s.gcr.io/pause:3.1 k8s.gcr.io/stress:v1]
I0211 22:36:08.247] W0211 22:34:29.752686    1283 image_list.go:144] Failed to pull gcr.io/kubernetes-e2e-test-images/entrypoint-tester:1.0 as user "root", retrying in 1s (1 of 5): exit status 1
I0211 22:36:08.247] W0211 22:34:45.786555    1283 image_list.go:144] Failed to pull gcr.io/kubernetes-e2e-test-images/entrypoint-tester:1.0 as user "root", retrying in 1s (2 of 5): exit status 1
I0211 22:36:08.247] W0211 22:35:17.084391    1283 image_list.go:144] Failed to pull gcr.io/kubernetes-e2e-test-images/entrypoint-tester:1.0 as user "root", retrying in 1s (3 of 5): exit status 1
I0211 22:36:08.248] W0211 22:35:48.270784    1283 image_list.go:144] Failed to pull gcr.io/kubernetes-e2e-test-images/entrypoint-tester:1.0 as user "root", retrying in 1s (4 of 5): exit status 1
I0211 22:36:08.248] W0211 22:36:04.303351    1283 image_list.go:144] Failed to pull gcr.io/kubernetes-e2e-test-images/entrypoint-tester:1.0 as user "root", retrying in 1s (5 of 5): exit status 1
I0211 22:36:08.248] W0211 22:36:04.303404    1283 image_list.go:148] Could not pre-pull image gcr.io/kubernetes-e2e-test-images/entrypoint-tester:1.0 exit status 1 output: Error response from daemon: Get https://gcr.io/v2/kubernetes-e2e-test-images/entrypoint-tester/manifests/1.0: Get https://gcr.io/v2/token?scope=repository%3Akubernetes-e2e-test-images%2Fentrypoint-tester%3Apull&service=gcr.io: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
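The pre-pull loop above retries each image five times with a 1s pause before giving up, then reports the last error. A minimal Go sketch of that retry pattern (the `pullImage` stand-in and its behavior are assumptions for illustration, not the real `image_list.go` code):

```go
package main

import (
	"fmt"
	"time"
)

// pullImage is a stand-in for the real docker pull; here it always fails,
// mirroring the gcr.io timeout seen in the log above.
func pullImage(image string) error {
	return fmt.Errorf("exit status 1")
}

// pullWithRetry attempts a pull up to maxAttempts times, sleeping between
// attempts, and returns the last error if every attempt fails.
func pullWithRetry(image string, maxAttempts int, backoff time.Duration) error {
	var err error
	for i := 1; i <= maxAttempts; i++ {
		if err = pullImage(image); err == nil {
			return nil
		}
		fmt.Printf("Failed to pull %s, retrying in %v (%d of %d): %v\n",
			image, backoff, i, maxAttempts, err)
		time.Sleep(backoff)
	}
	return fmt.Errorf("could not pre-pull image %s: %v", image, err)
}

func main() {
	err := pullWithRetry("gcr.io/kubernetes-e2e-test-images/entrypoint-tester:1.0",
		5, time.Second)
	fmt.Println(err)
}
```

With a fixed 1s backoff and a registry that times out on every attempt, the loop exhausts its budget quickly, which is why the suite moves on and fails `BeforeSuite` rather than hanging.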
I0211 22:36:08.248] 
I0211 22:36:08.248] 
I0211 22:36:08.248] Failure [143.030 seconds]
I0211 22:36:08.248] [BeforeSuite] BeforeSuite 
I0211 22:36:08.249] _output/local/go/src/k8s.io/kubernetes/test/e2e_node/e2e_node_suite_test.go:142
I0211 22:36:08.249] 
I0211 22:36:08.249]   Expected error:
I0211 22:36:08.249]       <*exec.ExitError | 0xc0009bc140>: {
I0211 22:36:08.249]           ProcessState: {
I0211 22:36:08.249]               pid: 1435,
I0211 22:36:08.249]               status: 256,
I0211 22:36:08.249]               rusage: {
I0211 22:36:08.249]                   Utime: {Sec: 0, Usec: 7000},
... skipping 22 lines ...
I0211 22:36:08.251]   _output/local/go/src/k8s.io/kubernetes/test/e2e_node/e2e_node_suite_test.go:151
I0211 22:36:08.251] ------------------------------
I0211 22:36:08.251] Failure [143.038 seconds]
I0211 22:36:08.251] [BeforeSuite] BeforeSuite 
I0211 22:36:08.251] _output/local/go/src/k8s.io/kubernetes/test/e2e_node/e2e_node_suite_test.go:142
I0211 22:36:08.251] 
I0211 22:36:08.251]   BeforeSuite on Node 1 failed
I0211 22:36:08.251] 
I0211 22:36:08.252]   _output/local/go/src/k8s.io/kubernetes/test/e2e_node/e2e_node_suite_test.go:142
I0211 22:36:08.252] ------------------------------
I0211 22:36:08.252] Failure [143.022 seconds]
I0211 22:36:08.252] [BeforeSuite] BeforeSuite 
I0211 22:36:08.252] _output/local/go/src/k8s.io/kubernetes/test/e2e_node/e2e_node_suite_test.go:142
I0211 22:36:08.252] 
I0211 22:36:08.252]   BeforeSuite on Node 1 failed
I0211 22:36:08.252] 
I0211 22:36:08.252]   _output/local/go/src/k8s.io/kubernetes/test/e2e_node/e2e_node_suite_test.go:142
I0211 22:36:08.252] ------------------------------
I0211 22:36:08.252] Failure [143.043 seconds]
I0211 22:36:08.252] [BeforeSuite] BeforeSuite 
I0211 22:36:08.253] _output/local/go/src/k8s.io/kubernetes/test/e2e_node/e2e_node_suite_test.go:142
I0211 22:36:08.253] 
I0211 22:36:08.253]   BeforeSuite on Node 1 failed
I0211 22:36:08.253] 
I0211 22:36:08.253]   _output/local/go/src/k8s.io/kubernetes/test/e2e_node/e2e_node_suite_test.go:142
I0211 22:36:08.253] ------------------------------
I0211 22:36:08.253] Failure [143.082 seconds]
I0211 22:36:08.253] [BeforeSuite] BeforeSuite 
I0211 22:36:08.253] _output/local/go/src/k8s.io/kubernetes/test/e2e_node/e2e_node_suite_test.go:142
I0211 22:36:08.253] 
I0211 22:36:08.253]   BeforeSuite on Node 1 failed
I0211 22:36:08.253] 
I0211 22:36:08.253]   _output/local/go/src/k8s.io/kubernetes/test/e2e_node/e2e_node_suite_test.go:142
I0211 22:36:08.254] ------------------------------
I0211 22:36:08.254] Failure [143.076 seconds]
I0211 22:36:08.254] [BeforeSuite] BeforeSuite 
I0211 22:36:08.254] _output/local/go/src/k8s.io/kubernetes/test/e2e_node/e2e_node_suite_test.go:142
I0211 22:36:08.254] 
I0211 22:36:08.254]   BeforeSuite on Node 1 failed
I0211 22:36:08.254] 
I0211 22:36:08.254]   _output/local/go/src/k8s.io/kubernetes/test/e2e_node/e2e_node_suite_test.go:142
I0211 22:36:08.254] ------------------------------
I0211 22:36:08.254] Failure [143.062 seconds]
I0211 22:36:08.254] [BeforeSuite] BeforeSuite 
I0211 22:36:08.255] _output/local/go/src/k8s.io/kubernetes/test/e2e_node/e2e_node_suite_test.go:142
I0211 22:36:08.255] 
I0211 22:36:08.255]   BeforeSuite on Node 1 failed
I0211 22:36:08.255] 
I0211 22:36:08.255]   _output/local/go/src/k8s.io/kubernetes/test/e2e_node/e2e_node_suite_test.go:142
I0211 22:36:08.255] ------------------------------
I0211 22:36:08.255] Failure [143.019 seconds]
I0211 22:36:08.255] [BeforeSuite] BeforeSuite 
I0211 22:36:08.255] _output/local/go/src/k8s.io/kubernetes/test/e2e_node/e2e_node_suite_test.go:142
I0211 22:36:08.255] 
I0211 22:36:08.255]   BeforeSuite on Node 1 failed
I0211 22:36:08.255] 
I0211 22:36:08.255]   _output/local/go/src/k8s.io/kubernetes/test/e2e_node/e2e_node_suite_test.go:142
I0211 22:36:08.256] ------------------------------
I0211 22:36:08.256] I0211 22:36:04.356807    1283 e2e_node_suite_test.go:190] Tests Finished
I0211 22:36:08.256] 
I0211 22:36:08.256] 
I0211 22:36:08.256] Ran 2288 of 0 Specs in 143.116 seconds
I0211 22:36:08.256] FAIL! -- 0 Passed | 2288 Failed | 0 Flaked | 0 Pending | 0 Skipped 
I0211 22:36:08.256] 
I0211 22:36:08.256] Ginkgo ran 1 suite in 2m26.828003075s
I0211 22:36:08.256] Test Suite Failed
I0211 22:36:08.256] 
I0211 22:36:08.256] Failure Finished Test Suite on Host tmp-node-e2e-91a835cb-cos-stable-60-9592-84-0
I0211 22:36:08.257] [command [ssh -o UserKnownHostsFile=/dev/null -o IdentitiesOnly=yes -o CheckHostIP=no -o StrictHostKeyChecking=no -o ServerAliveInterval=30 -o LogLevel=ERROR -i /workspace/.ssh/google_compute_engine prow@35.227.184.191 -- sudo sh -c 'cd /tmp/node-e2e-20190211T223324 && timeout -k 30s 3900.000000s ./ginkgo --nodes=8 --focus="\[NodeConformance\]" --skip="\[Flaky\]|\[Slow\]|\[Serial\]" --flakeAttempts=2 ./e2e_node.test -- --system-spec-name= --system-spec-file= --logtostderr --v 4 --node-name=tmp-node-e2e-91a835cb-cos-stable-60-9592-84-0 --report-dir=/tmp/node-e2e-20190211T223324/results --report-prefix=cos-stable2 --image-description="cos-stable-60-9592-84-0" --kubelet-flags=--experimental-mounter-path=/tmp/node-e2e-20190211T223324/mounter --kubelet-flags=--experimental-kernel-memcg-notification=true --kubelet-flags="--cgroups-per-qos=true --cgroup-root=/"'] failed with error: exit status 1, command [scp -o UserKnownHostsFile=/dev/null -o IdentitiesOnly=yes -o CheckHostIP=no -o StrictHostKeyChecking=no -o ServerAliveInterval=30 -o LogLevel=ERROR -i /workspace/.ssh/google_compute_engine -r prow@35.227.184.191:/tmp/node-e2e-20190211T223324/results/*.log /workspace/_artifacts/tmp-node-e2e-91a835cb-cos-stable-60-9592-84-0] failed with error: exit status 1]
I0211 22:36:08.257] <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<
I0211 22:36:08.257] <                              FINISH TEST                               <
I0211 22:36:08.257] <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<
I0211 22:36:08.258] 
W0211 22:44:39.406] I0211 22:44:39.405646    4549 remote.go:122] Copying test artifacts from "tmp-node-e2e-91a835cb-cos-stable-63-10032-71-0"
W0211 22:44:44.850] I0211 22:44:44.850451    4549 run_remote.go:717] Deleting instance "tmp-node-e2e-91a835cb-cos-stable-63-10032-71-0"
... skipping 604 lines ...
I0211 22:44:45.667] [BeforeEach] [k8s.io] Security Context
I0211 22:44:45.667]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:35
I0211 22:44:45.667] [It] should run the container with readonly rootfs when readOnlyRootFilesystem=true [LinuxOnly] [NodeConformance]
I0211 22:44:45.667]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:138
I0211 22:44:45.667] Feb 11 22:35:28.082: INFO: Waiting up to 5m0s for pod "busybox-readonly-true-54c06234-2e4d-11e9-a30d-42010a8a0055" in namespace "security-context-test-6239" to be "success or failure"
I0211 22:44:45.668] Feb 11 22:35:28.096: INFO: Pod "busybox-readonly-true-54c06234-2e4d-11e9-a30d-42010a8a0055": Phase="Pending", Reason="", readiness=false. Elapsed: 13.382252ms
I0211 22:44:45.668] Feb 11 22:35:30.098: INFO: Pod "busybox-readonly-true-54c06234-2e4d-11e9-a30d-42010a8a0055": Phase="Failed", Reason="", readiness=false. Elapsed: 2.015424394s
I0211 22:44:45.668] Feb 11 22:35:30.098: INFO: Pod "busybox-readonly-true-54c06234-2e4d-11e9-a30d-42010a8a0055" satisfied condition "success or failure"
I0211 22:44:45.668] [AfterEach] [k8s.io] Security Context
I0211 22:44:45.668]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:156
I0211 22:44:45.668] Feb 11 22:35:30.098: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0211 22:44:45.669] STEP: Destroying namespace "security-context-test-6239" for this suite.
I0211 22:44:45.669] Feb 11 22:35:36.105: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
... skipping 1713 lines ...
I0211 22:44:45.912] STEP: Creating a kubernetes client
I0211 22:44:45.913] STEP: Building a namespace api object, basename container-runtime
I0211 22:44:45.913] Feb 11 22:37:13.490: INFO: Skipping waiting for service account
I0211 22:44:45.913] [It] should report termination message from log output if TerminationMessagePolicy FallbackToLogOnError is set [NodeConformance] [LinuxOnly]
I0211 22:44:45.913]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:205
I0211 22:44:45.913] STEP: create the container
I0211 22:44:45.913] STEP: wait for the container to reach Failed
I0211 22:44:45.913] STEP: get the container status
I0211 22:44:45.914] STEP: the container should be terminated
I0211 22:44:45.914] STEP: the termination message should be set
I0211 22:44:45.914] STEP: delete the container
I0211 22:44:45.914] [AfterEach] [k8s.io] Container Runtime
I0211 22:44:45.914]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:156
... skipping 668 lines ...
I0211 22:44:46.012] STEP: submitting the pod to kubernetes
I0211 22:44:46.012] STEP: verifying the pod is in kubernetes
I0211 22:44:46.012] STEP: updating the pod
I0211 22:44:46.012] Feb 11 22:38:12.165: INFO: Successfully updated pod "pod-update-activedeadlineseconds-b3d4b6e4-2e4d-11e9-9384-42010a8a0055"
I0211 22:44:46.012] Feb 11 22:38:12.165: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-b3d4b6e4-2e4d-11e9-9384-42010a8a0055" in namespace "pods-8049" to be "terminated due to deadline exceeded"
I0211 22:44:46.013] Feb 11 22:38:12.166: INFO: Pod "pod-update-activedeadlineseconds-b3d4b6e4-2e4d-11e9-9384-42010a8a0055": Phase="Running", Reason="", readiness=true. Elapsed: 1.566521ms
I0211 22:44:46.013] Feb 11 22:38:14.196: INFO: Pod "pod-update-activedeadlineseconds-b3d4b6e4-2e4d-11e9-9384-42010a8a0055": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.031498017s
I0211 22:44:46.013] Feb 11 22:38:14.196: INFO: Pod "pod-update-activedeadlineseconds-b3d4b6e4-2e4d-11e9-9384-42010a8a0055" satisfied condition "terminated due to deadline exceeded"
I0211 22:44:46.013] [AfterEach] [k8s.io] Pods
I0211 22:44:46.013]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:156
I0211 22:44:46.014] Feb 11 22:38:14.196: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0211 22:44:46.014] STEP: Destroying namespace "pods-8049" for this suite.
I0211 22:44:46.014] Feb 11 22:38:20.231: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
... skipping 434 lines ...
I0211 22:44:46.073]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:155
I0211 22:44:46.073] STEP: Creating a kubernetes client
I0211 22:44:46.073] STEP: Building a namespace api object, basename init-container
I0211 22:44:46.073] Feb 11 22:38:50.443: INFO: Skipping waiting for service account
I0211 22:44:46.073] [BeforeEach] [k8s.io] InitContainer [NodeConformance]
I0211 22:44:46.074]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43
I0211 22:44:46.074] [It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
I0211 22:44:46.074]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:704
I0211 22:44:46.074] STEP: creating the pod
I0211 22:44:46.074] Feb 11 22:38:50.443: INFO: PodSpec: initContainers in spec.initContainers
I0211 22:44:46.074] [AfterEach] [k8s.io] InitContainer [NodeConformance]
I0211 22:44:46.075]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:156
I0211 22:44:46.075] Feb 11 22:38:53.633: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 2 lines ...
I0211 22:44:46.075] Feb 11 22:38:59.694: INFO: namespace init-container-6303 deletion completed in 6.053006719s
I0211 22:44:46.075] 
I0211 22:44:46.075] 
I0211 22:44:46.075] • [SLOW TEST:9.254 seconds]
I0211 22:44:46.076] [k8s.io] InitContainer [NodeConformance]
I0211 22:44:46.076] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
I0211 22:44:46.076]   should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
I0211 22:44:46.076]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:704
I0211 22:44:46.076] ------------------------------
I0211 22:44:46.076] S
I0211 22:44:46.076] ------------------------------
I0211 22:44:46.077] [BeforeEach] [k8s.io] Container Runtime
I0211 22:44:46.077]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:155
... skipping 1701 lines ...
I0211 22:44:46.314]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:155
I0211 22:44:46.314] STEP: Creating a kubernetes client
I0211 22:44:46.315] STEP: Building a namespace api object, basename init-container
I0211 22:44:46.315] Feb 11 22:41:36.921: INFO: Skipping waiting for service account
I0211 22:44:46.315] [BeforeEach] [k8s.io] InitContainer [NodeConformance]
I0211 22:44:46.315]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43
I0211 22:44:46.315] [It] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
I0211 22:44:46.315]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:704
I0211 22:44:46.315] STEP: creating the pod
I0211 22:44:46.316] Feb 11 22:41:36.921: INFO: PodSpec: initContainers in spec.initContainers
I0211 22:44:46.320] Feb 11 22:42:19.441: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-3099aa63-2e4e-11e9-8b28-42010a8a0055", GenerateName:"", Namespace:"init-container-418", SelfLink:"/api/v1/namespaces/init-container-418/pods/pod-init-3099aa63-2e4e-11e9-8b28-42010a8a0055", UID:"30a1a5ea-2e4e-11e9-b8ed-42010a8a0055", ResourceVersion:"3313", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63685521696, loc:(*time.Location)(0xa2319e0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"921460074"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume(nil), InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount(nil), VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount(nil), 
VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}, "cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount(nil), VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"Never", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc0012be630), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"Default", NodeSelector:map[string]string(nil), ServiceAccountName:"", DeprecatedServiceAccount:"", AutomountServiceAccountToken:(*bool)(nil), NodeName:"tmp-node-e2e-91a835cb-cos-stable-63-10032-71-0", HostNetwork:false, HostPID:false, HostIPC:false, 
ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc000b2c000), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0012be6a0)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0012be6c0)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc0012be6d0), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc0012be6d4)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63685521696, loc:(*time.Location)(0xa2319e0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63685521696, loc:(*time.Location)(0xa2319e0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63685521696, loc:(*time.Location)(0xa2319e0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63685521696, loc:(*time.Location)(0xa2319e0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"10.138.0.85", PodIP:"10.100.0.179", StartTime:(*v1.Time)(0xc000b3a700), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc0000fc0e0)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc0000fc150)}, Ready:false, RestartCount:3, Image:"busybox:1.29", ImageID:"docker-pullable://busybox@sha256:e004c2cc521c95383aebb1fb5893719aa7a8eae2e7a71f316a4410784edb00a9", ContainerID:"docker://9f86193ae685f0222685aa044e75da6b18a5344d7263f14e3e5b8a85978f085a"}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc000b3a8e0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:""}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc000b3a9c0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:""}}, QOSClass:"Guaranteed"}}
I0211 22:44:46.320] [AfterEach] [k8s.io] InitContainer [NodeConformance]
I0211 22:44:46.320]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:156
I0211 22:44:46.320] Feb 11 22:42:19.443: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0211 22:44:46.321] STEP: Destroying namespace "init-container-418" for this suite.
I0211 22:44:46.321] Feb 11 22:42:41.466: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
I0211 22:44:46.321] Feb 11 22:42:41.513: INFO: namespace init-container-418 deletion completed in 22.065686594s
I0211 22:44:46.321] 
I0211 22:44:46.321] 
I0211 22:44:46.321] • [SLOW TEST:64.599 seconds]
I0211 22:44:46.321] [k8s.io] InitContainer [NodeConformance]
I0211 22:44:46.321] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
I0211 22:44:46.322]   should not start app containers if init containers fail on a RestartAlways pod [Conformance]
I0211 22:44:46.322]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:704
I0211 22:44:46.322] ------------------------------
I0211 22:44:46.322] [BeforeEach] [k8s.io] Container Runtime Conformance Test
I0211 22:44:46.322]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:155
I0211 22:44:46.322] STEP: Creating a kubernetes client
I0211 22:44:46.322] STEP: Building a namespace api object, basename runtime-conformance
I0211 22:44:46.323] Feb 11 22:39:28.388: INFO: Skipping waiting for service account
I0211 22:44:46.323] [It] should be able to pull from private registry with credential provider [NodeConformance]
I0211 22:44:46.323]   _output/local/go/src/k8s.io/kubernetes/test/e2e_node/runtime_conformance_test.go:69
I0211 22:44:46.323] STEP: create the container
I0211 22:44:46.323] STEP: check the container status
I0211 22:44:46.323] STEP: delete the container
I0211 22:44:46.323] Feb 11 22:44:29.269: INFO: No.1 attempt failed: expected container state: Running, got: "Waiting", retrying...
I0211 22:44:46.324] STEP: create the container
I0211 22:44:46.324] STEP: check the container status
I0211 22:44:46.324] STEP: delete the container
I0211 22:44:46.324] [AfterEach] [k8s.io] Container Runtime Conformance Test
I0211 22:44:46.324]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:156
I0211 22:44:46.324] Feb 11 22:44:31.295: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 12 lines ...
I0211 22:44:46.326]       should be able to pull from private registry with credential provider [NodeConformance]
I0211 22:44:46.326]       _output/local/go/src/k8s.io/kubernetes/test/e2e_node/runtime_conformance_test.go:69
I0211 22:44:46.326] ------------------------------
I0211 22:44:46.326] I0211 22:44:37.373208    1292 e2e_node_suite_test.go:185] Stopping node services...
I0211 22:44:46.327] I0211 22:44:37.373239    1292 server.go:258] Kill server "services"
I0211 22:44:46.327] I0211 22:44:37.373250    1292 server.go:295] Killing process 1790 (services) with -TERM
I0211 22:44:46.327] E0211 22:44:37.536723    1292 services.go:89] Failed to stop services: error stopping "services": waitid: no child processes
I0211 22:44:46.327] I0211 22:44:37.536745    1292 server.go:258] Kill server "kubelet"
I0211 22:44:46.327] I0211 22:44:37.546947    1292 services.go:146] Fetching log files...
I0211 22:44:46.327] I0211 22:44:37.547001    1292 services.go:155] Get log file "cloud-init.log" with journalctl command [-u cloud*].
I0211 22:44:46.327] I0211 22:44:38.257179    1292 services.go:155] Get log file "docker.log" with journalctl command [-u docker].
I0211 22:44:46.328] I0211 22:44:38.291743    1292 services.go:155] Get log file "kubelet.log" with journalctl command [-u kubelet-20190211T223324.service].
I0211 22:44:46.328] I0211 22:44:39.296881    1292 services.go:155] Get log file "kern.log" with journalctl command [-k].
I0211 22:44:46.328] I0211 22:44:39.349667    1292 e2e_node_suite_test.go:190] Tests Finished
I0211 22:44:46.328] 
I0211 22:44:46.328] 
I0211 22:44:46.328] Ran 156 of 286 Specs in 657.294 seconds
I0211 22:44:46.328] SUCCESS! -- 156 Passed | 0 Failed | 0 Flaked | 0 Pending | 130 Skipped 
I0211 22:44:46.329] 
I0211 22:44:46.329] Ginkgo ran 1 suite in 11m1.875192999s
I0211 22:44:46.329] Test Suite Passed
I0211 22:44:46.329] 
I0211 22:44:46.329] Success Finished Test Suite on Host tmp-node-e2e-91a835cb-cos-stable-63-10032-71-0
I0211 22:44:46.329] <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<
... skipping 54 lines ...
I0211 22:48:09.949] Validating docker...
I0211 22:48:09.949] DOCKER_VERSION: 17.03.2-ce
I0211 22:48:09.949] DOCKER_GRAPH_DRIVER: overlay2
I0211 22:48:09.949] PASS
I0211 22:48:09.949] I0211 22:33:41.369365    2666 e2e_node_suite_test.go:149] Pre-pulling images so that they are cached for the tests.
I0211 22:48:09.950] I0211 22:33:41.369390    2666 image_list.go:131] Pre-pulling images with docker [docker.io/library/busybox:1.29 docker.io/library/nginx:1.14-alpine gcr.io/kubernetes-e2e-test-images/entrypoint-tester:1.0 gcr.io/kubernetes-e2e-test-images/hostexec:1.1 gcr.io/kubernetes-e2e-test-images/ipc-utils:1.0 gcr.io/kubernetes-e2e-test-images/liveness:1.0 gcr.io/kubernetes-e2e-test-images/mounttest-user:1.0 gcr.io/kubernetes-e2e-test-images/mounttest:1.0 gcr.io/kubernetes-e2e-test-images/net:1.0 gcr.io/kubernetes-e2e-test-images/netexec:1.1 gcr.io/kubernetes-e2e-test-images/node-perf/npb-ep-amd64:1.0 gcr.io/kubernetes-e2e-test-images/node-perf/npb-is-amd64:1.0 gcr.io/kubernetes-e2e-test-images/node-perf/tf-wide-deep-amd64:1.0 gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0 gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 gcr.io/kubernetes-e2e-test-images/test-webserver:1.0 gcr.io/kubernetes-e2e-test-images/volume/gluster:1.0 gcr.io/kubernetes-e2e-test-images/volume/nfs:1.0 google/cadvisor:latest k8s.gcr.io/busybox@sha256:4bdd623e848417d96127e16037743f0cd8b528c026e9175e22a84f639eca58ff k8s.gcr.io/node-problem-detector:v0.4.1 k8s.gcr.io/nvidia-gpu-device-plugin@sha256:0842734032018be107fa2490c98156992911e3e1f2a21e059ff0105b07dd8e9e k8s.gcr.io/pause:3.1 k8s.gcr.io/stress:v1]
I0211 22:48:09.950] W0211 22:34:01.556211    2666 image_list.go:144] Failed to pull gcr.io/kubernetes-e2e-test-images/hostexec:1.1 as user "root", retrying in 1s (1 of 5): exit status 1
I0211 22:48:09.951] W0211 22:34:34.336219    2666 image_list.go:144] Failed to pull gcr.io/kubernetes-e2e-test-images/ipc-utils:1.0 as user "root", retrying in 1s (1 of 5): exit status 1
I0211 22:48:09.951] I0211 22:35:47.192994    2666 kubelet.go:108] Starting kubelet
I0211 22:48:09.951] I0211 22:35:47.193084    2666 feature_gate.go:226] feature gates: &{map[]}
I0211 22:48:09.952] I0211 22:35:47.195571    2666 server.go:102] Starting server "kubelet" with command "/usr/bin/systemd-run --unit=kubelet-20190211T223324.service --slice=runtime.slice --remain-after-exit /tmp/node-e2e-20190211T223324/kubelet --kubeconfig /tmp/node-e2e-20190211T223324/kubeconfig --root-dir /var/lib/kubelet --v 4 --logtostderr --allow-privileged=true --dynamic-config-dir /tmp/node-e2e-20190211T223324/dynamic-kubelet-config --network-plugin=kubenet --cni-bin-dir /tmp/node-e2e-20190211T223324/cni/bin --cni-conf-dir /tmp/node-e2e-20190211T223324/cni/net.d --hostname-override tmp-node-e2e-91a835cb-ubuntu-gke-1804-d1703-0-v20181113 --container-runtime docker --container-runtime-endpoint unix:///var/run/dockershim.sock --config /tmp/node-e2e-20190211T223324/kubelet-config --experimental-kernel-memcg-notification=true --cgroups-per-qos=true --cgroup-root=/"
I0211 22:48:09.952] I0211 22:35:47.195639    2666 util.go:44] Running readiness check for service "kubelet"
I0211 22:48:09.952] I0211 22:35:47.195720    2666 server.go:130] Output file for server "kubelet": /tmp/node-e2e-20190211T223324/results/kubelet.log
I0211 22:48:09.952] I0211 22:35:47.198735    2666 server.go:172] Running health check for service "kubelet"
... skipping 1096 lines ...
I0211 22:48:10.073] STEP: verifying the pod is in kubernetes
I0211 22:48:10.074] STEP: updating the pod
I0211 22:48:10.074] Feb 11 22:37:13.664: INFO: Successfully updated pod "pod-update-activedeadlineseconds-92225f1a-2e4d-11e9-b4a9-42010a8a0057"
I0211 22:48:10.074] Feb 11 22:37:13.664: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-92225f1a-2e4d-11e9-b4a9-42010a8a0057" in namespace "pods-8864" to be "terminated due to deadline exceeded"
I0211 22:48:10.074] Feb 11 22:37:13.666: INFO: Pod "pod-update-activedeadlineseconds-92225f1a-2e4d-11e9-b4a9-42010a8a0057": Phase="Running", Reason="", readiness=true. Elapsed: 1.703994ms
I0211 22:48:10.074] Feb 11 22:37:15.671: INFO: Pod "pod-update-activedeadlineseconds-92225f1a-2e4d-11e9-b4a9-42010a8a0057": Phase="Running", Reason="", readiness=true. Elapsed: 2.006930102s
I0211 22:48:10.074] Feb 11 22:37:17.673: INFO: Pod "pod-update-activedeadlineseconds-92225f1a-2e4d-11e9-b4a9-42010a8a0057": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 4.009049803s
I0211 22:48:10.074] Feb 11 22:37:17.673: INFO: Pod "pod-update-activedeadlineseconds-92225f1a-2e4d-11e9-b4a9-42010a8a0057" satisfied condition "terminated due to deadline exceeded"
I0211 22:48:10.074] [AfterEach] [k8s.io] Pods
I0211 22:48:10.075]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:156
I0211 22:48:10.075] Feb 11 22:37:17.673: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0211 22:48:10.075] STEP: Destroying namespace "pods-8864" for this suite.
I0211 22:48:10.075] Feb 11 22:37:25.680: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
... skipping 338 lines ...
I0211 22:48:10.110] Feb 11 22:37:27.742: INFO: Pod "pod9ae009ba-2e4d-11e9-b4a9-42010a8a0057": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.011712208s
I0211 22:48:10.110] STEP: Saw pod success
I0211 22:48:10.110] Feb 11 22:37:27.742: INFO: Pod "pod9ae009ba-2e4d-11e9-b4a9-42010a8a0057" satisfied condition "success or failure"
I0211 22:48:10.110] STEP: Verifying the memory backed volume was removed from node
I0211 22:48:10.111] Feb 11 22:37:27.747: INFO: Waiting up to 5m0s for pod "pod9c14088c-2e4d-11e9-b4a9-42010a8a0057" in namespace "kubelet-volume-manager-7955" to be "success or failure"
I0211 22:48:10.111] Feb 11 22:37:27.755: INFO: Pod "pod9c14088c-2e4d-11e9-b4a9-42010a8a0057": Phase="Pending", Reason="", readiness=false. Elapsed: 7.171811ms
I0211 22:48:10.111] Feb 11 22:37:29.757: INFO: Pod "pod9c14088c-2e4d-11e9-b4a9-42010a8a0057": Phase="Failed", Reason="", readiness=false. Elapsed: 2.009307944s
I0211 22:48:10.111] Feb 11 22:37:39.780: INFO: Waiting up to 5m0s for pod "poda33fe6e9-2e4d-11e9-b4a9-42010a8a0057" in namespace "kubelet-volume-manager-7955" to be "success or failure"
I0211 22:48:10.111] Feb 11 22:37:39.787: INFO: Pod "poda33fe6e9-2e4d-11e9-b4a9-42010a8a0057": Phase="Pending", Reason="", readiness=false. Elapsed: 6.400106ms
I0211 22:48:10.111] Feb 11 22:37:41.788: INFO: Pod "poda33fe6e9-2e4d-11e9-b4a9-42010a8a0057": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.008203156s
I0211 22:48:10.111] STEP: Saw pod success
I0211 22:48:10.111] Feb 11 22:37:41.788: INFO: Pod "poda33fe6e9-2e4d-11e9-b4a9-42010a8a0057" satisfied condition "success or failure"
I0211 22:48:10.112] [AfterEach] [k8s.io] Kubelet Volume Manager
... skipping 346 lines ...
I0211 22:48:10.146]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:155
I0211 22:48:10.146] STEP: Creating a kubernetes client
I0211 22:48:10.146] STEP: Building a namespace api object, basename init-container
I0211 22:48:10.146] Feb 11 22:38:18.097: INFO: Skipping waiting for service account
I0211 22:48:10.147] [BeforeEach] [k8s.io] InitContainer [NodeConformance]
I0211 22:48:10.147]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43
I0211 22:48:10.147] [It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
I0211 22:48:10.147]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:704
I0211 22:48:10.147] STEP: creating the pod
I0211 22:48:10.147] Feb 11 22:38:18.097: INFO: PodSpec: initContainers in spec.initContainers
I0211 22:48:10.147] [AfterEach] [k8s.io] InitContainer [NodeConformance]
I0211 22:48:10.147]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:156
I0211 22:48:10.147] Feb 11 22:38:21.078: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 2 lines ...
I0211 22:48:10.148] Feb 11 22:38:27.158: INFO: namespace init-container-6791 deletion completed in 6.073900937s
I0211 22:48:10.148] 
I0211 22:48:10.148] 
I0211 22:48:10.148] • [SLOW TEST:9.064 seconds]
I0211 22:48:10.148] [k8s.io] InitContainer [NodeConformance]
I0211 22:48:10.148] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
I0211 22:48:10.148]   should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
I0211 22:48:10.148]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:704
I0211 22:48:10.148] ------------------------------
I0211 22:48:10.149] [BeforeEach] [sig-storage] Projected secret
I0211 22:48:10.149]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:155
I0211 22:48:10.149] STEP: Creating a kubernetes client
I0211 22:48:10.149] STEP: Building a namespace api object, basename projected
... skipping 1042 lines ...
I0211 22:48:10.255] STEP: Creating a kubernetes client
I0211 22:48:10.255] STEP: Building a namespace api object, basename container-runtime
I0211 22:48:10.255] Feb 11 22:39:35.598: INFO: Skipping waiting for service account
I0211 22:48:10.255] [It] should report termination message from log output if TerminationMessagePolicy FallbackToLogOnError is set [NodeConformance] [LinuxOnly]
I0211 22:48:10.255]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:205
I0211 22:48:10.255] STEP: create the container
I0211 22:48:10.256] STEP: wait for the container to reach Failed
I0211 22:48:10.256] STEP: get the container status
I0211 22:48:10.256] STEP: the container should be terminated
I0211 22:48:10.256] STEP: the termination message should be set
I0211 22:48:10.256] STEP: delete the container
I0211 22:48:10.256] [AfterEach] [k8s.io] Container Runtime
I0211 22:48:10.256]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:156
... skipping 733 lines ...
I0211 22:48:10.332]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:155
I0211 22:48:10.332] STEP: Creating a kubernetes client
I0211 22:48:10.332] STEP: Building a namespace api object, basename init-container
I0211 22:48:10.332] Feb 11 22:40:40.486: INFO: Skipping waiting for service account
I0211 22:48:10.333] [BeforeEach] [k8s.io] InitContainer [NodeConformance]
I0211 22:48:10.333]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43
I0211 22:48:10.333] [It] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
I0211 22:48:10.333]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:704
I0211 22:48:10.333] STEP: creating the pod
I0211 22:48:10.333] Feb 11 22:40:40.486: INFO: PodSpec: initContainers in spec.initContainers
I0211 22:48:10.337] Feb 11 22:41:25.432: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-0ef65f5a-2e4e-11e9-aeef-42010a8a0057", GenerateName:"", Namespace:"init-container-3839", SelfLink:"/api/v1/namespaces/init-container-3839/pods/pod-init-0ef65f5a-2e4e-11e9-aeef-42010a8a0057", UID:"0ef6ac12-2e4e-11e9-99a3-42010a8a0057", ResourceVersion:"2448", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63685521640, loc:(*time.Location)(0xa2319e0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"time":"486489535", "name":"foo"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume(nil), InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount(nil), VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount(nil), VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}, Requests:v1.ResourceList{"memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}, "cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount(nil), VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"Never", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc000ee55d0), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"Default", NodeSelector:map[string]string(nil), ServiceAccountName:"", DeprecatedServiceAccount:"", AutomountServiceAccountToken:(*bool)(nil), NodeName:"tmp-node-e2e-91a835cb-ubuntu-gke-1804-d1703-0-v20181113", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc0012b0d80), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc000ee5650)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc000ee5670)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc000ee5680), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc000ee5684)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63685521640, loc:(*time.Location)(0xa2319e0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63685521640, loc:(*time.Location)(0xa2319e0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63685521640, loc:(*time.Location)(0xa2319e0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63685521640, loc:(*time.Location)(0xa2319e0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"10.138.0.87", PodIP:"10.100.0.133", StartTime:(*v1.Time)(0xc001119120), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc0004dd3b0)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc0004dd420)}, Ready:false, RestartCount:3, Image:"busybox:1.29", ImageID:"docker-pullable://busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"docker://c19dc6d866a2064be445b2618ba84f7606760b39aab7490f184abd9ee4a57f10"}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc001119180), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:""}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc0011191c0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:""}}, QOSClass:"Guaranteed"}}
I0211 22:48:10.337] [AfterEach] [k8s.io] InitContainer [NodeConformance]
I0211 22:48:10.337]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:156
I0211 22:48:10.338] Feb 11 22:41:25.432: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0211 22:48:10.338] STEP: Destroying namespace "init-container-3839" for this suite.
I0211 22:48:10.338] Feb 11 22:41:47.454: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
I0211 22:48:10.338] Feb 11 22:41:47.499: INFO: namespace init-container-3839 deletion completed in 22.058232308s
I0211 22:48:10.338] 
I0211 22:48:10.338] 
I0211 22:48:10.338] • [SLOW TEST:67.017 seconds]
I0211 22:48:10.338] [k8s.io] InitContainer [NodeConformance]
I0211 22:48:10.338] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
I0211 22:48:10.338]   should not start app containers if init containers fail on a RestartAlways pod [Conformance]
I0211 22:48:10.339]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:704
I0211 22:48:10.339] ------------------------------
I0211 22:48:10.339] [BeforeEach] [k8s.io] InitContainer [NodeConformance]
I0211 22:48:10.339]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:155
I0211 22:48:10.339] STEP: Creating a kubernetes client
I0211 22:48:10.339] STEP: Building a namespace api object, basename init-container
... skipping 172 lines ...
I0211 22:48:10.357] [BeforeEach] [k8s.io] Security Context
I0211 22:48:10.357]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:35
I0211 22:48:10.357] [It] should run the container with readonly rootfs when readOnlyRootFilesystem=true [LinuxOnly] [NodeConformance]
I0211 22:48:10.357]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:138
I0211 22:48:10.358] Feb 11 22:41:55.721: INFO: Waiting up to 5m0s for pod "busybox-readonly-true-3bc56ae4-2e4e-11e9-aeef-42010a8a0057" in namespace "security-context-test-3223" to be "success or failure"
I0211 22:48:10.358] Feb 11 22:41:55.730: INFO: Pod "busybox-readonly-true-3bc56ae4-2e4e-11e9-aeef-42010a8a0057": Phase="Pending", Reason="", readiness=false. Elapsed: 8.463005ms
I0211 22:48:10.358] Feb 11 22:41:57.732: INFO: Pod "busybox-readonly-true-3bc56ae4-2e4e-11e9-aeef-42010a8a0057": Phase="Failed", Reason="", readiness=false. Elapsed: 2.010457055s
I0211 22:48:10.358] Feb 11 22:41:57.732: INFO: Pod "busybox-readonly-true-3bc56ae4-2e4e-11e9-aeef-42010a8a0057" satisfied condition "success or failure"
I0211 22:48:10.358] [AfterEach] [k8s.io] Security Context
I0211 22:48:10.358]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:156
I0211 22:48:10.358] Feb 11 22:41:57.732: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0211 22:48:10.359] STEP: Destroying namespace "security-context-test-3223" for this suite.
I0211 22:48:10.359] Feb 11 22:42:03.741: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
... skipping 1222 lines ...
I0211 22:48:10.505] Feb 11 22:39:41.665: INFO: Skipping waiting for service account
I0211 22:48:10.505] [It] should be able to pull from private registry with credential provider [NodeConformance]
I0211 22:48:10.505]   _output/local/go/src/k8s.io/kubernetes/test/e2e_node/runtime_conformance_test.go:69
I0211 22:48:10.505] STEP: create the container
I0211 22:48:10.505] STEP: check the container status
I0211 22:48:10.505] STEP: delete the container
I0211 22:48:10.506] Feb 11 22:44:42.619: INFO: No.1 attempt failed: expected container state: Running, got: "Waiting", retrying...
I0211 22:48:10.506] STEP: create the container
I0211 22:48:10.506] STEP: check the container status
I0211 22:48:10.506] STEP: delete the container
I0211 22:48:10.506] [AfterEach] [k8s.io] Container Runtime Conformance Test
I0211 22:48:10.506]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:156
I0211 22:48:10.506] Feb 11 22:44:44.652: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 47 lines ...
I0211 22:48:10.513] Feb 11 22:42:53.690: INFO: Skipping waiting for service account
I0211 22:48:10.513] [It] should not be able to pull from private registry without secret [NodeConformance]
I0211 22:48:10.513]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:302
I0211 22:48:10.513] STEP: create the container
I0211 22:48:10.514] STEP: check the container status
I0211 22:48:10.514] STEP: delete the container
I0211 22:48:10.514] Feb 11 22:47:54.381: INFO: No.1 attempt failed: expected container state: Waiting, got: "Running", retrying...
I0211 22:48:10.514] STEP: create the container
I0211 22:48:10.514] STEP: check the container status
I0211 22:48:10.514] STEP: delete the container
I0211 22:48:10.514] [AfterEach] [k8s.io] Container Runtime
I0211 22:48:10.514]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:156
I0211 22:48:10.515] Feb 11 22:47:56.413: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 22 lines ...
I0211 22:48:10.518] I0211 22:48:02.622415    2666 services.go:155] Get log file "docker.log" with journalctl command [-u docker].
I0211 22:48:10.518] I0211 22:48:02.642552    2666 services.go:155] Get log file "kubelet.log" with journalctl command [-u kubelet-20190211T223324.service].
I0211 22:48:10.518] I0211 22:48:03.182734    2666 e2e_node_suite_test.go:190] Tests Finished
I0211 22:48:10.518] 
I0211 22:48:10.518] 
I0211 22:48:10.518] Ran 156 of 286 Specs in 862.377 seconds
I0211 22:48:10.519] SUCCESS! -- 156 Passed | 0 Failed | 0 Flaked | 0 Pending | 130 Skipped 
I0211 22:48:10.519] 
I0211 22:48:10.519] Ginkgo ran 1 suite in 14m24.631741739s
I0211 22:48:10.519] Test Suite Passed
I0211 22:48:10.519] 
I0211 22:48:10.519] Success Finished Test Suite on Host tmp-node-e2e-91a835cb-ubuntu-gke-1804-d1703-0-v20181113
I0211 22:48:10.519] <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<
... skipping 56 lines ...
I0211 22:51:20.851] Validating docker...
I0211 22:51:20.851] DOCKER_VERSION: 18.06.1-ce
I0211 22:51:20.852] DOCKER_GRAPH_DRIVER: overlay2
I0211 22:51:20.852] PASS
I0211 22:51:20.852] I0211 22:33:39.685270    1306 e2e_node_suite_test.go:149] Pre-pulling images so that they are cached for the tests.
I0211 22:51:20.853] I0211 22:33:39.685294    1306 image_list.go:131] Pre-pulling images with docker [docker.io/library/busybox:1.29 docker.io/library/nginx:1.14-alpine gcr.io/kubernetes-e2e-test-images/entrypoint-tester:1.0 gcr.io/kubernetes-e2e-test-images/hostexec:1.1 gcr.io/kubernetes-e2e-test-images/ipc-utils:1.0 gcr.io/kubernetes-e2e-test-images/liveness:1.0 gcr.io/kubernetes-e2e-test-images/mounttest-user:1.0 gcr.io/kubernetes-e2e-test-images/mounttest:1.0 gcr.io/kubernetes-e2e-test-images/net:1.0 gcr.io/kubernetes-e2e-test-images/netexec:1.1 gcr.io/kubernetes-e2e-test-images/node-perf/npb-ep-amd64:1.0 gcr.io/kubernetes-e2e-test-images/node-perf/npb-is-amd64:1.0 gcr.io/kubernetes-e2e-test-images/node-perf/tf-wide-deep-amd64:1.0 gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0 gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 gcr.io/kubernetes-e2e-test-images/test-webserver:1.0 gcr.io/kubernetes-e2e-test-images/volume/gluster:1.0 gcr.io/kubernetes-e2e-test-images/volume/nfs:1.0 google/cadvisor:latest k8s.gcr.io/busybox@sha256:4bdd623e848417d96127e16037743f0cd8b528c026e9175e22a84f639eca58ff k8s.gcr.io/node-problem-detector:v0.4.1 k8s.gcr.io/nvidia-gpu-device-plugin@sha256:0842734032018be107fa2490c98156992911e3e1f2a21e059ff0105b07dd8e9e k8s.gcr.io/pause:3.1 k8s.gcr.io/stress:v1]
I0211 22:51:20.853] W0211 22:33:59.811309    1306 image_list.go:144] Failed to pull gcr.io/kubernetes-e2e-test-images/entrypoint-tester:1.0 as user "root", retrying in 1s (1 of 5): exit status 1
I0211 22:51:20.853] W0211 22:34:15.931423    1306 image_list.go:144] Failed to pull gcr.io/kubernetes-e2e-test-images/entrypoint-tester:1.0 as user "root", retrying in 1s (2 of 5): exit status 1
I0211 22:51:20.853] W0211 22:34:32.029223    1306 image_list.go:144] Failed to pull gcr.io/kubernetes-e2e-test-images/entrypoint-tester:1.0 as user "root", retrying in 1s (3 of 5): exit status 1
I0211 22:51:20.854] W0211 22:35:03.287934    1306 image_list.go:144] Failed to pull gcr.io/kubernetes-e2e-test-images/entrypoint-tester:1.0 as user "root", retrying in 1s (4 of 5): exit status 1
I0211 22:51:20.854] W0211 22:35:50.030769    1306 image_list.go:144] Failed to pull gcr.io/kubernetes-e2e-test-images/hostexec:1.1 as user "root", retrying in 1s (1 of 5): exit status 1
I0211 22:51:20.854] W0211 22:36:56.235195    1306 image_list.go:144] Failed to pull gcr.io/kubernetes-e2e-test-images/netexec:1.1 as user "root", retrying in 1s (1 of 5): exit status 1
I0211 22:51:20.854] I0211 22:38:45.667504    1306 e2e_node_suite_test.go:219] Locksmithd is masked successfully
I0211 22:51:20.854] I0211 22:38:45.667527    1306 kubelet.go:108] Starting kubelet
I0211 22:51:20.854] I0211 22:38:45.667583    1306 feature_gate.go:226] feature gates: &{map[]}
I0211 22:51:20.855] I0211 22:38:45.702481    1306 server.go:102] Starting server "kubelet" with command "/bin/systemd-run --unit=kubelet-20190211T223324.service --slice=runtime.slice --remain-after-exit /tmp/node-e2e-20190211T223324/kubelet --kubeconfig /tmp/node-e2e-20190211T223324/kubeconfig --root-dir /var/lib/kubelet --v 4 --logtostderr --allow-privileged=true --dynamic-config-dir /tmp/node-e2e-20190211T223324/dynamic-kubelet-config --network-plugin=kubenet --cni-bin-dir /tmp/node-e2e-20190211T223324/cni/bin --cni-conf-dir /tmp/node-e2e-20190211T223324/cni/net.d --hostname-override tmp-node-e2e-91a835cb-coreos-beta-1883-1-0-v20180911 --container-runtime docker --container-runtime-endpoint unix:///var/run/dockershim.sock --config /tmp/node-e2e-20190211T223324/kubelet-config --cgroups-per-qos=true --cgroup-root=/"
I0211 22:51:20.855] I0211 22:38:45.702511    1306 util.go:44] Running readiness check for service "kubelet"
I0211 22:51:20.855] I0211 22:38:45.702591    1306 server.go:130] Output file for server "kubelet": /tmp/node-e2e-20190211T223324/results/kubelet.log
... skipping 426 lines ...
I0211 22:51:20.901]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:155
I0211 22:51:20.901] STEP: Creating a kubernetes client
I0211 22:51:20.901] STEP: Building a namespace api object, basename init-container
I0211 22:51:20.901] Feb 11 22:39:20.521: INFO: Skipping waiting for service account
I0211 22:51:20.902] [BeforeEach] [k8s.io] InitContainer [NodeConformance]
I0211 22:51:20.902]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43
I0211 22:51:20.902] [It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
I0211 22:51:20.902]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:704
I0211 22:51:20.902] STEP: creating the pod
I0211 22:51:20.902] Feb 11 22:39:20.521: INFO: PodSpec: initContainers in spec.initContainers
I0211 22:51:20.902] [AfterEach] [k8s.io] InitContainer [NodeConformance]
I0211 22:51:20.902]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:156
I0211 22:51:20.902] Feb 11 22:39:23.209: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 2 lines ...
I0211 22:51:20.903] Feb 11 22:39:29.415: INFO: namespace init-container-2122 deletion completed in 6.199477251s
I0211 22:51:20.903] 
I0211 22:51:20.903] 
I0211 22:51:20.903] • [SLOW TEST:8.896 seconds]
I0211 22:51:20.903] [k8s.io] InitContainer [NodeConformance]
I0211 22:51:20.903] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
I0211 22:51:20.903]   should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
I0211 22:51:20.903]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:704
I0211 22:51:20.904] ------------------------------
I0211 22:51:20.904] [BeforeEach] [k8s.io] GKE system requirements [NodeConformance][Feature:GKEEnv][NodeFeature:GKEEnv]
I0211 22:51:20.904]   _output/local/go/src/k8s.io/kubernetes/test/e2e_node/gke_environment_test.go:315
I0211 22:51:20.904] Feb 11 22:39:29.418: INFO: Skipped because system spec name "" is not in [gke]
I0211 22:51:20.904] 
... skipping 3013 lines ...
I0211 22:51:21.288] STEP: submitting the pod to kubernetes
I0211 22:51:21.288] STEP: verifying the pod is in kubernetes
I0211 22:51:21.289] STEP: updating the pod
I0211 22:51:21.289] Feb 11 22:43:20.766: INFO: Successfully updated pod "pod-update-activedeadlineseconds-6bc868b2-2e4e-11e9-bc09-42010a8a0054"
I0211 22:51:21.289] Feb 11 22:43:20.766: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-6bc868b2-2e4e-11e9-bc09-42010a8a0054" in namespace "pods-4148" to be "terminated due to deadline exceeded"
I0211 22:51:21.289] Feb 11 22:43:20.768: INFO: Pod "pod-update-activedeadlineseconds-6bc868b2-2e4e-11e9-bc09-42010a8a0054": Phase="Running", Reason="", readiness=true. Elapsed: 1.401933ms
I0211 22:51:21.289] Feb 11 22:43:22.771: INFO: Pod "pod-update-activedeadlineseconds-6bc868b2-2e4e-11e9-bc09-42010a8a0054": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.004322996s
I0211 22:51:21.289] Feb 11 22:43:22.771: INFO: Pod "pod-update-activedeadlineseconds-6bc868b2-2e4e-11e9-bc09-42010a8a0054" satisfied condition "terminated due to deadline exceeded"
I0211 22:51:21.290] [AfterEach] [k8s.io] Pods
I0211 22:51:21.290]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:156
I0211 22:51:21.290] Feb 11 22:43:22.771: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0211 22:51:21.290] STEP: Destroying namespace "pods-4148" for this suite.
I0211 22:51:21.290] Feb 11 22:43:28.776: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
... skipping 10 lines ...
I0211 22:51:21.291]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:155
I0211 22:51:21.291] STEP: Creating a kubernetes client
I0211 22:51:21.292] STEP: Building a namespace api object, basename init-container
I0211 22:51:21.292] Feb 11 22:42:20.295: INFO: Skipping waiting for service account
I0211 22:51:21.292] [BeforeEach] [k8s.io] InitContainer [NodeConformance]
I0211 22:51:21.292]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43
I0211 22:51:21.292] [It] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
I0211 22:51:21.292]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:704
I0211 22:51:21.292] STEP: creating the pod
I0211 22:51:21.293] Feb 11 22:42:20.295: INFO: PodSpec: initContainers in spec.initContainers
I0211 22:51:21.297] Feb 11 22:43:08.666: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-4a74124f-2e4e-11e9-be4c-42010a8a0054", GenerateName:"", Namespace:"init-container-4960", SelfLink:"/api/v1/namespaces/init-container-4960/pods/pod-init-4a74124f-2e4e-11e9-be4c-42010a8a0054", UID:"4a745f7a-2e4e-11e9-bb90-42010a8a0054", ResourceVersion:"2176", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63685521740, loc:(*time.Location)(0xa2319e0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"time":"295845648", "name":"foo"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume(nil), InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount(nil), VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount(nil), 
VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount(nil), VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"Never", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc0009065e0), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"Default", NodeSelector:map[string]string(nil), ServiceAccountName:"", DeprecatedServiceAccount:"", AutomountServiceAccountToken:(*bool)(nil), NodeName:"tmp-node-e2e-91a835cb-coreos-beta-1883-1-0-v20180911", HostNetwork:false, HostPID:false, HostIPC:false, 
ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc000ecd860), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc000906650)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc000906680)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc000906690), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc000906694)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63685521740, loc:(*time.Location)(0xa2319e0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63685521740, loc:(*time.Location)(0xa2319e0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63685521740, loc:(*time.Location)(0xa2319e0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63685521740, loc:(*time.Location)(0xa2319e0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"10.138.0.84", PodIP:"10.100.0.94", StartTime:(*v1.Time)(0xc0008c13a0), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc001495f80)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc001438000)}, Ready:false, RestartCount:3, Image:"busybox:1.29", ImageID:"docker-pullable://busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"docker://5e235f5ae950fbf9fbb602174939e7c1199755266c23b20532d75d6ad326e40f"}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc0008c1440), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:""}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc0008c1480), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:""}}, QOSClass:"Guaranteed"}}
I0211 22:51:21.297] [AfterEach] [k8s.io] InitContainer [NodeConformance]
I0211 22:51:21.297]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:156
I0211 22:51:21.297] Feb 11 22:43:08.667: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0211 22:51:21.297] STEP: Destroying namespace "init-container-4960" for this suite.
I0211 22:51:21.298] Feb 11 22:43:30.688: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
I0211 22:51:21.298] Feb 11 22:43:30.774: INFO: namespace init-container-4960 deletion completed in 22.100796016s
I0211 22:51:21.298] 
I0211 22:51:21.298] 
I0211 22:51:21.298] • [SLOW TEST:70.482 seconds]
I0211 22:51:21.298] [k8s.io] InitContainer [NodeConformance]
I0211 22:51:21.298] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
I0211 22:51:21.298]   should not start app containers if init containers fail on a RestartAlways pod [Conformance]
I0211 22:51:21.299]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:704
I0211 22:51:21.299] ------------------------------
I0211 22:51:21.299] [BeforeEach] [k8s.io] Pods
I0211 22:51:21.299]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:155
I0211 22:51:21.299] STEP: Creating a kubernetes client
I0211 22:51:21.299] STEP: Building a namespace api object, basename pods
... skipping 347 lines ...
I0211 22:51:21.346] [BeforeEach] [k8s.io] Security Context
I0211 22:51:21.347]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:35
I0211 22:51:21.347] [It] should run the container with readonly rootfs when readOnlyRootFilesystem=true [LinuxOnly] [NodeConformance]
I0211 22:51:21.347]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:138
I0211 22:51:21.347] Feb 11 22:44:04.323: INFO: Waiting up to 5m0s for pod "busybox-readonly-true-886c5538-2e4e-11e9-b790-42010a8a0054" in namespace "security-context-test-2098" to be "success or failure"
I0211 22:51:21.347] Feb 11 22:44:04.324: INFO: Pod "busybox-readonly-true-886c5538-2e4e-11e9-b790-42010a8a0054": Phase="Pending", Reason="", readiness=false. Elapsed: 1.250938ms
I0211 22:51:21.347] Feb 11 22:44:06.326: INFO: Pod "busybox-readonly-true-886c5538-2e4e-11e9-b790-42010a8a0054": Phase="Failed", Reason="", readiness=false. Elapsed: 2.003006308s
I0211 22:51:21.348] Feb 11 22:44:06.326: INFO: Pod "busybox-readonly-true-886c5538-2e4e-11e9-b790-42010a8a0054" satisfied condition "success or failure"
I0211 22:51:21.348] [AfterEach] [k8s.io] Security Context
I0211 22:51:21.348]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:156
I0211 22:51:21.348] Feb 11 22:44:06.326: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0211 22:51:21.348] STEP: Destroying namespace "security-context-test-2098" for this suite.
I0211 22:51:21.348] Feb 11 22:44:14.340: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
... skipping 797 lines ...
I0211 22:51:21.460] STEP: Creating a kubernetes client
I0211 22:51:21.460] STEP: Building a namespace api object, basename container-runtime
I0211 22:51:21.461] Feb 11 22:45:31.381: INFO: Skipping waiting for service account
I0211 22:51:21.461] [It] should report termination message from log output if TerminationMessagePolicy FallbackToLogOnError is set [NodeConformance] [LinuxOnly]
I0211 22:51:21.461]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:205
I0211 22:51:21.461] STEP: create the container
I0211 22:51:21.461] STEP: wait for the container to reach Failed
I0211 22:51:21.461] STEP: get the container status
I0211 22:51:21.462] STEP: the container should be terminated
I0211 22:51:21.462] STEP: the termination message should be set
I0211 22:51:21.462] STEP: delete the container
I0211 22:51:21.462] [AfterEach] [k8s.io] Container Runtime
I0211 22:51:21.462]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:156
... skipping 485 lines ...
I0211 22:51:21.537] Feb 11 22:46:04.899: INFO: Skipping waiting for service account
I0211 22:51:21.537] [It] should not be able to pull from private registry without secret [NodeConformance]
I0211 22:51:21.538]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:302
I0211 22:51:21.538] STEP: create the container
I0211 22:51:21.538] STEP: check the container status
I0211 22:51:21.538] STEP: delete the container
I0211 22:51:21.538] Feb 11 22:51:05.606: INFO: No.1 attempt failed: expected container state: Waiting, got: "Running", retrying...
I0211 22:51:21.538] STEP: create the container
I0211 22:51:21.538] STEP: check the container status
I0211 22:51:21.538] STEP: delete the container
I0211 22:51:21.539] [AfterEach] [k8s.io] Container Runtime
I0211 22:51:21.539]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:156
I0211 22:51:21.539] Feb 11 22:51:07.642: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 17 lines ...
I0211 22:51:21.542] I0211 22:51:13.728728    1306 server.go:295] Killing process 2119 (services) with -TERM
I0211 22:51:21.542] I0211 22:51:13.839452    1306 streamwatcher.go:103] Unexpected EOF during watch stream event decoding: unexpected EOF
I0211 22:51:21.542] I0211 22:51:13.844149    1306 server.go:258] Kill server "kubelet"
I0211 22:51:21.542] I0211 22:51:13.853206    1306 services.go:146] Fetching log files...
I0211 22:51:21.542] I0211 22:51:13.853289    1306 services.go:155] Get log file "kern.log" with journalctl command [-k].
I0211 22:51:21.543] I0211 22:51:14.045928    1306 services.go:155] Get log file "cloud-init.log" with journalctl command [-u cloud*].
I0211 22:51:21.543] E0211 22:51:14.050501    1306 services.go:158] failed to get "cloud-init.log" from journald: Failed to add filter for units: No data available
I0211 22:51:21.543] , exit status 1
I0211 22:51:21.543] I0211 22:51:14.050537    1306 services.go:155] Get log file "docker.log" with journalctl command [-u docker].
I0211 22:51:21.543] I0211 22:51:14.061845    1306 services.go:155] Get log file "kubelet.log" with journalctl command [-u kubelet-20190211T223324.service].
I0211 22:51:21.544] I0211 22:51:14.078425    1306 e2e_node_suite_test.go:190] Tests Finished
I0211 22:51:21.544] 
I0211 22:51:21.544] 
I0211 22:51:21.544] Ran 156 of 284 Specs in 1054.811 seconds
I0211 22:51:21.544] SUCCESS! -- 156 Passed | 0 Failed | 0 Flaked | 0 Pending | 128 Skipped 
I0211 22:51:21.544] 
I0211 22:51:21.544] Ginkgo ran 1 suite in 17m37.226684416s
I0211 22:51:21.544] Test Suite Passed
I0211 22:51:21.545] 
I0211 22:51:21.545] Success Finished Test Suite on Host tmp-node-e2e-91a835cb-coreos-beta-1883-1-0-v20180911
I0211 22:51:21.545] <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<
... skipping 5 lines ...
W0211 22:51:21.648] 2019/02/11 22:51:21 process.go:155: Step 'go run /go/src/k8s.io/kubernetes/test/e2e_node/runner/remote/run_remote.go --cleanup --logtostderr --vmodule=*=4 --ssh-env=gce --results-dir=/workspace/_artifacts --project=k8s-jkns-pr-node-e2e --zone=us-west1-b --ssh-user=prow --ssh-key=/workspace/.ssh/google_compute_engine --ginkgo-flags=--nodes=8 --focus="\[NodeConformance\]" --skip="\[Flaky\]|\[Slow\]|\[Serial\]" --flakeAttempts=2 --test_args=--kubelet-flags="--cgroups-per-qos=true --cgroup-root=/" --test-timeout=1h5m0s --image-config-file=/workspace/test-infra/jobs/e2e_node/image-config.yaml' finished in 21m46.984104534s
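The `--focus="\[NodeConformance\]"` and `--skip="\[Flaky\]|\[Slow\]|\[Serial\]"` flags passed to Ginkgo in the step above are regular expressions matched against full spec names: a spec runs only if it matches the focus pattern and does not match the skip pattern. A minimal sketch of that selection logic (not Ginkgo's actual implementation, which is in Go):

```python
import re

# The patterns exactly as passed on the command line above.
FOCUS = r"\[NodeConformance\]"
SKIP = r"\[Flaky\]|\[Slow\]|\[Serial\]"

def selected(spec_name):
    """Return True if a spec would run under these focus/skip regexes."""
    return bool(re.search(FOCUS, spec_name)) and not re.search(SKIP, spec_name)
```

This is why the suite reports 156 specs run and 128 skipped: every `[NodeConformance]` spec ran unless it also carried a `[Flaky]`, `[Slow]`, or `[Serial]` tag.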
W0211 22:51:21.649] 2019/02/11 22:51:21 node.go:42: Noop - Node DumpClusterLogs() - /workspace/_artifacts: 
W0211 22:51:21.649] 2019/02/11 22:51:21 node.go:52: Noop - Node Down()
W0211 22:51:21.649] 2019/02/11 22:51:21 process.go:96: Saved XML output to /workspace/_artifacts/junit_runner.xml.
W0211 22:51:21.649] 2019/02/11 22:51:21 process.go:153: Running: bash -c . hack/lib/version.sh && KUBE_ROOT=. kube::version::get_version_vars && echo "${KUBE_GIT_VERSION-}"
W0211 22:51:22.066] 2019/02/11 22:51:22 process.go:155: Step 'bash -c . hack/lib/version.sh && KUBE_ROOT=. kube::version::get_version_vars && echo "${KUBE_GIT_VERSION-}"' finished in 416.826721ms
W0211 22:51:22.067] 2019/02/11 22:51:22 main.go:297: Something went wrong: encountered 1 errors: [error during go run /go/src/k8s.io/kubernetes/test/e2e_node/runner/remote/run_remote.go --cleanup --logtostderr --vmodule=*=4 --ssh-env=gce --results-dir=/workspace/_artifacts --project=k8s-jkns-pr-node-e2e --zone=us-west1-b --ssh-user=prow --ssh-key=/workspace/.ssh/google_compute_engine --ginkgo-flags=--nodes=8 --focus="\[NodeConformance\]" --skip="\[Flaky\]|\[Slow\]|\[Serial\]" --flakeAttempts=2 --test_args=--kubelet-flags="--cgroups-per-qos=true --cgroup-root=/" --test-timeout=1h5m0s --image-config-file=/workspace/test-infra/jobs/e2e_node/image-config.yaml: exit status 1]
W0211 22:51:22.071] Traceback (most recent call last):
W0211 22:51:22.072]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_e2e.py", line 764, in <module>
W0211 22:51:22.072]     main(parse_args())
W0211 22:51:22.072]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_e2e.py", line 615, in main
W0211 22:51:22.072]     mode.start(runner_args)
W0211 22:51:22.073]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_e2e.py", line 262, in start
W0211 22:51:22.073]     check_env(env, self.command, *args)
W0211 22:51:22.073]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_e2e.py", line 111, in check_env
W0211 22:51:22.073]     subprocess.check_call(cmd, env=env)
W0211 22:51:22.073]   File "/usr/lib/python2.7/subprocess.py", line 186, in check_call
W0211 22:51:22.074]     raise CalledProcessError(retcode, cmd)
W0211 22:51:22.074] subprocess.CalledProcessError: Command '('kubetest', '--dump=/workspace/_artifacts', '--gcp-service-account=/etc/service-account/service-account.json', '--up', '--down', '--test', '--deployment=node', '--provider=gce', '--cluster=bootstrap-e2e', '--gcp-network=bootstrap-e2e', '--gcp-project=k8s-jkns-pr-node-e2e', '--gcp-zone=us-west1-b', '--node-test-args=--kubelet-flags="--cgroups-per-qos=true --cgroup-root=/"', '--node-tests=true', '--test_args=--nodes=8 --focus="\\[NodeConformance\\]" --skip="\\[Flaky\\]|\\[Slow\\]|\\[Serial\\]" --flakeAttempts=2', '--timeout=65m', '--node-args=--image-config-file=/workspace/test-infra/jobs/e2e_node/image-config.yaml')' returned non-zero exit status 1
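The traceback above ends in `subprocess.CalledProcessError`: `subprocess.check_call` raises this exception whenever the child process exits nonzero, which is how the `kubetest` exit status 1 propagates up through `kubernetes_e2e.py` to fail the job. A minimal illustration, using `false` as a stand-in for the kubetest command:

```python
import subprocess

# check_call waits for the command and raises CalledProcessError on a
# nonzero exit status; the exception carries the child's return code.
try:
    subprocess.check_call(["false"])  # stand-in for the failing kubetest invocation
except subprocess.CalledProcessError as e:
    print("command failed with exit status", e.returncode)
```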
E0211 22:51:22.086] Command failed
I0211 22:51:22.086] process 491 exited with code 1 after 21.8m
E0211 22:51:22.086] FAIL: pull-kubernetes-node-e2e
I0211 22:51:22.087] Call:  gcloud auth activate-service-account --key-file=/etc/service-account/service-account.json
W0211 22:51:22.718] Activated service account credentials for: [pr-kubekins@kubernetes-jenkins-pull.iam.gserviceaccount.com]
I0211 22:51:22.774] process 45048 exited with code 0 after 0.0m
I0211 22:51:22.774] Call:  gcloud config get-value account
I0211 22:51:23.077] process 45060 exited with code 0 after 0.0m
I0211 22:51:23.077] Will upload results to gs://kubernetes-jenkins/pr-logs using pr-kubekins@kubernetes-jenkins-pull.iam.gserviceaccount.com
I0211 22:51:23.078] Upload result and artifacts...
I0211 22:51:23.078] Gubernator results at https://gubernator.k8s.io/build/kubernetes-jenkins/pr-logs/pull/73033/pull-kubernetes-node-e2e/119387
I0211 22:51:23.078] Call:  gsutil ls gs://kubernetes-jenkins/pr-logs/pull/73033/pull-kubernetes-node-e2e/119387/artifacts
W0211 22:51:24.151] CommandException: One or more URLs matched no objects.
E0211 22:51:24.274] Command failed
I0211 22:51:24.275] process 45072 exited with code 1 after 0.0m
W0211 22:51:24.275] Remote dir gs://kubernetes-jenkins/pr-logs/pull/73033/pull-kubernetes-node-e2e/119387/artifacts not exist yet
I0211 22:51:24.275] Call:  gsutil -m -q -o GSUtil:use_magicfile=True cp -r -c -z log,txt,xml /workspace/_artifacts gs://kubernetes-jenkins/pr-logs/pull/73033/pull-kubernetes-node-e2e/119387/artifacts
I0211 22:51:27.001] process 45214 exited with code 0 after 0.0m
I0211 22:51:27.002] Call:  git rev-parse HEAD
I0211 22:51:27.006] process 45857 exited with code 0 after 0.0m
... skipping 21 lines ...