PR: alculquicondor: Fix cmd/kubelet/app lint issues
Result: FAILURE
Tests: 1 failed / 475 succeeded
Started: 2019-02-11 22:59
Elapsed: 21m59s
Revision:
Builder: gke-prow-containerd-pool-99179761-39jm
Refs: master:805a9e70, 73926:17a63544
pod: ab39d426-2e50-11e9-ad96-0a580a6c081a
infra-commit: 89e68fa6f
job-version: v1.14.0-alpha.2.538+a7ac987dd4e830
pod: ab39d426-2e50-11e9-ad96-0a580a6c081a
repo: k8s.io/kubernetes
repo-commit: a7ac987dd4e83006776cde48196d776e41e21832
repos: {u'k8s.io/kubernetes': u'master:805a9e703698d0a8a86f405f861f9e3fd91b29c6,73926:17a635448aaa804a26afca89cad420f9f2e6a7b6'}
revision: v1.14.0-alpha.2.538+a7ac987dd4e830

Test Failures


Node Tests (20m47s)

error during go run /go/src/k8s.io/kubernetes/test/e2e_node/runner/remote/run_remote.go --cleanup --logtostderr --vmodule=*=4 --ssh-env=gce --results-dir=/workspace/_artifacts --project=k8s-jkns-pr-node-e2e --zone=us-west1-b --ssh-user=prow --ssh-key=/workspace/.ssh/google_compute_engine --ginkgo-flags=--nodes=8 --focus="\[NodeConformance\]" --skip="\[Flaky\]|\[Slow\]|\[Serial\]" --flakeAttempts=2 --test_args=--kubelet-flags="--cgroups-per-qos=true --cgroup-root=/" --test-timeout=1h5m0s --image-config-file=/workspace/test-infra/jobs/e2e_node/image-config.yaml: exit status 1
				from junit_runner.xml
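
The error above is the invocation of the remote node-e2e runner (test/e2e_node/runner/remote/run_remote.go); the Ginkgo focus/skip expressions and kubelet flags it carries are what matter for reproducing the run. A minimal sketch of rerunning roughly the same node conformance subset locally, assuming the make test-e2e-node target and its FOCUS/SKIP/TEST_ARGS variables described under test/e2e_node (the variable names come from those docs, not from this job's configuration beyond what the error line already shows):

# Hedged sketch, not the exact CI invocation.
cd $GOPATH/src/k8s.io/kubernetes
make test-e2e-node \
  FOCUS='\[NodeConformance\]' \
  SKIP='\[Flaky\]|\[Slow\]|\[Serial\]' \
  TEST_ARGS='--kubelet-flags="--cgroups-per-qos=true --cgroup-root=/"'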



Passed Tests: 475

Skipped Tests: 388

Error lines from build-log.txt

... skipping 289 lines ...
W0211 23:04:52.767] I0211 23:04:52.766547    4522 utils.go:55] Install CNI on "tmp-node-e2e-f0644ef8-ubuntu-gke-1804-d1703-0-v20181113"
W0211 23:04:53.376] I0211 23:04:53.375629    4522 utils.go:68] Adding CNI configuration on "tmp-node-e2e-f0644ef8-cos-stable-63-10032-71-0"
W0211 23:04:53.956] I0211 23:04:53.956450    4522 utils.go:82] Configure iptables firewall rules on "tmp-node-e2e-f0644ef8-cos-stable-63-10032-71-0"
W0211 23:04:56.327] I0211 23:04:56.326690    4522 utils.go:117] Killing any existing node processes on "tmp-node-e2e-f0644ef8-cos-stable-63-10032-71-0"
W0211 23:04:57.510] I0211 23:04:57.510394    4522 node_e2e.go:108] GCI/COS node and GCI/COS mounter both detected, modifying --experimental-mounter-path accordingly
W0211 23:04:57.511] I0211 23:04:57.510468    4522 node_e2e.go:164] Starting tests on "tmp-node-e2e-f0644ef8-cos-stable-63-10032-71-0"
W0211 23:07:04.658] I0211 23:07:04.658560    4522 remote.go:197] Test failed unexpectedly. Attempting to retrieving system logs (only works for nodes with journald)
W0211 23:07:05.354] I0211 23:07:05.354263    4522 remote.go:202] Got the system logs from journald; copying it back...
W0211 23:07:06.310] I0211 23:07:06.309968    4522 remote.go:122] Copying test artifacts from "tmp-node-e2e-f0644ef8-ubuntu-gke-1804-d1703-0-v20181113"
W0211 23:07:07.699] I0211 23:07:07.699051    4522 run_remote.go:717] Deleting instance "tmp-node-e2e-f0644ef8-ubuntu-gke-1804-d1703-0-v20181113"
I0211 23:07:08.320] 
I0211 23:07:08.320] >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
I0211 23:07:08.321] >                              START TEST                                >
I0211 23:07:08.321] >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
I0211 23:07:08.321] Start Test Suite on Host tmp-node-e2e-f0644ef8-ubuntu-gke-1804-d1703-0-v20181113
I0211 23:07:08.321] 
I0211 23:07:08.321] Failure Finished Test Suite on Host tmp-node-e2e-f0644ef8-ubuntu-gke-1804-d1703-0-v20181113
I0211 23:07:08.322] [failed to install cni plugin on "tmp-node-e2e-f0644ef8-ubuntu-gke-1804-d1703-0-v20181113": command [ssh -o UserKnownHostsFile=/dev/null -o IdentitiesOnly=yes -o CheckHostIP=no -o StrictHostKeyChecking=no -o ServerAliveInterval=30 -o LogLevel=ERROR -i /workspace/.ssh/google_compute_engine prow@35.197.105.85 -- sudo sh -c 'mkdir -p /tmp/node-e2e-20190211T230445/cni/bin ; curl -s -L https://dl.k8s.io/network-plugins/cni-plugins-amd64-v0.6.0.tgz | tar -xz -C /tmp/node-e2e-20190211T230445/cni/bin'] failed with error: exit status 2 output: "\ngzip: stdin: unexpected end of file\ntar: Child returned status 1\ntar: Error is not recoverable: exiting now\n", command [scp -o UserKnownHostsFile=/dev/null -o IdentitiesOnly=yes -o CheckHostIP=no -o StrictHostKeyChecking=no -o ServerAliveInterval=30 -o LogLevel=ERROR -i /workspace/.ssh/google_compute_engine -r prow@35.197.105.85:/tmp/node-e2e-20190211T230445/results/*.log /workspace/_artifacts/tmp-node-e2e-f0644ef8-ubuntu-gke-1804-d1703-0-v20181113] failed with error: exit status 1]
I0211 23:07:08.322] <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<
I0211 23:07:08.322] <                              FINISH TEST                               <
I0211 23:07:08.322] <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<
I0211 23:07:08.322] 
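
The failure on this host is the CNI install step shown above: curl -s -L ... | tar -xz received a truncated download, gzip hit an unexpected end of file, tar exited with status 2, and because the suite never ran, the follow-up scp of /tmp/node-e2e-20190211T230445/results/*.log also failed. Streaming the download straight into tar means a short read only surfaces as a gzip/tar error. A purely illustrative, more defensive variant (paths and retry count are assumptions, not the runner's actual code):

# Illustrative sketch: fail on HTTP errors and retry transient network problems,
# and only extract once the archive has been fully downloaded.
set -euo pipefail
tmp=$(mktemp)
curl --fail --location --silent --show-error --retry 3 \
  -o "$tmp" https://dl.k8s.io/network-plugins/cni-plugins-amd64-v0.6.0.tgz
mkdir -p /tmp/node-e2e/cni/bin
tar -xzf "$tmp" -C /tmp/node-e2e/cni/bin
rm -f "$tmp"
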
W0211 23:07:15.536] I0211 23:07:15.536451    4522 remote.go:70] Staging test binaries on "tmp-node-e2e-f0644ef8-coreos-beta-1883-1-0-v20180911"
W0211 23:07:18.462] I0211 23:07:18.461699    4522 remote.go:97] Extracting tar on "tmp-node-e2e-f0644ef8-coreos-beta-1883-1-0-v20180911"
... skipping 1234 lines ...
I0211 23:15:43.615]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:155
I0211 23:15:43.616] STEP: Creating a kubernetes client
I0211 23:15:43.616] STEP: Building a namespace api object, basename init-container
I0211 23:15:43.616] Feb 11 23:06:36.108: INFO: Skipping waiting for service account
I0211 23:15:43.616] [BeforeEach] [k8s.io] InitContainer [NodeConformance]
I0211 23:15:43.616]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43
I0211 23:15:43.616] [It] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
I0211 23:15:43.617]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:704
I0211 23:15:43.617] STEP: creating the pod
I0211 23:15:43.617] Feb 11 23:06:36.108: INFO: PodSpec: initContainers in spec.initContainers
I0211 23:15:43.621] Feb 11 23:07:22.492: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-ae2f86c5-2e51-11e9-b3a0-42010a8a005d", GenerateName:"", Namespace:"init-container-8176", SelfLink:"/api/v1/namespaces/init-container-8176/pods/pod-init-ae2f86c5-2e51-11e9-b3a0-42010a8a005d", UID:"ae2fce51-2e51-11e9-91fd-42010a8a005d", ResourceVersion:"650", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63685523196, loc:(*time.Location)(0xa2319e0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"108976898"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume(nil), InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount(nil), VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount(nil), VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount(nil), VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"Never", 
SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc000d44f70), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"Default", NodeSelector:map[string]string(nil), ServiceAccountName:"", DeprecatedServiceAccount:"", AutomountServiceAccountToken:(*bool)(nil), NodeName:"tmp-node-e2e-f0644ef8-cos-stable-63-10032-71-0", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc0009fa240), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc000d44fe0)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc000d45010)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc000d45020), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc000d45024)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63685523196, loc:(*time.Location)(0xa2319e0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63685523196, loc:(*time.Location)(0xa2319e0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63685523196, loc:(*time.Location)(0xa2319e0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63685523196, loc:(*time.Location)(0xa2319e0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"10.138.0.93", PodIP:"10.100.0.15", StartTime:(*v1.Time)(0xc000d0ed40), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc000dae690)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc0010bc000)}, Ready:false, RestartCount:3, Image:"busybox:1.29", ImageID:"docker-pullable://busybox@sha256:e004c2cc521c95383aebb1fb5893719aa7a8eae2e7a71f316a4410784edb00a9", ContainerID:"docker://64c395bc3df4a25043683a96213e1d72893d76c3317dff44bba344459f4bb326"}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc000d0eda0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, 
LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:""}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc000d0ede0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:""}}, QOSClass:"Guaranteed"}}
I0211 23:15:43.621] [AfterEach] [k8s.io] InitContainer [NodeConformance]
I0211 23:15:43.622]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:156
I0211 23:15:43.622] Feb 11 23:07:22.492: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0211 23:15:43.622] STEP: Destroying namespace "init-container-8176" for this suite.
I0211 23:15:43.622] Feb 11 23:07:44.501: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
I0211 23:15:43.622] Feb 11 23:07:44.542: INFO: namespace init-container-8176 deletion completed in 22.046336876s
I0211 23:15:43.623] 
I0211 23:15:43.623] 
I0211 23:15:43.623] • [SLOW TEST:68.438 seconds]
I0211 23:15:43.623] [k8s.io] InitContainer [NodeConformance]
I0211 23:15:43.623] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
I0211 23:15:43.623]   should not start app containers if init containers fail on a RestartAlways pod [Conformance]
I0211 23:15:43.623]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:704
I0211 23:15:43.624] ------------------------------
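
The dump above is easier to parse knowing the shape of the pod the test creates: two init containers, init1 running /bin/false and init2 running /bin/true, in front of a single pause app container (run1). Because init1 keeps failing on a RestartAlways pod, init2 and run1 must never start, which is exactly what the dump shows (RestartCount:3 on init1, empty ContainerIDs for init2 and run1). A hedged standalone sketch of the same shape, with hypothetical names (the test itself builds the spec through the Go client):

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: init-fail-demo
spec:
  restartPolicy: Always
  initContainers:
  - name: init1
    image: busybox:1.29
    command: ["/bin/false"]
  - name: init2
    image: busybox:1.29
    command: ["/bin/true"]
  containers:
  - name: run1
    image: k8s.gcr.io/pause:3.1
EOF
# The pod stays Pending (Init:Error / Init:CrashLoopBackOff) and run1 never starts.
kubectl get pod init-fail-demo
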
I0211 23:15:43.624] SS
I0211 23:15:43.624] ------------------------------
I0211 23:15:43.624] [BeforeEach] [k8s.io] Kubelet Cgroup Manager
I0211 23:15:43.624]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:155
... skipping 2895 lines ...
I0211 23:15:44.084] STEP: Creating a kubernetes client
I0211 23:15:44.084] STEP: Building a namespace api object, basename container-runtime
I0211 23:15:44.084] Feb 11 23:11:32.147: INFO: Skipping waiting for service account
I0211 23:15:44.084] [It] should report termination message from log output if TerminationMessagePolicy FallbackToLogOnError is set [NodeConformance] [LinuxOnly]
I0211 23:15:44.084]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:205
I0211 23:15:44.085] STEP: create the container
I0211 23:15:44.085] STEP: wait for the container to reach Failed
I0211 23:15:44.085] STEP: get the container status
I0211 23:15:44.085] STEP: the container should be terminated
I0211 23:15:44.085] STEP: the termination message should be set
I0211 23:15:44.085] STEP: delete the container
I0211 23:15:44.085] [AfterEach] [k8s.io] Container Runtime
I0211 23:15:44.086]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:156
... skipping 94 lines ...
I0211 23:15:44.100] STEP: verifying the pod is in kubernetes
I0211 23:15:44.100] STEP: updating the pod
I0211 23:15:44.100] Feb 11 23:11:38.909: INFO: Successfully updated pod "pod-update-activedeadlineseconds-61233a69-2e52-11e9-9f2d-42010a8a005d"
I0211 23:15:44.100] Feb 11 23:11:38.909: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-61233a69-2e52-11e9-9f2d-42010a8a005d" in namespace "pods-2671" to be "terminated due to deadline exceeded"
I0211 23:15:44.100] Feb 11 23:11:38.911: INFO: Pod "pod-update-activedeadlineseconds-61233a69-2e52-11e9-9f2d-42010a8a005d": Phase="Running", Reason="", readiness=true. Elapsed: 1.53668ms
I0211 23:15:44.101] Feb 11 23:11:40.954: INFO: Pod "pod-update-activedeadlineseconds-61233a69-2e52-11e9-9f2d-42010a8a005d": Phase="Running", Reason="", readiness=true. Elapsed: 2.045006019s
I0211 23:15:44.101] Feb 11 23:11:42.957: INFO: Pod "pod-update-activedeadlineseconds-61233a69-2e52-11e9-9f2d-42010a8a005d": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 4.048176897s
I0211 23:15:44.101] Feb 11 23:11:42.957: INFO: Pod "pod-update-activedeadlineseconds-61233a69-2e52-11e9-9f2d-42010a8a005d" satisfied condition "terminated due to deadline exceeded"
I0211 23:15:44.101] [AfterEach] [k8s.io] Pods
I0211 23:15:44.101]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:156
I0211 23:15:44.101] Feb 11 23:11:42.957: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0211 23:15:44.102] STEP: Destroying namespace "pods-2671" for this suite.
I0211 23:15:44.102] Feb 11 23:11:48.964: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
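
The pods-2671 excerpt above exercises spec.activeDeadlineSeconds: the test updates a running pod with a short deadline and then waits for the kubelet to move it to Phase=Failed with Reason=DeadlineExceeded ("terminated due to deadline exceeded"). A minimal standalone sketch of the same behaviour, with hypothetical names and the deadline set at creation time rather than via an update as the test does:

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: deadline-demo
spec:
  activeDeadlineSeconds: 5
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.1
EOF
# After roughly five seconds the kubelet kills the pod:
kubectl get pod deadline-demo -o jsonpath='{.status.phase} {.status.reason}'
# Expected: Failed DeadlineExceeded
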
... skipping 238 lines ...
I0211 23:15:44.138]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:155
I0211 23:15:44.138] STEP: Creating a kubernetes client
I0211 23:15:44.138] STEP: Building a namespace api object, basename init-container
I0211 23:15:44.139] Feb 11 23:12:00.813: INFO: Skipping waiting for service account
I0211 23:15:44.139] [BeforeEach] [k8s.io] InitContainer [NodeConformance]
I0211 23:15:44.139]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43
I0211 23:15:44.139] [It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
I0211 23:15:44.139]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:704
I0211 23:15:44.139] STEP: creating the pod
I0211 23:15:44.139] Feb 11 23:12:00.813: INFO: PodSpec: initContainers in spec.initContainers
I0211 23:15:44.139] [AfterEach] [k8s.io] InitContainer [NodeConformance]
I0211 23:15:44.139]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:156
I0211 23:15:44.140] Feb 11 23:12:04.229: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 2 lines ...
I0211 23:15:44.140] Feb 11 23:12:10.279: INFO: namespace init-container-4001 deletion completed in 6.048574786s
I0211 23:15:44.140] 
I0211 23:15:44.140] 
I0211 23:15:44.140] • [SLOW TEST:9.469 seconds]
I0211 23:15:44.140] [k8s.io] InitContainer [NodeConformance]
I0211 23:15:44.140] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
I0211 23:15:44.141]   should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
I0211 23:15:44.141]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:704
I0211 23:15:44.141] ------------------------------
I0211 23:15:44.141] SSS
I0211 23:15:44.141] ------------------------------
I0211 23:15:44.141] [BeforeEach] [sig-storage] EmptyDir volumes
I0211 23:15:44.141]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:155
... skipping 492 lines ...
I0211 23:15:44.212] [BeforeEach] [k8s.io] Security Context
I0211 23:15:44.212]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:35
I0211 23:15:44.212] [It] should run the container with readonly rootfs when readOnlyRootFilesystem=true [LinuxOnly] [NodeConformance]
I0211 23:15:44.212]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:138
I0211 23:15:44.213] Feb 11 23:12:47.957: INFO: Waiting up to 5m0s for pod "busybox-readonly-true-8bd22118-2e52-11e9-9f2d-42010a8a005d" in namespace "security-context-test-592" to be "success or failure"
I0211 23:15:44.213] Feb 11 23:12:47.965: INFO: Pod "busybox-readonly-true-8bd22118-2e52-11e9-9f2d-42010a8a005d": Phase="Pending", Reason="", readiness=false. Elapsed: 8.168006ms
I0211 23:15:44.213] Feb 11 23:12:49.967: INFO: Pod "busybox-readonly-true-8bd22118-2e52-11e9-9f2d-42010a8a005d": Phase="Failed", Reason="", readiness=false. Elapsed: 2.010568013s
I0211 23:15:44.213] Feb 11 23:12:49.967: INFO: Pod "busybox-readonly-true-8bd22118-2e52-11e9-9f2d-42010a8a005d" satisfied condition "success or failure"
I0211 23:15:44.213] [AfterEach] [k8s.io] Security Context
I0211 23:15:44.213]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:156
I0211 23:15:44.214] Feb 11 23:12:49.967: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0211 23:15:44.214] STEP: Destroying namespace "security-context-test-592" for this suite.
I0211 23:15:44.214] Feb 11 23:12:55.974: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
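
The security-context-test-592 excerpt above is the readOnlyRootFilesystem=true case: the busybox container's write attempt fails, the container exits non-zero, and the pod reaching Phase=Failed is the expected ("success or failure") outcome the test waits for. A hedged equivalent pod, with a hypothetical name and write command (the e2e test constructs its own busybox command):

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: busybox-readonly-demo
spec:
  restartPolicy: Never
  containers:
  - name: busybox
    image: busybox:1.29
    command: ["/bin/sh", "-c", "touch /should-fail"]
    securityContext:
      readOnlyRootFilesystem: true
EOF
# The write fails on the read-only root filesystem, so the pod ends up Failed:
kubectl get pod busybox-readonly-demo -o jsonpath='{.status.phase}'
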
... skipping 258 lines ...
I0211 23:15:44.255]   should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
I0211 23:15:44.255]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:704
I0211 23:15:44.255] ------------------------------
I0211 23:15:44.255] I0211 23:15:35.540084    1313 e2e_node_suite_test.go:185] Stopping node services...
I0211 23:15:44.255] I0211 23:15:35.540173    1313 server.go:258] Kill server "services"
I0211 23:15:44.256] I0211 23:15:35.540186    1313 server.go:295] Killing process 1816 (services) with -TERM
I0211 23:15:44.256] E0211 23:15:35.669179    1313 services.go:89] Failed to stop services: error stopping "services": waitid: no child processes
I0211 23:15:44.256] I0211 23:15:35.669200    1313 server.go:258] Kill server "kubelet"
I0211 23:15:44.256] I0211 23:15:35.681483    1313 services.go:146] Fetching log files...
I0211 23:15:44.256] I0211 23:15:35.681550    1313 services.go:155] Get log file "docker.log" with journalctl command [-u docker].
I0211 23:15:44.257] I0211 23:15:35.782417    1313 services.go:155] Get log file "kubelet.log" with journalctl command [-u kubelet-20190211T230445.service].
I0211 23:15:44.257] I0211 23:15:36.762265    1313 services.go:155] Get log file "kern.log" with journalctl command [-k].
I0211 23:15:44.257] I0211 23:15:36.809482    1313 services.go:155] Get log file "cloud-init.log" with journalctl command [-u cloud*].
I0211 23:15:44.257] I0211 23:15:37.348263    1313 e2e_node_suite_test.go:190] Tests Finished
I0211 23:15:44.257] 
I0211 23:15:44.257] 
I0211 23:15:44.257] Ran 156 of 286 Specs in 634.009 seconds
I0211 23:15:44.257] SUCCESS! -- 156 Passed | 0 Failed | 0 Flaked | 0 Pending | 130 Skipped 
I0211 23:15:44.257] 
I0211 23:15:44.257] Ginkgo ran 1 suite in 10m39.207335983s
I0211 23:15:44.258] Test Suite Passed
I0211 23:15:44.258] 
I0211 23:15:44.258] Success Finished Test Suite on Host tmp-node-e2e-f0644ef8-cos-stable-63-10032-71-0
I0211 23:15:44.258] <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<
... skipping 440 lines ...
I0211 23:20:59.102] [BeforeEach] [k8s.io] Security Context
I0211 23:20:59.102]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:35
I0211 23:20:59.102] [It] should run the container with readonly rootfs when readOnlyRootFilesystem=true [LinuxOnly] [NodeConformance]
I0211 23:20:59.102]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:138
I0211 23:20:59.103] Feb 11 23:09:43.489: INFO: Waiting up to 5m0s for pod "busybox-readonly-true-1dd3de80-2e52-11e9-a34b-42010a8a005c" in namespace "security-context-test-5139" to be "success or failure"
I0211 23:20:59.103] Feb 11 23:09:43.491: INFO: Pod "busybox-readonly-true-1dd3de80-2e52-11e9-a34b-42010a8a005c": Phase="Pending", Reason="", readiness=false. Elapsed: 1.388629ms
I0211 23:20:59.103] Feb 11 23:09:45.494: INFO: Pod "busybox-readonly-true-1dd3de80-2e52-11e9-a34b-42010a8a005c": Phase="Failed", Reason="", readiness=false. Elapsed: 2.004934635s
I0211 23:20:59.103] Feb 11 23:09:45.494: INFO: Pod "busybox-readonly-true-1dd3de80-2e52-11e9-a34b-42010a8a005c" satisfied condition "success or failure"
I0211 23:20:59.103] [AfterEach] [k8s.io] Security Context
I0211 23:20:59.103]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:156
I0211 23:20:59.103] Feb 11 23:09:45.494: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0211 23:20:59.104] STEP: Destroying namespace "security-context-test-5139" for this suite.
I0211 23:20:59.104] Feb 11 23:09:51.503: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
... skipping 512 lines ...
I0211 23:20:59.166] STEP: verifying the pod is in kubernetes
I0211 23:20:59.167] STEP: updating the pod
I0211 23:20:59.167] Feb 11 23:10:17.720: INFO: Successfully updated pod "pod-update-activedeadlineseconds-30bd08dd-2e52-11e9-83b8-42010a8a005c"
I0211 23:20:59.167] Feb 11 23:10:17.720: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-30bd08dd-2e52-11e9-83b8-42010a8a005c" in namespace "pods-2227" to be "terminated due to deadline exceeded"
I0211 23:20:59.167] Feb 11 23:10:17.722: INFO: Pod "pod-update-activedeadlineseconds-30bd08dd-2e52-11e9-83b8-42010a8a005c": Phase="Running", Reason="", readiness=true. Elapsed: 1.440089ms
I0211 23:20:59.167] Feb 11 23:10:19.724: INFO: Pod "pod-update-activedeadlineseconds-30bd08dd-2e52-11e9-83b8-42010a8a005c": Phase="Running", Reason="", readiness=true. Elapsed: 2.003631806s
I0211 23:20:59.167] Feb 11 23:10:21.726: INFO: Pod "pod-update-activedeadlineseconds-30bd08dd-2e52-11e9-83b8-42010a8a005c": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 4.005944119s
I0211 23:20:59.168] Feb 11 23:10:21.726: INFO: Pod "pod-update-activedeadlineseconds-30bd08dd-2e52-11e9-83b8-42010a8a005c" satisfied condition "terminated due to deadline exceeded"
I0211 23:20:59.168] [AfterEach] [k8s.io] Pods
I0211 23:20:59.168]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:156
I0211 23:20:59.168] Feb 11 23:10:21.726: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0211 23:20:59.168] STEP: Destroying namespace "pods-2227" for this suite.
I0211 23:20:59.168] Feb 11 23:10:27.733: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
... skipping 2249 lines ...
I0211 23:20:59.435]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:155
I0211 23:20:59.435] STEP: Creating a kubernetes client
I0211 23:20:59.435] STEP: Building a namespace api object, basename init-container
I0211 23:20:59.435] Feb 11 23:13:38.134: INFO: Skipping waiting for service account
I0211 23:20:59.435] [BeforeEach] [k8s.io] InitContainer [NodeConformance]
I0211 23:20:59.435]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43
I0211 23:20:59.435] [It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
I0211 23:20:59.435]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:704
I0211 23:20:59.436] STEP: creating the pod
I0211 23:20:59.436] Feb 11 23:13:38.134: INFO: PodSpec: initContainers in spec.initContainers
I0211 23:20:59.436] [AfterEach] [k8s.io] InitContainer [NodeConformance]
I0211 23:20:59.436]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:156
I0211 23:20:59.436] Feb 11 23:13:41.805: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 2 lines ...
I0211 23:20:59.436] Feb 11 23:13:49.882: INFO: namespace init-container-1051 deletion completed in 8.046053655s
I0211 23:20:59.437] 
I0211 23:20:59.437] 
I0211 23:20:59.437] • [SLOW TEST:11.755 seconds]
I0211 23:20:59.437] [k8s.io] InitContainer [NodeConformance]
I0211 23:20:59.437] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
I0211 23:20:59.437]   should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
I0211 23:20:59.437]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:704
I0211 23:20:59.437] ------------------------------
I0211 23:20:59.437] [BeforeEach] [sig-network] Networking
I0211 23:20:59.438]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:155
I0211 23:20:59.438] STEP: Creating a kubernetes client
I0211 23:20:59.438] STEP: Building a namespace api object, basename pod-network-test
... skipping 1364 lines ...
I0211 23:20:59.606] STEP: Creating a kubernetes client
I0211 23:20:59.606] STEP: Building a namespace api object, basename container-runtime
I0211 23:20:59.606] Feb 11 23:16:09.115: INFO: Skipping waiting for service account
I0211 23:20:59.606] [It] should report termination message from log output if TerminationMessagePolicy FallbackToLogOnError is set [NodeConformance] [LinuxOnly]
I0211 23:20:59.607]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:205
I0211 23:20:59.607] STEP: create the container
I0211 23:20:59.607] STEP: wait for the container to reach Failed
I0211 23:20:59.607] STEP: get the container status
I0211 23:20:59.607] STEP: the container should be terminated
I0211 23:20:59.607] STEP: the termination message should be set
I0211 23:20:59.607] STEP: delete the container
I0211 23:20:59.607] [AfterEach] [k8s.io] Container Runtime
I0211 23:20:59.607]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:156
... skipping 194 lines ...
I0211 23:20:59.631]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:155
I0211 23:20:59.631] STEP: Creating a kubernetes client
I0211 23:20:59.631] STEP: Building a namespace api object, basename init-container
I0211 23:20:59.631] Feb 11 23:15:27.591: INFO: Skipping waiting for service account
I0211 23:20:59.631] [BeforeEach] [k8s.io] InitContainer [NodeConformance]
I0211 23:20:59.631]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43
I0211 23:20:59.632] [It] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
I0211 23:20:59.632]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:704
I0211 23:20:59.632] STEP: creating the pod
I0211 23:20:59.632] Feb 11 23:15:27.591: INFO: PodSpec: initContainers in spec.initContainers
I0211 23:20:59.636] Feb 11 23:16:13.244: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-eaf9524e-2e52-11e9-83b8-42010a8a005c", GenerateName:"", Namespace:"init-container-4905", SelfLink:"/api/v1/namespaces/init-container-4905/pods/pod-init-eaf9524e-2e52-11e9-83b8-42010a8a005c", UID:"eb02dfc3-2e52-11e9-98d0-42010a8a005c", ResourceVersion:"2999", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63685523727, loc:(*time.Location)(0xa2319e0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"591486305"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume(nil), InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount(nil), VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount(nil), VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount(nil), VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"Never", 
SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc000c3bd60), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"Default", NodeSelector:map[string]string(nil), ServiceAccountName:"", DeprecatedServiceAccount:"", AutomountServiceAccountToken:(*bool)(nil), NodeName:"tmp-node-e2e-f0644ef8-cos-stable-60-9592-84-0", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc0014d6000), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc000c3beb0)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc000c3bf50)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc000cee020), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc000cee024)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63685523727, loc:(*time.Location)(0xa2319e0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63685523727, loc:(*time.Location)(0xa2319e0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63685523727, loc:(*time.Location)(0xa2319e0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63685523727, loc:(*time.Location)(0xa2319e0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"10.138.0.92", PodIP:"10.100.0.150", StartTime:(*v1.Time)(0xc000afc360), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc001514070)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc0015140e0)}, Ready:false, RestartCount:3, Image:"busybox:1.29", ImageID:"docker-pullable://busybox@sha256:e004c2cc521c95383aebb1fb5893719aa7a8eae2e7a71f316a4410784edb00a9", ContainerID:"docker://55377d7c972b0a35a6161dbf894e352b13266e31c5e9247d40f071d50a8bde42"}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc000afc400), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, 
LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:""}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc000afc440), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:""}}, QOSClass:"Guaranteed"}}
I0211 23:20:59.636] [AfterEach] [k8s.io] InitContainer [NodeConformance]
I0211 23:20:59.636]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:156
I0211 23:20:59.637] Feb 11 23:16:13.244: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0211 23:20:59.637] STEP: Destroying namespace "init-container-4905" for this suite.
I0211 23:20:59.637] Feb 11 23:16:35.273: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
I0211 23:20:59.637] Feb 11 23:16:35.314: INFO: namespace init-container-4905 deletion completed in 22.04814546s
I0211 23:20:59.637] 
I0211 23:20:59.637] 
I0211 23:20:59.637] • [SLOW TEST:67.726 seconds]
I0211 23:20:59.637] [k8s.io] InitContainer [NodeConformance]
I0211 23:20:59.637] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
I0211 23:20:59.638]   should not start app containers if init containers fail on a RestartAlways pod [Conformance]
I0211 23:20:59.638]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:704
I0211 23:20:59.638] ------------------------------
I0211 23:20:59.638] [BeforeEach] [k8s.io] Docker Containers
I0211 23:20:59.638]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:155
I0211 23:20:59.638] STEP: Creating a kubernetes client
I0211 23:20:59.638] STEP: Building a namespace api object, basename containers
... skipping 387 lines ...
I0211 23:20:59.684] Feb 11 23:15:43.254: INFO: Skipping waiting for service account
I0211 23:20:59.685] [It] should not be able to pull from private registry without secret [NodeConformance]
I0211 23:20:59.685]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:302
I0211 23:20:59.685] STEP: create the container
I0211 23:20:59.685] STEP: check the container status
I0211 23:20:59.685] STEP: delete the container
I0211 23:20:59.685] Feb 11 23:20:44.079: INFO: No.1 attempt failed: expected container state: Waiting, got: "Running", retrying...
I0211 23:20:59.685] STEP: create the container
I0211 23:20:59.685] STEP: check the container status
I0211 23:20:59.685] STEP: delete the container
I0211 23:20:59.685] [AfterEach] [k8s.io] Container Runtime
I0211 23:20:59.686]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:156
I0211 23:20:59.686] Feb 11 23:20:45.098: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 12 lines ...
I0211 23:20:59.687]       should not be able to pull from private registry without secret [NodeConformance]
I0211 23:20:59.687]       /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:302
I0211 23:20:59.687] ------------------------------
I0211 23:20:59.688] I0211 23:20:51.203662    1305 e2e_node_suite_test.go:185] Stopping node services...
I0211 23:20:59.688] I0211 23:20:51.203688    1305 server.go:258] Kill server "services"
I0211 23:20:59.688] I0211 23:20:51.203698    1305 server.go:295] Killing process 1832 (services) with -TERM
I0211 23:20:59.688] E0211 23:20:51.387830    1305 services.go:89] Failed to stop services: error stopping "services": waitid: no child processes
I0211 23:20:59.688] I0211 23:20:51.387847    1305 server.go:258] Kill server "kubelet"
I0211 23:20:59.688] I0211 23:20:51.396175    1305 services.go:146] Fetching log files...
I0211 23:20:59.688] I0211 23:20:51.396226    1305 services.go:155] Get log file "docker.log" with journalctl command [-u docker].
I0211 23:20:59.688] I0211 23:20:51.503523    1305 services.go:155] Get log file "kubelet.log" with journalctl command [-u kubelet-20190211T230739.service].
I0211 23:20:59.689] I0211 23:20:52.354667    1305 services.go:155] Get log file "kern.log" with journalctl command [-k].
I0211 23:20:59.689] I0211 23:20:52.393289    1305 services.go:155] Get log file "cloud-init.log" with journalctl command [-u cloud*].
I0211 23:20:59.689] I0211 23:20:52.878534    1305 e2e_node_suite_test.go:190] Tests Finished
I0211 23:20:59.689] 
I0211 23:20:59.689] 
I0211 23:20:59.689] Ran 156 of 286 Specs in 776.741 seconds
I0211 23:20:59.689] SUCCESS! -- 156 Passed | 0 Failed | 0 Flaked | 0 Pending | 130 Skipped 
I0211 23:20:59.689] 
I0211 23:20:59.689] Ginkgo ran 1 suite in 13m0.730887089s
I0211 23:20:59.690] Test Suite Passed
I0211 23:20:59.690] 
I0211 23:20:59.690] Success Finished Test Suite on Host tmp-node-e2e-f0644ef8-cos-stable-60-9592-84-0
I0211 23:20:59.690] <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<
... skipping 54 lines ...
I0211 23:21:26.447] Validating docker...
I0211 23:21:26.447] DOCKER_VERSION: 18.06.1-ce
I0211 23:21:26.447] DOCKER_GRAPH_DRIVER: overlay2
I0211 23:21:26.447] PASS
I0211 23:21:26.447] I0211 23:07:30.504065    1318 e2e_node_suite_test.go:149] Pre-pulling images so that they are cached for the tests.
I0211 23:21:26.448] I0211 23:07:30.504098    1318 image_list.go:131] Pre-pulling images with docker [docker.io/library/busybox:1.29 docker.io/library/nginx:1.14-alpine gcr.io/kubernetes-e2e-test-images/entrypoint-tester:1.0 gcr.io/kubernetes-e2e-test-images/hostexec:1.1 gcr.io/kubernetes-e2e-test-images/ipc-utils:1.0 gcr.io/kubernetes-e2e-test-images/liveness:1.0 gcr.io/kubernetes-e2e-test-images/mounttest-user:1.0 gcr.io/kubernetes-e2e-test-images/mounttest:1.0 gcr.io/kubernetes-e2e-test-images/net:1.0 gcr.io/kubernetes-e2e-test-images/netexec:1.1 gcr.io/kubernetes-e2e-test-images/node-perf/npb-ep-amd64:1.0 gcr.io/kubernetes-e2e-test-images/node-perf/npb-is-amd64:1.0 gcr.io/kubernetes-e2e-test-images/node-perf/tf-wide-deep-amd64:1.0 gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0 gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 gcr.io/kubernetes-e2e-test-images/test-webserver:1.0 gcr.io/kubernetes-e2e-test-images/volume/gluster:1.0 gcr.io/kubernetes-e2e-test-images/volume/nfs:1.0 google/cadvisor:latest k8s.gcr.io/busybox@sha256:4bdd623e848417d96127e16037743f0cd8b528c026e9175e22a84f639eca58ff k8s.gcr.io/node-problem-detector:v0.4.1 k8s.gcr.io/nvidia-gpu-device-plugin@sha256:0842734032018be107fa2490c98156992911e3e1f2a21e059ff0105b07dd8e9e k8s.gcr.io/pause:3.1 k8s.gcr.io/stress:v1]
I0211 23:21:26.448] W0211 23:08:05.350374    1318 image_list.go:144] Failed to pull gcr.io/kubernetes-e2e-test-images/entrypoint-tester:1.0 as user "root", retrying in 1s (1 of 5): exit status 1
I0211 23:21:26.448] W0211 23:08:21.449210    1318 image_list.go:144] Failed to pull gcr.io/kubernetes-e2e-test-images/entrypoint-tester:1.0 as user "root", retrying in 1s (2 of 5): exit status 1
I0211 23:21:26.449] W0211 23:08:52.572919    1318 image_list.go:144] Failed to pull gcr.io/kubernetes-e2e-test-images/entrypoint-tester:1.0 as user "root", retrying in 1s (3 of 5): exit status 1
I0211 23:21:26.449] W0211 23:09:54.545191    1318 image_list.go:144] Failed to pull gcr.io/kubernetes-e2e-test-images/hostexec:1.1 as user "root", retrying in 1s (1 of 5): exit status 1
I0211 23:21:26.449] I0211 23:11:37.666618    1318 e2e_node_suite_test.go:219] Locksmithd is masked successfully
I0211 23:21:26.449] I0211 23:11:37.666639    1318 kubelet.go:108] Starting kubelet
I0211 23:21:26.450] I0211 23:11:37.666686    1318 feature_gate.go:226] feature gates: &{map[]}
I0211 23:21:26.450] I0211 23:11:37.689148    1318 server.go:102] Starting server "kubelet" with command "/bin/systemd-run --unit=kubelet-20190211T230715.service --slice=runtime.slice --remain-after-exit /tmp/node-e2e-20190211T230715/kubelet --kubeconfig /tmp/node-e2e-20190211T230715/kubeconfig --root-dir /var/lib/kubelet --v 4 --logtostderr --allow-privileged=true --dynamic-config-dir /tmp/node-e2e-20190211T230715/dynamic-kubelet-config --network-plugin=kubenet --cni-bin-dir /tmp/node-e2e-20190211T230715/cni/bin --cni-conf-dir /tmp/node-e2e-20190211T230715/cni/net.d --hostname-override tmp-node-e2e-f0644ef8-coreos-beta-1883-1-0-v20180911 --container-runtime docker --container-runtime-endpoint unix:///var/run/dockershim.sock --config /tmp/node-e2e-20190211T230715/kubelet-config --cgroups-per-qos=true --cgroup-root=/"
I0211 23:21:26.450] I0211 23:11:37.689200    1318 util.go:44] Running readiness check for service "kubelet"
I0211 23:21:26.451] I0211 23:11:37.689810    1318 server.go:130] Output file for server "kubelet": /tmp/node-e2e-20190211T230715/results/kubelet.log
... skipping 443 lines ...
I0211 23:21:26.528] STEP: Creating a kubernetes client
I0211 23:21:26.528] STEP: Building a namespace api object, basename container-runtime
I0211 23:21:26.528] Feb 11 23:12:16.768: INFO: Skipping waiting for service account
I0211 23:21:26.528] [It] should report termination message from log output if TerminationMessagePolicy FallbackToLogOnError is set [NodeConformance] [LinuxOnly]
I0211 23:21:26.529]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:205
I0211 23:21:26.529] STEP: create the container
I0211 23:21:26.529] STEP: wait for the container to reach Failed
I0211 23:21:26.529] STEP: get the container status
I0211 23:21:26.529] STEP: the container should be terminated
I0211 23:21:26.529] STEP: the termination message should be set
I0211 23:21:26.529] STEP: delete the container
I0211 23:21:26.529] [AfterEach] [k8s.io] Container Runtime
I0211 23:21:26.529]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:156
... skipping 730 lines ...
I0211 23:21:26.645]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:35
I0211 23:21:26.645] [It] should run the container with readonly rootfs when readOnlyRootFilesystem=true [LinuxOnly] [NodeConformance]
I0211 23:21:26.645]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:138
I0211 23:21:26.646] Feb 11 23:13:26.238: INFO: Waiting up to 5m0s for pod "busybox-readonly-true-a29b0e22-2e52-11e9-ae06-42010a8a005b" in namespace "security-context-test-5923" to be "success or failure"
I0211 23:21:26.646] Feb 11 23:13:26.239: INFO: Pod "busybox-readonly-true-a29b0e22-2e52-11e9-ae06-42010a8a005b": Phase="Pending", Reason="", readiness=false. Elapsed: 1.664229ms
I0211 23:21:26.646] Feb 11 23:13:28.241: INFO: Pod "busybox-readonly-true-a29b0e22-2e52-11e9-ae06-42010a8a005b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.003486865s
I0211 23:21:26.646] Feb 11 23:13:30.245: INFO: Pod "busybox-readonly-true-a29b0e22-2e52-11e9-ae06-42010a8a005b": Phase="Failed", Reason="", readiness=false. Elapsed: 4.007362577s
I0211 23:21:26.646] Feb 11 23:13:30.245: INFO: Pod "busybox-readonly-true-a29b0e22-2e52-11e9-ae06-42010a8a005b" satisfied condition "success or failure"
I0211 23:21:26.646] [AfterEach] [k8s.io] Security Context
I0211 23:21:26.646]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:156
I0211 23:21:26.647] Feb 11 23:13:30.245: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0211 23:21:26.647] STEP: Destroying namespace "security-context-test-5923" for this suite.
I0211 23:21:26.647] Feb 11 23:13:36.252: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
... skipping 1251 lines ...
I0211 23:21:26.826]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:155
I0211 23:21:26.826] STEP: Creating a kubernetes client
I0211 23:21:26.826] STEP: Building a namespace api object, basename init-container
I0211 23:21:26.826] Feb 11 23:14:48.794: INFO: Skipping waiting for service account
I0211 23:21:26.826] [BeforeEach] [k8s.io] InitContainer [NodeConformance]
I0211 23:21:26.827]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43
I0211 23:21:26.827] [It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
I0211 23:21:26.827]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:704
I0211 23:21:26.827] STEP: creating the pod
I0211 23:21:26.827] Feb 11 23:14:48.794: INFO: PodSpec: initContainers in spec.initContainers
I0211 23:21:26.827] [AfterEach] [k8s.io] InitContainer [NodeConformance]
I0211 23:21:26.827]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:156
I0211 23:21:26.827] Feb 11 23:14:51.601: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 2 lines ...
I0211 23:21:26.828] Feb 11 23:14:57.715: INFO: namespace init-container-2149 deletion completed in 6.107503207s
I0211 23:21:26.828] 
I0211 23:21:26.828] 
I0211 23:21:26.828] • [SLOW TEST:8.924 seconds]
I0211 23:21:26.828] [k8s.io] InitContainer [NodeConformance]
I0211 23:21:26.828] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
I0211 23:21:26.828]   should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
I0211 23:21:26.828]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:704
I0211 23:21:26.829] ------------------------------
I0211 23:21:26.829] S
I0211 23:21:26.829] ------------------------------
I0211 23:21:26.829] [BeforeEach] [sig-storage] ConfigMap
I0211 23:21:26.829]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:155
... skipping 489 lines ...
I0211 23:21:26.895] STEP: submitting the pod to kubernetes
I0211 23:21:26.895] STEP: verifying the pod is in kubernetes
I0211 23:21:26.896] STEP: updating the pod
I0211 23:21:26.896] Feb 11 23:15:38.351: INFO: Successfully updated pod "pod-update-activedeadlineseconds-efd9dd91-2e52-11e9-a4a4-42010a8a005b"
I0211 23:21:26.896] Feb 11 23:15:38.351: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-efd9dd91-2e52-11e9-a4a4-42010a8a005b" in namespace "pods-5109" to be "terminated due to deadline exceeded"
I0211 23:21:26.896] Feb 11 23:15:38.352: INFO: Pod "pod-update-activedeadlineseconds-efd9dd91-2e52-11e9-a4a4-42010a8a005b": Phase="Running", Reason="", readiness=true. Elapsed: 1.458499ms
I0211 23:21:26.896] Feb 11 23:15:40.354: INFO: Pod "pod-update-activedeadlineseconds-efd9dd91-2e52-11e9-a4a4-42010a8a005b": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.003476374s
I0211 23:21:26.896] Feb 11 23:15:40.354: INFO: Pod "pod-update-activedeadlineseconds-efd9dd91-2e52-11e9-a4a4-42010a8a005b" satisfied condition "terminated due to deadline exceeded"
I0211 23:21:26.897] [AfterEach] [k8s.io] Pods
I0211 23:21:26.897]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:156
I0211 23:21:26.897] Feb 11 23:15:40.354: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0211 23:21:26.897] STEP: Destroying namespace "pods-5109" for this suite.
I0211 23:21:26.897] Feb 11 23:15:46.364: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
... skipping 275 lines ...
I0211 23:21:26.943]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:155
I0211 23:21:26.943] STEP: Creating a kubernetes client
I0211 23:21:26.943] STEP: Building a namespace api object, basename init-container
I0211 23:21:26.944] Feb 11 23:15:07.839: INFO: Skipping waiting for service account
I0211 23:21:26.944] [BeforeEach] [k8s.io] InitContainer [NodeConformance]
I0211 23:21:26.944]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43
I0211 23:21:26.944] [It] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
I0211 23:21:26.944]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:704
I0211 23:21:26.944] STEP: creating the pod
I0211 23:21:26.945] Feb 11 23:15:07.839: INFO: PodSpec: initContainers in spec.initContainers
I0211 23:21:26.949] Feb 11 23:15:50.419: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-df3367dd-2e52-11e9-9b43-42010a8a005b", GenerateName:"", Namespace:"init-container-1188", SelfLink:"/api/v1/namespaces/init-container-1188/pods/pod-init-df3367dd-2e52-11e9-9b43-42010a8a005b", UID:"df3be2c9-2e52-11e9-9593-42010a8a005b", ResourceVersion:"1945", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63685523707, loc:(*time.Location)(0xa2319e0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"time":"839488440", "name":"foo"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume(nil), InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount(nil), VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount(nil), VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}, "cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount(nil), VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"Never", 
SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc000abb970), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"Default", NodeSelector:map[string]string(nil), ServiceAccountName:"", DeprecatedServiceAccount:"", AutomountServiceAccountToken:(*bool)(nil), NodeName:"tmp-node-e2e-f0644ef8-coreos-beta-1883-1-0-v20180911", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc000ccc660), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc000abb9e0)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc000abba00)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc000abba10), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc000abba14)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63685523707, loc:(*time.Location)(0xa2319e0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63685523707, loc:(*time.Location)(0xa2319e0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63685523707, loc:(*time.Location)(0xa2319e0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63685523707, loc:(*time.Location)(0xa2319e0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"10.138.0.91", PodIP:"10.100.0.95", StartTime:(*v1.Time)(0xc000bee600), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc000523a40)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc000523ab0)}, Ready:false, RestartCount:3, Image:"busybox:1.29", ImageID:"docker-pullable://busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"docker://0231e73b75c003eb92f1b45edca886accc9b402bb06f1941dd29de7bea1c7fce"}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc000bee660), Running:(*v1.ContainerStateRunning)(nil), 
Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:""}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc000bee6a0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:""}}, QOSClass:"Guaranteed"}}
I0211 23:21:26.949] [AfterEach] [k8s.io] InitContainer [NodeConformance]
I0211 23:21:26.950]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:156
I0211 23:21:26.950] Feb 11 23:15:50.420: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0211 23:21:26.950] STEP: Destroying namespace "init-container-1188" for this suite.
I0211 23:21:26.950] Feb 11 23:16:12.434: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
I0211 23:21:26.950] Feb 11 23:16:12.587: INFO: namespace init-container-1188 deletion completed in 22.1622438s
I0211 23:21:26.951] 
I0211 23:21:26.951] 
I0211 23:21:26.951] • [SLOW TEST:64.751 seconds]
I0211 23:21:26.951] [k8s.io] InitContainer [NodeConformance]
I0211 23:21:26.951] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
I0211 23:21:26.951]   should not start app containers if init containers fail on a RestartAlways pod [Conformance]
I0211 23:21:26.951]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:704
I0211 23:21:26.952] ------------------------------
I0211 23:21:26.952] [BeforeEach] [sig-storage] ConfigMap
I0211 23:21:26.952]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:155
I0211 23:21:26.952] STEP: Creating a kubernetes client
I0211 23:21:26.952] STEP: Building a namespace api object, basename configmap
... skipping 1879 lines ...
I0211 23:21:27.248] Feb 11 23:15:17.323: INFO: Skipping waiting for service account
I0211 23:21:27.249] [It] should not be able to pull from private registry without secret [NodeConformance]
I0211 23:21:27.249]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:302
I0211 23:21:27.249] STEP: create the container
I0211 23:21:27.249] STEP: check the container status
I0211 23:21:27.249] STEP: delete the container
I0211 23:21:27.249] Feb 11 23:20:18.257: INFO: No.1 attempt failed: expected container state: Waiting, got: "Running", retrying...
I0211 23:21:27.250] STEP: create the container
I0211 23:21:27.250] STEP: check the container status
I0211 23:21:27.250] STEP: delete the container
I0211 23:21:27.250] [AfterEach] [k8s.io] Container Runtime
I0211 23:21:27.250]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:156
I0211 23:21:27.250] Feb 11 23:20:20.301: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 43 lines ...
I0211 23:21:27.257] I0211 23:21:19.700704    1318 e2e_node_suite_test.go:185] Stopping node services...
I0211 23:21:27.257] I0211 23:21:19.700736    1318 server.go:258] Kill server "services"
I0211 23:21:27.258] I0211 23:21:19.700746    1318 server.go:295] Killing process 2111 (services) with -TERM
I0211 23:21:27.258] I0211 23:21:19.862698    1318 server.go:258] Kill server "kubelet"
I0211 23:21:27.258] I0211 23:21:19.871988    1318 services.go:146] Fetching log files...
I0211 23:21:27.258] I0211 23:21:19.872098    1318 services.go:155] Get log file "cloud-init.log" with journalctl command [-u cloud*].
I0211 23:21:27.258] E0211 23:21:20.005994    1318 services.go:158] failed to get "cloud-init.log" from journald: Failed to add filter for units: No data available
I0211 23:21:27.258] , exit status 1
I0211 23:21:27.259] I0211 23:21:20.006024    1318 services.go:155] Get log file "docker.log" with journalctl command [-u docker].
I0211 23:21:27.259] I0211 23:21:20.021574    1318 services.go:155] Get log file "kubelet.log" with journalctl command [-u kubelet-20190211T230715.service].
I0211 23:21:27.259] I0211 23:21:20.040207    1318 services.go:155] Get log file "kern.log" with journalctl command [-k].
I0211 23:21:27.259] I0211 23:21:20.064307    1318 e2e_node_suite_test.go:190] Tests Finished
I0211 23:21:27.259] 
I0211 23:21:27.259] 
I0211 23:21:27.260] Ran 156 of 284 Specs in 829.973 seconds
I0211 23:21:27.260] SUCCESS! -- 156 Passed | 0 Failed | 0 Flaked | 0 Pending | 128 Skipped 
I0211 23:21:27.260] 
I0211 23:21:27.260] Ginkgo ran 1 suite in 13m52.203882319s
I0211 23:21:27.260] Test Suite Passed
I0211 23:21:27.260] 
I0211 23:21:27.260] Success Finished Test Suite on Host tmp-node-e2e-f0644ef8-coreos-beta-1883-1-0-v20180911
I0211 23:21:27.261] <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<
... skipping 5 lines ...
W0211 23:21:27.363] 2019/02/11 23:21:27 process.go:155: Step 'go run /go/src/k8s.io/kubernetes/test/e2e_node/runner/remote/run_remote.go --cleanup --logtostderr --vmodule=*=4 --ssh-env=gce --results-dir=/workspace/_artifacts --project=k8s-jkns-pr-node-e2e --zone=us-west1-b --ssh-user=prow --ssh-key=/workspace/.ssh/google_compute_engine --ginkgo-flags=--nodes=8 --focus="\[NodeConformance\]" --skip="\[Flaky\]|\[Slow\]|\[Serial\]" --flakeAttempts=2 --test_args=--kubelet-flags="--cgroups-per-qos=true --cgroup-root=/" --test-timeout=1h5m0s --image-config-file=/workspace/test-infra/jobs/e2e_node/image-config.yaml' finished in 20m47.638482642s
W0211 23:21:27.363] 2019/02/11 23:21:27 node.go:42: Noop - Node DumpClusterLogs() - /workspace/_artifacts: 
W0211 23:21:27.363] 2019/02/11 23:21:27 node.go:52: Noop - Node Down()
W0211 23:21:27.363] 2019/02/11 23:21:27 process.go:96: Saved XML output to /workspace/_artifacts/junit_runner.xml.
W0211 23:21:27.364] 2019/02/11 23:21:27 process.go:153: Running: bash -c . hack/lib/version.sh && KUBE_ROOT=. kube::version::get_version_vars && echo "${KUBE_GIT_VERSION-}"
W0211 23:21:27.732] 2019/02/11 23:21:27 process.go:155: Step 'bash -c . hack/lib/version.sh && KUBE_ROOT=. kube::version::get_version_vars && echo "${KUBE_GIT_VERSION-}"' finished in 428.220066ms
W0211 23:21:27.733] 2019/02/11 23:21:27 main.go:297: Something went wrong: encountered 1 errors: [error during go run /go/src/k8s.io/kubernetes/test/e2e_node/runner/remote/run_remote.go --cleanup --logtostderr --vmodule=*=4 --ssh-env=gce --results-dir=/workspace/_artifacts --project=k8s-jkns-pr-node-e2e --zone=us-west1-b --ssh-user=prow --ssh-key=/workspace/.ssh/google_compute_engine --ginkgo-flags=--nodes=8 --focus="\[NodeConformance\]" --skip="\[Flaky\]|\[Slow\]|\[Serial\]" --flakeAttempts=2 --test_args=--kubelet-flags="--cgroups-per-qos=true --cgroup-root=/" --test-timeout=1h5m0s --image-config-file=/workspace/test-infra/jobs/e2e_node/image-config.yaml: exit status 1]
W0211 23:21:27.736] Traceback (most recent call last):
W0211 23:21:27.736]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_e2e.py", line 764, in <module>
W0211 23:21:27.737]     main(parse_args())
W0211 23:21:27.737]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_e2e.py", line 615, in main
W0211 23:21:27.737]     mode.start(runner_args)
W0211 23:21:27.737]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_e2e.py", line 262, in start
W0211 23:21:27.737]     check_env(env, self.command, *args)
W0211 23:21:27.737]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_e2e.py", line 111, in check_env
W0211 23:21:27.737]     subprocess.check_call(cmd, env=env)
W0211 23:21:27.738]   File "/usr/lib/python2.7/subprocess.py", line 186, in check_call
W0211 23:21:27.738]     raise CalledProcessError(retcode, cmd)
W0211 23:21:27.738] subprocess.CalledProcessError: Command '('kubetest', '--dump=/workspace/_artifacts', '--gcp-service-account=/etc/service-account/service-account.json', '--up', '--down', '--test', '--deployment=node', '--provider=gce', '--cluster=bootstrap-e2e', '--gcp-network=bootstrap-e2e', '--gcp-project=k8s-jkns-pr-node-e2e', '--gcp-zone=us-west1-b', '--node-test-args=--kubelet-flags="--cgroups-per-qos=true --cgroup-root=/"', '--node-tests=true', '--test_args=--nodes=8 --focus="\\[NodeConformance\\]" --skip="\\[Flaky\\]|\\[Slow\\]|\\[Serial\\]" --flakeAttempts=2', '--timeout=65m', '--node-args=--image-config-file=/workspace/test-infra/jobs/e2e_node/image-config.yaml')' returned non-zero exit status 1
E0211 23:21:27.751] Command failed
I0211 23:21:27.752] process 492 exited with code 1 after 20.8m
E0211 23:21:27.752] FAIL: pull-kubernetes-node-e2e
I0211 23:21:27.753] Call:  gcloud auth activate-service-account --key-file=/etc/service-account/service-account.json
W0211 23:21:28.454] Activated service account credentials for: [pr-kubekins@kubernetes-jenkins-pull.iam.gserviceaccount.com]
I0211 23:21:28.509] process 44987 exited with code 0 after 0.0m
I0211 23:21:28.509] Call:  gcloud config get-value account
I0211 23:21:28.810] process 44999 exited with code 0 after 0.0m
I0211 23:21:28.810] Will upload results to gs://kubernetes-jenkins/pr-logs using pr-kubekins@kubernetes-jenkins-pull.iam.gserviceaccount.com
I0211 23:21:28.810] Upload result and artifacts...
I0211 23:21:28.810] Gubernator results at https://gubernator.k8s.io/build/kubernetes-jenkins/pr-logs/pull/73926/pull-kubernetes-node-e2e/119391
I0211 23:21:28.811] Call:  gsutil ls gs://kubernetes-jenkins/pr-logs/pull/73926/pull-kubernetes-node-e2e/119391/artifacts
W0211 23:21:29.916] CommandException: One or more URLs matched no objects.
E0211 23:21:30.050] Command failed
I0211 23:21:30.050] process 45011 exited with code 1 after 0.0m
W0211 23:21:30.050] Remote dir gs://kubernetes-jenkins/pr-logs/pull/73926/pull-kubernetes-node-e2e/119391/artifacts does not exist yet
I0211 23:21:30.051] Call:  gsutil -m -q -o GSUtil:use_magicfile=True cp -r -c -z log,txt,xml /workspace/_artifacts gs://kubernetes-jenkins/pr-logs/pull/73926/pull-kubernetes-node-e2e/119391/artifacts
I0211 23:21:32.869] process 45153 exited with code 0 after 0.0m
I0211 23:21:32.870] Call:  git rev-parse HEAD
I0211 23:21:32.874] process 45796 exited with code 0 after 0.0m
... skipping 21 lines ...