PR tedyu: Break out of the loop when active endpoint is found
Result FAILURE
Tests 1 failed / 478 succeeded
Started 2019-09-11 19:33
Elapsed 20m2s
Revision
Builder gke-prow-ssd-pool-1a225945-fgds
Refs master:001f2cd2
82095:31581e7c
pod e7522753-d4ca-11e9-ad08-968d9a0b984c
infra-commit 72663f1bb
job-version v1.17.0-alpha.0.1269+4eb31bf8f2b9a0
pod e7522753-d4ca-11e9-ad08-968d9a0b984c
repo k8s.io/kubernetes
repo-commit 4eb31bf8f2b9a03efceb4fe7b9fcacef55888b0f
repos {u'k8s.io/kubernetes': u'master:001f2cd2b553d06028c8542c8817820ee05d657f,82095:31581e7cb5b26786ab4a9122bb62fb8ce0ec5b79'}
revision v1.17.0-alpha.0.1269+4eb31bf8f2b9a0

Test Failures


Node Tests 18m34s

error during go run /go/src/k8s.io/kubernetes/test/e2e_node/runner/remote/run_remote.go --cleanup --logtostderr --vmodule=*=4 --ssh-env=gce --results-dir=/workspace/_artifacts --project=k8s-jkns-pr-node-e2e --zone=us-west1-b --ssh-user=prow --ssh-key=/workspace/.ssh/google_compute_engine --ginkgo-flags=--nodes=8 --focus="\[NodeConformance\]" --skip="\[Flaky\]|\[Slow\]|\[Serial\]" --flakeAttempts=2 --test_args=--kubelet-flags="--cgroups-per-qos=true --cgroup-root=/" --test-timeout=1h5m0s --image-config-file=/workspace/test-infra/jobs/e2e_node/image-config.yaml: exit status 1
				from junit_runner.xml
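The root cause appears further down in the build log: the GCE image coreos-beta-1911-1-1-v20181011 could not be found in the coreos-cloud project, so the runner never created that test instance. A minimal sketch for confirming whether the image still exists, assuming the gcloud CLI and read access to the public coreos-cloud project:

    # Describe the exact image the job requested; a "not found" error confirms it was removed.
    gcloud compute images describe coreos-beta-1911-1-1-v20181011 --project=coreos-cloud
    # List whatever coreos-beta images remain, to pick a replacement for the image config.
    gcloud compute images list --project=coreos-cloud --filter="name~coreos-beta"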



478 passed and 468 skipped tests are not shown here.

Error lines from build-log.txt

... skipping 258 lines ...
W0911 19:35:12.168] I0911 19:35:12.049790    4654 build.go:42] Building k8s binaries...
W0911 19:35:12.192] I0911 19:35:12.191904    4654 run_remote.go:567] Creating instance {image:ubuntu-gke-1804-d1703-0-v20181113 imageDesc:ubuntu-gke-1804-d1703-0-v20181113 project:ubuntu-os-gke-cloud resources:{Accelerators:[]} metadata:<nil> machine: tests:[]} with service account "1046294573453-compute@developer.gserviceaccount.com"
W0911 19:35:12.196] I0911 19:35:12.196229    4654 run_remote.go:567] Creating instance {image:cos-stable-63-10032-71-0 imageDesc:cos-stable-63-10032-71-0 project:cos-cloud resources:{Accelerators:[]} metadata:0xc0004b0150 machine: tests:[]} with service account "1046294573453-compute@developer.gserviceaccount.com"
W0911 19:35:12.226] I0911 19:35:12.225717    4654 run_remote.go:567] Creating instance {image:cos-stable-60-9592-84-0 imageDesc:cos-stable-60-9592-84-0 project:cos-cloud resources:{Accelerators:[]} metadata:0xc0004b00e0 machine: tests:[]} with service account "1046294573453-compute@developer.gserviceaccount.com"
W0911 19:35:12.227] I0911 19:35:12.226376    4654 run_remote.go:567] Creating instance {image:coreos-beta-1911-1-1-v20181011 imageDesc:coreos-beta-1911-1-1-v20181011 project:coreos-cloud resources:{Accelerators:[]} metadata:0xc0004b01c0 machine: tests:[]} with service account "1046294573453-compute@developer.gserviceaccount.com"
W0911 19:35:13.169] I0911 19:35:13.169422    4654 run_remote.go:742] Deleting instance ""
W0911 19:35:13.173] E0911 19:35:13.173647    4654 run_remote.go:745] Error deleting instance "": googleapi: got HTTP response code 404 with body: Not Found
I0911 19:35:13.274] 
I0911 19:35:13.274] >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
I0911 19:35:13.275] >                              START TEST                                >
I0911 19:35:13.275] >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
I0911 19:35:13.275] Start Test Suite on Host 
I0911 19:35:13.275] 
I0911 19:35:13.277] Failure Finished Test Suite on Host 
I0911 19:35:13.277] unable to create gce instance with running docker daemon for image coreos-beta-1911-1-1-v20181011.  could not create instance tmp-node-e2e-45789e7f-coreos-beta-1911-1-1-v20181011: API error: googleapi: Error 400: Invalid value for field 'resource.disks[0].initializeParams.sourceImage': 'projects/coreos-cloud/global/images/coreos-beta-1911-1-1-v20181011'. The referenced image resource cannot be found., invalid
I0911 19:35:13.277] <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<
I0911 19:35:13.278] <                              FINISH TEST                               <
I0911 19:35:13.278] <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<
I0911 19:35:13.278] 
I0911 19:35:21.743] +++ [0911 19:35:21] Building go targets for linux/amd64:
I0911 19:35:21.744]     ./vendor/k8s.io/code-generator/cmd/deepcopy-gen
... skipping 2255 lines ...
I0911 19:50:40.643] [BeforeEach] [k8s.io] Security Context
I0911 19:50:40.643]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:40
I0911 19:50:40.644] [It] should run the container with readonly rootfs when readOnlyRootFilesystem=true [LinuxOnly] [NodeConformance]
I0911 19:50:40.644]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:211
I0911 19:50:40.644] Sep 11 19:44:28.330: INFO: Waiting up to 5m0s for pod "busybox-readonly-true-29d15ff7-4164-406a-942d-a6d868a913cb" in namespace "security-context-test-3362" to be "success or failure"
I0911 19:50:40.645] Sep 11 19:44:28.331: INFO: Pod "busybox-readonly-true-29d15ff7-4164-406a-942d-a6d868a913cb": Phase="Pending", Reason="", readiness=false. Elapsed: 1.200421ms
I0911 19:50:40.645] Sep 11 19:44:30.333: INFO: Pod "busybox-readonly-true-29d15ff7-4164-406a-942d-a6d868a913cb": Phase="Failed", Reason="", readiness=false. Elapsed: 2.002897084s
I0911 19:50:40.645] Sep 11 19:44:30.333: INFO: Pod "busybox-readonly-true-29d15ff7-4164-406a-942d-a6d868a913cb" satisfied condition "success or failure"
I0911 19:50:40.646] [AfterEach] [k8s.io] Security Context
I0911 19:50:40.646]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
I0911 19:50:40.646] Sep 11 19:44:30.333: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0911 19:50:40.646] STEP: Destroying namespace "security-context-test-3362" for this suite.
I0911 19:50:40.647] Sep 11 19:44:36.342: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
... skipping 466 lines ...
I0911 19:50:40.753]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
I0911 19:50:40.753] STEP: Creating a kubernetes client
I0911 19:50:40.753] STEP: Building a namespace api object, basename init-container
I0911 19:50:40.753] Sep 11 19:44:56.677: INFO: Skipping waiting for service account
I0911 19:50:40.753] [BeforeEach] [k8s.io] InitContainer [NodeConformance]
I0911 19:50:40.754]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44
I0911 19:50:40.754] [It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
I0911 19:50:40.754]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
I0911 19:50:40.754] STEP: creating the pod
I0911 19:50:40.754] Sep 11 19:44:56.677: INFO: PodSpec: initContainers in spec.initContainers
I0911 19:50:40.754] [AfterEach] [k8s.io] InitContainer [NodeConformance]
I0911 19:50:40.755]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
I0911 19:50:40.755] Sep 11 19:44:58.570: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 2 lines ...
I0911 19:50:40.755] Sep 11 19:45:04.626: INFO: namespace init-container-651 deletion completed in 6.048099107s
I0911 19:50:40.755] 
I0911 19:50:40.755] 
I0911 19:50:40.756] • [SLOW TEST:7.952 seconds]
I0911 19:50:40.756] [k8s.io] InitContainer [NodeConformance]
I0911 19:50:40.756] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:693
I0911 19:50:40.756]   should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
I0911 19:50:40.756]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
I0911 19:50:40.756] ------------------------------
I0911 19:50:40.757] [BeforeEach] [sig-storage] Downward API volume
I0911 19:50:40.757]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
I0911 19:50:40.757] STEP: Creating a kubernetes client
I0911 19:50:40.757] STEP: Building a namespace api object, basename downward-api
... skipping 511 lines ...
I0911 19:50:40.856] STEP: Creating a kubernetes client
I0911 19:50:40.857] STEP: Building a namespace api object, basename container-runtime
I0911 19:50:40.857] Sep 11 19:45:49.212: INFO: Skipping waiting for service account
I0911 19:50:40.857] [It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
I0911 19:50:40.857]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
I0911 19:50:40.857] STEP: create the container
I0911 19:50:40.857] STEP: wait for the container to reach Failed
I0911 19:50:40.858] STEP: get the container status
I0911 19:50:40.858] STEP: the container should be terminated
I0911 19:50:40.858] STEP: the termination message should be set
I0911 19:50:40.858] Sep 11 19:45:50.276: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
I0911 19:50:40.858] STEP: delete the container
I0911 19:50:40.858] [AfterEach] [k8s.io] Container Runtime
... skipping 82 lines ...
I0911 19:50:40.870]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
I0911 19:50:40.870] STEP: Creating a kubernetes client
I0911 19:50:40.870] STEP: Building a namespace api object, basename init-container
I0911 19:50:40.870] Sep 11 19:44:50.255: INFO: Skipping waiting for service account
I0911 19:50:40.871] [BeforeEach] [k8s.io] InitContainer [NodeConformance]
I0911 19:50:40.871]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44
I0911 19:50:40.871] [It] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
I0911 19:50:40.871]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
I0911 19:50:40.871] STEP: creating the pod
I0911 19:50:40.871] Sep 11 19:44:50.255: INFO: PodSpec: initContainers in spec.initContainers
I0911 19:50:40.878] Sep 11 19:45:39.351: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-bd225900-4959-4abe-8ef6-dd9786323c78", GenerateName:"", Namespace:"init-container-9248", SelfLink:"/api/v1/namespaces/init-container-9248/pods/pod-init-bd225900-4959-4abe-8ef6-dd9786323c78", UID:"03452571-6616-4c4c-b545-7ca8d9fb4fb0", ResourceVersion:"1992", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63703827890, loc:(*time.Location)(0xbe81a00)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"255156049"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume(nil), InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount(nil), VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount(nil), VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount(nil), VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", 
ImagePullPolicy:"Never", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc000ff2a60), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"Default", NodeSelector:map[string]string(nil), ServiceAccountName:"", DeprecatedServiceAccount:"", AutomountServiceAccountToken:(*bool)(nil), NodeName:"tmp-node-e2e-45789e7f-cos-stable-60-9592-84-0", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc0011fab40), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc000ff2ad0)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc000ff2af0)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc000ff2b00), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc000ff2b04), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63703827890, loc:(*time.Location)(0xbe81a00)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63703827890, loc:(*time.Location)(0xbe81a00)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63703827890, loc:(*time.Location)(0xbe81a00)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63703827890, loc:(*time.Location)(0xbe81a00)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"10.138.0.4", PodIP:"10.100.0.92", PodIPs:[]v1.PodIP{v1.PodIP{IP:"10.100.0.92"}}, StartTime:(*v1.Time)(0xc0004fd540), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc00086a620)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc00086a690)}, Ready:false, RestartCount:3, Image:"busybox:1.29", ImageID:"docker-pullable://busybox@sha256:e004c2cc521c95383aebb1fb5893719aa7a8eae2e7a71f316a4410784edb00a9", 
ContainerID:"docker://dcedcb46b88d344dbfc8b6646e68d7169b211cb0d77e1b377ed2b2cddc952150", Started:(*bool)(nil)}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc0004fd580), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:"", Started:(*bool)(nil)}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc0004fd5c0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:"", Started:(*bool)(0xc000ff2bdc)}}, QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}}
I0911 19:50:40.878] [AfterEach] [k8s.io] InitContainer [NodeConformance]
I0911 19:50:40.878]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
I0911 19:50:40.879] Sep 11 19:45:39.354: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0911 19:50:40.879] STEP: Destroying namespace "init-container-9248" for this suite.
I0911 19:50:40.879] Sep 11 19:46:07.362: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
I0911 19:50:40.879] Sep 11 19:46:07.403: INFO: namespace init-container-9248 deletion completed in 28.046186953s
I0911 19:50:40.879] 
I0911 19:50:40.879] 
I0911 19:50:40.879] • [SLOW TEST:77.152 seconds]
I0911 19:50:40.880] [k8s.io] InitContainer [NodeConformance]
I0911 19:50:40.880] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:693
I0911 19:50:40.880]   should not start app containers if init containers fail on a RestartAlways pod [Conformance]
I0911 19:50:40.880]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
I0911 19:50:40.880] ------------------------------
I0911 19:50:40.880] [BeforeEach] [k8s.io] MirrorPod
I0911 19:50:40.880]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
I0911 19:50:40.881] STEP: Creating a kubernetes client
I0911 19:50:40.881] STEP: Building a namespace api object, basename mirror-pod
... skipping 1165 lines ...
I0911 19:50:41.170] STEP: verifying the pod is in kubernetes
I0911 19:50:41.170] STEP: updating the pod
I0911 19:50:41.170] Sep 11 19:47:41.257: INFO: Successfully updated pod "pod-update-activedeadlineseconds-1f79adef-a223-40cb-a749-297649184286"
I0911 19:50:41.171] Sep 11 19:47:41.257: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-1f79adef-a223-40cb-a749-297649184286" in namespace "pods-886" to be "terminated due to deadline exceeded"
I0911 19:50:41.171] Sep 11 19:47:41.258: INFO: Pod "pod-update-activedeadlineseconds-1f79adef-a223-40cb-a749-297649184286": Phase="Running", Reason="", readiness=true. Elapsed: 988.115µs
I0911 19:50:41.171] Sep 11 19:47:43.260: INFO: Pod "pod-update-activedeadlineseconds-1f79adef-a223-40cb-a749-297649184286": Phase="Running", Reason="", readiness=true. Elapsed: 2.002675606s
I0911 19:50:41.172] Sep 11 19:47:45.261: INFO: Pod "pod-update-activedeadlineseconds-1f79adef-a223-40cb-a749-297649184286": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 4.003945875s
I0911 19:50:41.172] Sep 11 19:47:45.261: INFO: Pod "pod-update-activedeadlineseconds-1f79adef-a223-40cb-a749-297649184286" satisfied condition "terminated due to deadline exceeded"
I0911 19:50:41.173] [AfterEach] [k8s.io] Pods
I0911 19:50:41.173]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
I0911 19:50:41.174] Sep 11 19:47:45.261: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0911 19:50:41.174] STEP: Destroying namespace "pods-886" for this suite.
I0911 19:50:41.174] Sep 11 19:47:51.274: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
... skipping 681 lines ...
I0911 19:50:41.355]   should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
I0911 19:50:41.356]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
I0911 19:50:41.356] ------------------------------
I0911 19:50:41.356] I0911 19:50:33.097608    1305 e2e_node_suite_test.go:196] Stopping node services...
I0911 19:50:41.356] I0911 19:50:33.097650    1305 server.go:257] Kill server "services"
I0911 19:50:41.357] I0911 19:50:33.097665    1305 server.go:294] Killing process 1932 (services) with -TERM
I0911 19:50:41.357] E0911 19:50:33.209760    1305 services.go:88] Failed to stop services: error stopping "services": waitid: no child processes
I0911 19:50:41.357] I0911 19:50:33.209781    1305 server.go:257] Kill server "kubelet"
I0911 19:50:41.357] I0911 19:50:33.219028    1305 services.go:147] Fetching log files...
I0911 19:50:41.358] I0911 19:50:33.219105    1305 services.go:156] Get log file "kern.log" with journalctl command [-k].
I0911 19:50:41.358] I0911 19:50:33.281114    1305 services.go:156] Get log file "cloud-init.log" with journalctl command [-u cloud*].
I0911 19:50:41.358] I0911 19:50:33.740959    1305 services.go:156] Get log file "docker.log" with journalctl command [-u docker].
I0911 19:50:41.359] I0911 19:50:33.769046    1305 services.go:156] Get log file "kubelet.log" with journalctl command [-u kubelet-20190911T193949.service].
I0911 19:50:41.359] I0911 19:50:34.612935    1305 e2e_node_suite_test.go:201] Tests Finished
I0911 19:50:41.359] 
I0911 19:50:41.359] 
I0911 19:50:41.359] Ran 157 of 313 Specs in 629.748 seconds
I0911 19:50:41.360] SUCCESS! -- 157 Passed | 0 Failed | 0 Flaked | 0 Pending | 156 Skipped
I0911 19:50:41.360] 
I0911 19:50:41.360] 
I0911 19:50:41.360] Ginkgo ran 1 suite in 10m33.439675399s
I0911 19:50:41.360] Test Suite Passed
I0911 19:50:41.360] 
I0911 19:50:41.361] Success Finished Test Suite on Host tmp-node-e2e-45789e7f-cos-stable-60-9592-84-0
... skipping 2221 lines ...
I0911 19:51:13.059] [BeforeEach] [k8s.io] Security Context
I0911 19:51:13.059]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:40
I0911 19:51:13.059] [It] should run the container with readonly rootfs when readOnlyRootFilesystem=true [LinuxOnly] [NodeConformance]
I0911 19:51:13.059]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:211
I0911 19:51:13.060] Sep 11 19:44:50.411: INFO: Waiting up to 5m0s for pod "busybox-readonly-true-cbaf46df-6162-4448-a259-179c55421cee" in namespace "security-context-test-6504" to be "success or failure"
I0911 19:51:13.060] Sep 11 19:44:50.417: INFO: Pod "busybox-readonly-true-cbaf46df-6162-4448-a259-179c55421cee": Phase="Pending", Reason="", readiness=false. Elapsed: 5.517966ms
I0911 19:51:13.060] Sep 11 19:44:52.422: INFO: Pod "busybox-readonly-true-cbaf46df-6162-4448-a259-179c55421cee": Phase="Failed", Reason="", readiness=false. Elapsed: 2.010911175s
I0911 19:51:13.061] Sep 11 19:44:52.422: INFO: Pod "busybox-readonly-true-cbaf46df-6162-4448-a259-179c55421cee" satisfied condition "success or failure"
I0911 19:51:13.061] [AfterEach] [k8s.io] Security Context
I0911 19:51:13.061]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
I0911 19:51:13.061] Sep 11 19:44:52.422: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0911 19:51:13.061] STEP: Destroying namespace "security-context-test-6504" for this suite.
I0911 19:51:13.062] Sep 11 19:44:58.433: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
... skipping 440 lines ...
I0911 19:51:13.154]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
I0911 19:51:13.154] STEP: Creating a kubernetes client
I0911 19:51:13.154] STEP: Building a namespace api object, basename init-container
I0911 19:51:13.154] Sep 11 19:45:16.113: INFO: Skipping waiting for service account
I0911 19:51:13.155] [BeforeEach] [k8s.io] InitContainer [NodeConformance]
I0911 19:51:13.155]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44
I0911 19:51:13.155] [It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
I0911 19:51:13.155]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
I0911 19:51:13.155] STEP: creating the pod
I0911 19:51:13.156] Sep 11 19:45:16.113: INFO: PodSpec: initContainers in spec.initContainers
I0911 19:51:13.156] [AfterEach] [k8s.io] InitContainer [NodeConformance]
I0911 19:51:13.156]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
I0911 19:51:13.156] Sep 11 19:45:18.764: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 2 lines ...
I0911 19:51:13.157] Sep 11 19:45:24.843: INFO: namespace init-container-2155 deletion completed in 6.074108515s
I0911 19:51:13.157] 
I0911 19:51:13.157] 
I0911 19:51:13.158] • [SLOW TEST:8.733 seconds]
I0911 19:51:13.158] [k8s.io] InitContainer [NodeConformance]
I0911 19:51:13.158] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:693
I0911 19:51:13.158]   should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
I0911 19:51:13.159]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
I0911 19:51:13.159] ------------------------------
I0911 19:51:13.159] [BeforeEach] [sig-storage] Projected downwardAPI
I0911 19:51:13.159]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
I0911 19:51:13.159] STEP: Creating a kubernetes client
I0911 19:51:13.159] STEP: Building a namespace api object, basename projected
... skipping 435 lines ...
I0911 19:51:13.251]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
I0911 19:51:13.251] STEP: Creating a kubernetes client
I0911 19:51:13.251] STEP: Building a namespace api object, basename init-container
I0911 19:51:13.252] Sep 11 19:45:11.748: INFO: Skipping waiting for service account
I0911 19:51:13.252] [BeforeEach] [k8s.io] InitContainer [NodeConformance]
I0911 19:51:13.252]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44
I0911 19:51:13.252] [It] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
I0911 19:51:13.253]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
I0911 19:51:13.253] STEP: creating the pod
I0911 19:51:13.253] Sep 11 19:45:11.748: INFO: PodSpec: initContainers in spec.initContainers
I0911 19:51:13.260] Sep 11 19:45:53.378: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-a5fd0a9f-c79e-480d-8911-cb03962481b6", GenerateName:"", Namespace:"init-container-514", SelfLink:"/api/v1/namespaces/init-container-514/pods/pod-init-a5fd0a9f-c79e-480d-8911-cb03962481b6", UID:"3d5d89a7-e9b4-4001-a43a-241ed45f7230", ResourceVersion:"1873", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63703827911, loc:(*time.Location)(0xbe81a00)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"748467725"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume(nil), InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount(nil), VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount(nil), VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount(nil), VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", 
ImagePullPolicy:"Never", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc00034b6b0), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"Default", NodeSelector:map[string]string(nil), ServiceAccountName:"", DeprecatedServiceAccount:"", AutomountServiceAccountToken:(*bool)(nil), NodeName:"tmp-node-e2e-45789e7f-cos-stable-63-10032-71-0", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc000dd09c0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0002f63d0)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0002f6430)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc0002f6450), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc0002f6454), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63703827911, loc:(*time.Location)(0xbe81a00)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63703827911, loc:(*time.Location)(0xbe81a00)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63703827911, loc:(*time.Location)(0xbe81a00)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63703827911, loc:(*time.Location)(0xbe81a00)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"10.138.0.3", PodIP:"10.100.0.93", PodIPs:[]v1.PodIP{v1.PodIP{IP:"10.100.0.93"}}, StartTime:(*v1.Time)(0xc000629840), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc000d42850)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc000d428c0)}, Ready:false, RestartCount:3, Image:"busybox:1.29", ImageID:"docker-pullable://busybox@sha256:e004c2cc521c95383aebb1fb5893719aa7a8eae2e7a71f316a4410784edb00a9", 
ContainerID:"docker://c2f23b5222ab857dc82fe96d71dac579fa6930601b83b12b8e60e580fa17ac54", Started:(*bool)(nil)}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc000629880), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:"", Started:(*bool)(nil)}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc000629900), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:"", Started:(*bool)(0xc0002f6d2c)}}, QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}}
I0911 19:51:13.260] [AfterEach] [k8s.io] InitContainer [NodeConformance]
I0911 19:51:13.260]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
I0911 19:51:13.261] Sep 11 19:45:53.378: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0911 19:51:13.261] STEP: Destroying namespace "init-container-514" for this suite.
I0911 19:51:13.261] Sep 11 19:46:21.409: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
I0911 19:51:13.261] Sep 11 19:46:21.562: INFO: namespace init-container-514 deletion completed in 28.181385296s
I0911 19:51:13.261] 
I0911 19:51:13.261] 
I0911 19:51:13.262] • [SLOW TEST:69.817 seconds]
I0911 19:51:13.262] [k8s.io] InitContainer [NodeConformance]
I0911 19:51:13.262] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:693
I0911 19:51:13.262]   should not start app containers if init containers fail on a RestartAlways pod [Conformance]
I0911 19:51:13.262]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
I0911 19:51:13.262] ------------------------------
I0911 19:51:13.263] S
I0911 19:51:13.263] ------------------------------
I0911 19:51:13.263] [BeforeEach] [k8s.io] Pods
I0911 19:51:13.263]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
... skipping 36 lines ...
I0911 19:51:13.270] STEP: Creating a kubernetes client
I0911 19:51:13.270] STEP: Building a namespace api object, basename container-runtime
I0911 19:51:13.270] Sep 11 19:46:19.368: INFO: Skipping waiting for service account
I0911 19:51:13.270] [It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
I0911 19:51:13.270]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
I0911 19:51:13.271] STEP: create the container
I0911 19:51:13.271] STEP: wait for the container to reach Failed
I0911 19:51:13.271] STEP: get the container status
I0911 19:51:13.271] STEP: the container should be terminated
I0911 19:51:13.271] STEP: the termination message should be set
I0911 19:51:13.272] Sep 11 19:46:20.432: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
I0911 19:51:13.272] STEP: delete the container
I0911 19:51:13.272] [AfterEach] [k8s.io] Container Runtime
... skipping 1213 lines ...
I0911 19:51:13.482] STEP: verifying the pod is in kubernetes
I0911 19:51:13.482] STEP: updating the pod
I0911 19:51:13.482] Sep 11 19:48:05.552: INFO: Successfully updated pod "pod-update-activedeadlineseconds-522c63b8-93b0-494a-999d-21a185dcf81f"
I0911 19:51:13.483] Sep 11 19:48:05.552: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-522c63b8-93b0-494a-999d-21a185dcf81f" in namespace "pods-5109" to be "terminated due to deadline exceeded"
I0911 19:51:13.483] Sep 11 19:48:05.554: INFO: Pod "pod-update-activedeadlineseconds-522c63b8-93b0-494a-999d-21a185dcf81f": Phase="Running", Reason="", readiness=true. Elapsed: 1.439032ms
I0911 19:51:13.483] Sep 11 19:48:07.555: INFO: Pod "pod-update-activedeadlineseconds-522c63b8-93b0-494a-999d-21a185dcf81f": Phase="Running", Reason="", readiness=true. Elapsed: 2.003110816s
I0911 19:51:13.483] Sep 11 19:48:09.557: INFO: Pod "pod-update-activedeadlineseconds-522c63b8-93b0-494a-999d-21a185dcf81f": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 4.005000244s
I0911 19:51:13.484] Sep 11 19:48:09.557: INFO: Pod "pod-update-activedeadlineseconds-522c63b8-93b0-494a-999d-21a185dcf81f" satisfied condition "terminated due to deadline exceeded"
I0911 19:51:13.484] [AfterEach] [k8s.io] Pods
I0911 19:51:13.484]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
I0911 19:51:13.484] Sep 11 19:48:09.557: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0911 19:51:13.484] STEP: Destroying namespace "pods-5109" for this suite.
I0911 19:51:13.485] Sep 11 19:48:15.564: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
... skipping 787 lines ...
I0911 19:51:13.638]   should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
I0911 19:51:13.638]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
I0911 19:51:13.639] ------------------------------
I0911 19:51:13.639] I0911 19:51:05.306315    1289 e2e_node_suite_test.go:196] Stopping node services...
I0911 19:51:13.639] I0911 19:51:05.306369    1289 server.go:257] Kill server "services"
I0911 19:51:13.639] I0911 19:51:05.306382    1289 server.go:294] Killing process 1891 (services) with -TERM
I0911 19:51:13.639] E0911 19:51:05.421533    1289 services.go:88] Failed to stop services: error stopping "services": waitid: no child processes
I0911 19:51:13.640] I0911 19:51:05.421549    1289 server.go:257] Kill server "kubelet"
I0911 19:51:13.640] I0911 19:51:05.429644    1289 services.go:147] Fetching log files...
I0911 19:51:13.640] I0911 19:51:05.429719    1289 services.go:156] Get log file "kern.log" with journalctl command [-k].
I0911 19:51:13.640] I0911 19:51:05.554656    1289 services.go:156] Get log file "cloud-init.log" with journalctl command [-u cloud*].
I0911 19:51:13.640] I0911 19:51:06.174582    1289 services.go:156] Get log file "docker.log" with journalctl command [-u docker].
I0911 19:51:13.641] I0911 19:51:06.208732    1289 services.go:156] Get log file "kubelet.log" with journalctl command [-u kubelet-20190911T193949.service].
I0911 19:51:13.641] I0911 19:51:07.340751    1289 e2e_node_suite_test.go:201] Tests Finished
I0911 19:51:13.641] 
I0911 19:51:13.641] 
I0911 19:51:13.641] Ran 157 of 313 Specs in 662.825 seconds
I0911 19:51:13.642] SUCCESS! -- 157 Passed | 0 Failed | 0 Flaked | 0 Pending | 156 Skipped
I0911 19:51:13.642] 
I0911 19:51:13.642] 
I0911 19:51:13.642] Ginkgo ran 1 suite in 11m6.010529716s
I0911 19:51:13.642] Test Suite Passed
I0911 19:51:13.642] 
I0911 19:51:13.642] Success Finished Test Suite on Host tmp-node-e2e-45789e7f-cos-stable-63-10032-71-0
... skipping 1355 lines ...
I0911 19:52:47.379] [BeforeEach] [k8s.io] Security Context
I0911 19:52:47.379]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:40
I0911 19:52:47.379] [It] should run the container with readonly rootfs when readOnlyRootFilesystem=true [LinuxOnly] [NodeConformance]
I0911 19:52:47.380]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:211
I0911 19:52:47.380] Sep 11 19:43:06.221: INFO: Waiting up to 5m0s for pod "busybox-readonly-true-b1b4252d-7c4e-4ec9-9546-05fff8e27d1e" in namespace "security-context-test-9364" to be "success or failure"
I0911 19:52:47.380] Sep 11 19:43:06.234: INFO: Pod "busybox-readonly-true-b1b4252d-7c4e-4ec9-9546-05fff8e27d1e": Phase="Pending", Reason="", readiness=false. Elapsed: 12.633113ms
I0911 19:52:47.380] Sep 11 19:43:08.236: INFO: Pod "busybox-readonly-true-b1b4252d-7c4e-4ec9-9546-05fff8e27d1e": Phase="Failed", Reason="", readiness=false. Elapsed: 2.014303887s
I0911 19:52:47.381] Sep 11 19:43:08.236: INFO: Pod "busybox-readonly-true-b1b4252d-7c4e-4ec9-9546-05fff8e27d1e" satisfied condition "success or failure"
I0911 19:52:47.381] [AfterEach] [k8s.io] Security Context
I0911 19:52:47.381]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
I0911 19:52:47.381] Sep 11 19:43:08.236: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0911 19:52:47.382] STEP: Destroying namespace "security-context-test-9364" for this suite.
I0911 19:52:47.382] Sep 11 19:43:14.246: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
... skipping 1614 lines ...
I0911 19:52:47.694] STEP: verifying the pod is in kubernetes
I0911 19:52:47.695] STEP: updating the pod
I0911 19:52:47.695] Sep 11 19:45:48.597: INFO: Successfully updated pod "pod-update-activedeadlineseconds-327621ab-feed-4114-b90e-bf025a6fa887"
I0911 19:52:47.695] Sep 11 19:45:48.597: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-327621ab-feed-4114-b90e-bf025a6fa887" in namespace "pods-4748" to be "terminated due to deadline exceeded"
I0911 19:52:47.695] Sep 11 19:45:48.598: INFO: Pod "pod-update-activedeadlineseconds-327621ab-feed-4114-b90e-bf025a6fa887": Phase="Running", Reason="", readiness=true. Elapsed: 1.477984ms
I0911 19:52:47.696] Sep 11 19:45:50.600: INFO: Pod "pod-update-activedeadlineseconds-327621ab-feed-4114-b90e-bf025a6fa887": Phase="Running", Reason="", readiness=true. Elapsed: 2.003227246s
I0911 19:52:47.696] Sep 11 19:45:52.602: INFO: Pod "pod-update-activedeadlineseconds-327621ab-feed-4114-b90e-bf025a6fa887": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 4.005106772s
I0911 19:52:47.696] Sep 11 19:45:52.602: INFO: Pod "pod-update-activedeadlineseconds-327621ab-feed-4114-b90e-bf025a6fa887" satisfied condition "terminated due to deadline exceeded"
I0911 19:52:47.697] [AfterEach] [k8s.io] Pods
I0911 19:52:47.697]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
I0911 19:52:47.697] Sep 11 19:45:52.602: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0911 19:52:47.697] STEP: Destroying namespace "pods-4748" for this suite.
I0911 19:52:47.697] Sep 11 19:45:58.609: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
... skipping 1446 lines ...
I0911 19:52:47.992] Sep 11 19:43:12.102: INFO: Skipping waiting for service account
I0911 19:52:47.992] [It] should be able to pull from private registry with credential provider [NodeConformance]
I0911 19:52:47.992]   _output/local/go/src/k8s.io/kubernetes/test/e2e_node/runtime_conformance_test.go:68
I0911 19:52:47.993] STEP: create the container
I0911 19:52:47.993] STEP: check the container status
I0911 19:52:47.993] STEP: delete the container
I0911 19:52:47.993] Sep 11 19:48:12.958: INFO: No.1 attempt failed: expected container state: Running, got: "Waiting", retrying...
I0911 19:52:47.993] STEP: create the container
I0911 19:52:47.993] STEP: check the container status
I0911 19:52:47.993] STEP: delete the container
I0911 19:52:47.994] [AfterEach] [k8s.io] Container Runtime Conformance Test
I0911 19:52:47.994]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
I0911 19:52:47.994] Sep 11 19:48:15.000: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 404 lines ...
I0911 19:52:48.063] STEP: Creating a kubernetes client
I0911 19:52:48.063] STEP: Building a namespace api object, basename container-runtime
I0911 19:52:48.063] Sep 11 19:48:39.360: INFO: Skipping waiting for service account
I0911 19:52:48.063] [It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
I0911 19:52:48.063]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
I0911 19:52:48.063] STEP: create the container
I0911 19:52:48.064] STEP: wait for the container to reach Failed
I0911 19:52:48.064] STEP: get the container status
I0911 19:52:48.064] STEP: the container should be terminated
I0911 19:52:48.064] STEP: the termination message should be set
I0911 19:52:48.064] Sep 11 19:48:40.376: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
I0911 19:52:48.064] STEP: delete the container
I0911 19:52:48.064] [AfterEach] [k8s.io] Container Runtime
... skipping 187 lines ...
I0911 19:52:48.129]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
I0911 19:52:48.129] STEP: Creating a kubernetes client
I0911 19:52:48.129] STEP: Building a namespace api object, basename init-container
I0911 19:52:48.129] Sep 11 19:48:53.794: INFO: Skipping waiting for service account
I0911 19:52:48.129] [BeforeEach] [k8s.io] InitContainer [NodeConformance]
I0911 19:52:48.130]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44
I0911 19:52:48.130] [It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
I0911 19:52:48.130]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
I0911 19:52:48.130] STEP: creating the pod
I0911 19:52:48.130] Sep 11 19:48:53.794: INFO: PodSpec: initContainers in spec.initContainers
I0911 19:52:48.131] [AfterEach] [k8s.io] InitContainer [NodeConformance]
I0911 19:52:48.131]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
I0911 19:52:48.131] Sep 11 19:48:56.462: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 2 lines ...
I0911 19:52:48.132] Sep 11 19:49:02.518: INFO: namespace init-container-9884 deletion completed in 6.053908434s
I0911 19:52:48.132] 
I0911 19:52:48.132] 
I0911 19:52:48.132] • [SLOW TEST:8.735 seconds]
I0911 19:52:48.132] [k8s.io] InitContainer [NodeConformance]
I0911 19:52:48.132] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:693
I0911 19:52:48.133]   should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
I0911 19:52:48.133]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
I0911 19:52:48.133] ------------------------------
I0911 19:52:48.133] [BeforeEach] [k8s.io] Probing container
I0911 19:52:48.133]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
I0911 19:52:48.133] STEP: Creating a kubernetes client
I0911 19:52:48.134] STEP: Building a namespace api object, basename container-probe
... skipping 104 lines ...
I0911 19:52:48.156]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
I0911 19:52:48.156] STEP: Creating a kubernetes client
I0911 19:52:48.157] STEP: Building a namespace api object, basename init-container
I0911 19:52:48.157] Sep 11 19:48:46.031: INFO: Skipping waiting for service account
I0911 19:52:48.157] [BeforeEach] [k8s.io] InitContainer [NodeConformance]
I0911 19:52:48.157]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44
I0911 19:52:48.157] [It] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
I0911 19:52:48.158]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
I0911 19:52:48.158] STEP: creating the pod
I0911 19:52:48.158] Sep 11 19:48:46.031: INFO: PodSpec: initContainers in spec.initContainers
I0911 19:52:48.168] Sep 11 19:49:29.805: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-b847b19a-7110-4550-be19-7e3a00ee9f63", GenerateName:"", Namespace:"init-container-9707", SelfLink:"/api/v1/namespaces/init-container-9707/pods/pod-init-b847b19a-7110-4550-be19-7e3a00ee9f63", UID:"3a64e8ea-99f4-413b-9d12-f59a63b450d5", ResourceVersion:"3421", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63703828126, loc:(*time.Location)(0xbe81a00)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"31900956"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume(nil), InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount(nil), VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount(nil), VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount(nil), VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", 
ImagePullPolicy:"Never", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc0012308b0), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"Default", NodeSelector:map[string]string(nil), ServiceAccountName:"", DeprecatedServiceAccount:"", AutomountServiceAccountToken:(*bool)(nil), NodeName:"tmp-node-e2e-45789e7f-ubuntu-gke-1804-d1703-0-v20181113", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc001202cc0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc001230920)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc001230940)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc001230950), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc001230954), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63703828126, loc:(*time.Location)(0xbe81a00)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63703828126, loc:(*time.Location)(0xbe81a00)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63703828126, loc:(*time.Location)(0xbe81a00)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63703828126, loc:(*time.Location)(0xbe81a00)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"10.138.0.9", PodIP:"10.100.0.179", PodIPs:[]v1.PodIP{v1.PodIP{IP:"10.100.0.179"}}, StartTime:(*v1.Time)(0xc0008799a0), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc0003e83f0)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc0003e8460)}, Ready:false, RestartCount:3, Image:"busybox:1.29", ImageID:"docker-pullable://busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", 
ContainerID:"docker://571c9fcd0100294d453a57cf4306ca213be53f931cff5c510c7d1bfb50decc33", Started:(*bool)(nil)}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc0008799c0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:"", Started:(*bool)(nil)}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc0008799e0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:"", Started:(*bool)(0xc001230a44)}}, QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}}
I0911 19:52:48.168] [AfterEach] [k8s.io] InitContainer [NodeConformance]
I0911 19:52:48.168]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
I0911 19:52:48.169] Sep 11 19:49:29.809: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0911 19:52:48.169] STEP: Destroying namespace "init-container-9707" for this suite.
I0911 19:52:48.169] Sep 11 19:49:57.840: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
I0911 19:52:48.169] Sep 11 19:49:57.876: INFO: namespace init-container-9707 deletion completed in 28.051629309s
I0911 19:52:48.169] 
I0911 19:52:48.170] 
I0911 19:52:48.170] • [SLOW TEST:71.849 seconds]
I0911 19:52:48.170] [k8s.io] InitContainer [NodeConformance]
I0911 19:52:48.170] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:693
I0911 19:52:48.170]   should not start app containers if init containers fail on a RestartAlways pod [Conformance]
I0911 19:52:48.171]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
I0911 19:52:48.171] ------------------------------
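The Status section of the dump is what this conformance spec is really asserting: the pod stays Pending, init1 has a Terminated state with RestartCount 3, and run1 never leaves Waiting. A hypothetical checker for that condition, assuming the same corev1 types as in the sketch above (this is not the e2e framework's own helper):

package sketch

import corev1 "k8s.io/api/core/v1"

// initFailedAppNotStarted reports whether the pod looks like the dump above:
// still Pending, at least one init container already terminated and restarted,
// and no app container ever started.
func initFailedAppNotStarted(pod *corev1.Pod) bool {
	if pod.Status.Phase != corev1.PodPending {
		return false
	}
	initRestarted := false
	for _, s := range pod.Status.InitContainerStatuses {
		if s.RestartCount > 0 && s.State.Terminated != nil {
			initRestarted = true
		}
	}
	for _, s := range pod.Status.ContainerStatuses {
		if s.State.Waiting == nil { // an app container started (or finished)
			return false
		}
	}
	return initRestarted
}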
I0911 19:52:48.172] [BeforeEach] [k8s.io] Probing container
I0911 19:52:48.172]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
I0911 19:52:48.172] STEP: Creating a kubernetes client
I0911 19:52:48.172] STEP: Building a namespace api object, basename container-probe
... skipping 31 lines ...
I0911 19:52:48.179] I0911 19:52:40.866235    2618 services.go:156] Get log file "kubelet.log" with journalctl command [-u kubelet-20190911T193949.service].
I0911 19:52:48.179] I0911 19:52:41.401090    2618 services.go:156] Get log file "kern.log" with journalctl command [-k].
I0911 19:52:48.180] I0911 19:52:41.424232    2618 e2e_node_suite_test.go:201] Tests Finished
I0911 19:52:48.180] 
I0911 19:52:48.180] 
I0911 19:52:48.180] Ran 157 of 313 Specs in 757.090 seconds
I0911 19:52:48.180] SUCCESS! -- 157 Passed | 0 Failed | 0 Flaked | 0 Pending | 156 Skipped
I0911 19:52:48.180] 
I0911 19:52:48.181] 
I0911 19:52:48.181] Ginkgo ran 1 suite in 12m38.846191246s
I0911 19:52:48.181] Test Suite Passed
I0911 19:52:48.181] 
I0911 19:52:48.181] Success Finished Test Suite on Host tmp-node-e2e-45789e7f-ubuntu-gke-1804-d1703-0-v20181113
... skipping 6 lines ...
W0911 19:52:48.384] 2019/09/11 19:52:48 process.go:155: Step 'go run /go/src/k8s.io/kubernetes/test/e2e_node/runner/remote/run_remote.go --cleanup --logtostderr --vmodule=*=4 --ssh-env=gce --results-dir=/workspace/_artifacts --project=k8s-jkns-pr-node-e2e --zone=us-west1-b --ssh-user=prow --ssh-key=/workspace/.ssh/google_compute_engine --ginkgo-flags=--nodes=8 --focus="\[NodeConformance\]" --skip="\[Flaky\]|\[Slow\]|\[Serial\]" --flakeAttempts=2 --test_args=--kubelet-flags="--cgroups-per-qos=true --cgroup-root=/" --test-timeout=1h5m0s --image-config-file=/workspace/test-infra/jobs/e2e_node/image-config.yaml' finished in 18m34.079463354s
W0911 19:52:48.385] 2019/09/11 19:52:48 node.go:42: Noop - Node DumpClusterLogs() - /workspace/_artifacts: 
W0911 19:52:48.385] 2019/09/11 19:52:48 node.go:52: Noop - Node Down()
W0911 19:52:48.385] 2019/09/11 19:52:48 process.go:96: Saved XML output to /workspace/_artifacts/junit_runner.xml.
W0911 19:52:48.386] 2019/09/11 19:52:48 process.go:153: Running: bash -c . hack/lib/version.sh && KUBE_ROOT=. kube::version::get_version_vars && echo "${KUBE_GIT_VERSION-}"
W0911 19:52:50.148] 2019/09/11 19:52:50 process.go:155: Step 'bash -c . hack/lib/version.sh && KUBE_ROOT=. kube::version::get_version_vars && echo "${KUBE_GIT_VERSION-}"' finished in 1.763827782s
W0911 19:52:50.149] 2019/09/11 19:52:50 main.go:319: Something went wrong: encountered 1 errors: [error during go run /go/src/k8s.io/kubernetes/test/e2e_node/runner/remote/run_remote.go --cleanup --logtostderr --vmodule=*=4 --ssh-env=gce --results-dir=/workspace/_artifacts --project=k8s-jkns-pr-node-e2e --zone=us-west1-b --ssh-user=prow --ssh-key=/workspace/.ssh/google_compute_engine --ginkgo-flags=--nodes=8 --focus="\[NodeConformance\]" --skip="\[Flaky\]|\[Slow\]|\[Serial\]" --flakeAttempts=2 --test_args=--kubelet-flags="--cgroups-per-qos=true --cgroup-root=/" --test-timeout=1h5m0s --image-config-file=/workspace/test-infra/jobs/e2e_node/image-config.yaml: exit status 1]
W0911 19:52:50.150] Traceback (most recent call last):
W0911 19:52:50.151]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_e2e.py", line 778, in <module>
W0911 19:52:50.155]     main(parse_args())
W0911 19:52:50.155]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_e2e.py", line 626, in main
W0911 19:52:50.156]     mode.start(runner_args)
W0911 19:52:50.156]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_e2e.py", line 262, in start
W0911 19:52:50.156]     check_env(env, self.command, *args)
W0911 19:52:50.157]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_e2e.py", line 111, in check_env
W0911 19:52:50.157]     subprocess.check_call(cmd, env=env)
W0911 19:52:50.158]   File "/usr/lib/python2.7/subprocess.py", line 186, in check_call
W0911 19:52:50.158]     raise CalledProcessError(retcode, cmd)
W0911 19:52:50.159] subprocess.CalledProcessError: Command '('kubetest', '--dump=/workspace/_artifacts', '--gcp-service-account=/etc/service-account/service-account.json', '--up', '--down', '--test', '--deployment=node', '--provider=gce', '--cluster=bootstrap-e2e', '--gcp-network=bootstrap-e2e', '--gcp-project=k8s-jkns-pr-node-e2e', '--gcp-zone=us-west1-b', '--node-test-args=--kubelet-flags="--cgroups-per-qos=true --cgroup-root=/"', '--node-tests=true', '--test_args=--nodes=8 --focus="\\[NodeConformance\\]" --skip="\\[Flaky\\]|\\[Slow\\]|\\[Serial\\]" --flakeAttempts=2', '--timeout=65m', '--node-args=--image-config-file=/workspace/test-infra/jobs/e2e_node/image-config.yaml')' returned non-zero exit status 1
E0911 19:52:50.165] Command failed
I0911 19:52:50.166] process 489 exited with code 1 after 18.6m
E0911 19:52:50.166] FAIL: pull-kubernetes-node-e2e
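The failure above simply propagates outward: run_remote.go exits 1, kubetest's process.go reports the step as an error, subprocess.check_call in kubernetes_e2e.py raises CalledProcessError, and bootstrap records FAIL. A rough Go sketch of that pattern (assumed names, not kubetest's actual process.go), showing how a child's non-zero exit status becomes the "exit status 1" seen in this log:

package sketch

import (
	"errors"
	"fmt"
	"os"
	"os/exec"
)

// runStep runs a child command, streams its output, and turns a non-zero
// exit into an error for the caller to report -- the same shape as the
// "error during go run ...: exit status 1" lines above.
func runStep(name string, args ...string) error {
	cmd := exec.Command(name, args...)
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	if err := cmd.Run(); err != nil {
		var exitErr *exec.ExitError
		if errors.As(err, &exitErr) {
			return fmt.Errorf("error during %s: exit status %d", name, exitErr.ExitCode())
		}
		return err
	}
	return nil
}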
I0911 19:52:50.166] Call:  gcloud auth activate-service-account --key-file=/etc/service-account/service-account.json
W0911 19:52:50.943] Activated service account credentials for: [pr-kubekins@kubernetes-jenkins-pull.iam.gserviceaccount.com]
I0911 19:52:51.009] process 39099 exited with code 0 after 0.0m
I0911 19:52:51.010] Call:  gcloud config get-value account
I0911 19:52:51.440] process 39111 exited with code 0 after 0.0m
I0911 19:52:51.440] Will upload results to gs://kubernetes-jenkins/pr-logs using pr-kubekins@kubernetes-jenkins-pull.iam.gserviceaccount.com
I0911 19:52:51.440] Upload result and artifacts...
I0911 19:52:51.440] Gubernator results at https://gubernator.k8s.io/build/kubernetes-jenkins/pr-logs/pull/82095/pull-kubernetes-node-e2e/1171869236583206912
I0911 19:52:51.441] Call:  gsutil ls gs://kubernetes-jenkins/pr-logs/pull/82095/pull-kubernetes-node-e2e/1171869236583206912/artifacts
W0911 19:52:52.891] CommandException: One or more URLs matched no objects.
E0911 19:52:53.039] Command failed
I0911 19:52:53.039] process 39123 exited with code 1 after 0.0m
W0911 19:52:53.039] Remote dir gs://kubernetes-jenkins/pr-logs/pull/82095/pull-kubernetes-node-e2e/1171869236583206912/artifacts does not exist yet
I0911 19:52:53.040] Call:  gsutil -m -q -o GSUtil:use_magicfile=True cp -r -c -z log,txt,xml /workspace/_artifacts gs://kubernetes-jenkins/pr-logs/pull/82095/pull-kubernetes-node-e2e/1171869236583206912/artifacts
I0911 19:52:57.030] process 39265 exited with code 0 after 0.1m
I0911 19:52:57.031] Call:  git rev-parse HEAD
I0911 19:52:57.036] process 39907 exited with code 0 after 0.0m
... skipping 21 lines ...