PR: lmdaly: Added OWNERS file for Topology Manager
Result: FAILURE
Tests: 1 failed / 478 succeeded
Started: 2019-09-11 19:40
Elapsed: 18m40s
Builder: gke-prow-ssd-pool-1a225945-xk3x
Refs: master:001f2cd2, 81793:fbccf25e
pod: e8cec74a-d4cb-11e9-b767-f2268b853d97
infra-commit: 72663f1bb
job-version: v1.17.0-alpha.0.1269+29239ebd9c9172
repo: k8s.io/kubernetes
repo-commit: 29239ebd9c917243c0ceeb6af65fa9edca435b45
repos: {'k8s.io/kubernetes': 'master:001f2cd2b553d06028c8542c8817820ee05d657f,81793:fbccf25e29194ebcde0ebbabbcbd0e9d14bedb8e'}
revision: v1.17.0-alpha.0.1269+29239ebd9c9172
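The `repos` value above is the job's checkout spec: the base branch with its commit SHA, followed by comma-separated PR-number:SHA pairs merged on top. A minimal sketch of splitting that spec into ref/SHA pairs; the format is inferred from this page, and `parseRefs` is a hypothetical helper, not part of test-infra:

```go
package main

import (
	"fmt"
	"strings"
)

// parseRefs splits a prow-style refs spec such as
// "master:001f2cd2...,81793:fbccf25e..." into (ref, sha) pairs.
// The first pair is the base branch; any later pairs are PR numbers
// whose heads are merged on top of the base before the job runs.
func parseRefs(spec string) [][2]string {
	var pairs [][2]string
	for _, part := range strings.Split(spec, ",") {
		ref, sha, _ := strings.Cut(part, ":")
		pairs = append(pairs, [2]string{ref, sha})
	}
	return pairs
}

func main() {
	spec := "master:001f2cd2b553d06028c8542c8817820ee05d657f," +
		"81793:fbccf25e29194ebcde0ebbabbcbd0e9d14bedb8e"
	for _, p := range parseRefs(spec) {
		fmt.Println(p[0], p[1])
	}
}
```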

Test Failures


Node Tests 17m11s

error during go run /go/src/k8s.io/kubernetes/test/e2e_node/runner/remote/run_remote.go --cleanup --logtostderr --vmodule=*=4 --ssh-env=gce --results-dir=/workspace/_artifacts --project=k8s-jkns-pr-node-e2e --zone=us-west1-b --ssh-user=prow --ssh-key=/workspace/.ssh/google_compute_engine --ginkgo-flags=--nodes=8 --focus="\[NodeConformance\]" --skip="\[Flaky\]|\[Slow\]|\[Serial\]" --flakeAttempts=2 --test_args=--kubelet-flags="--cgroups-per-qos=true --cgroup-root=/" --test-timeout=1h5m0s --image-config-file=/workspace/test-infra/jobs/e2e_node/image-config.yaml: exit status 1
				from junit_runner.xml
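Two root causes are visible further down in build-log.txt: the coreos-beta-1911-1-1-v20181011 image could not be found in the coreos-cloud project (the 400 on instance creation), so the runner then attempted cleanup with an empty instance name and got a 404 from the Compute API. A guard against that empty-name delete could look like the following; this is a hedged sketch, not the actual run_remote.go code:

```go
package main

import (
	"errors"
	"fmt"
)

// errEmptyName covers the situation seen in this log: instance creation
// failed, so the recorded instance name is "", and issuing a delete for it
// can only produce the 404 the runner logged.
var errEmptyName = errors.New("refusing to delete instance with empty name")

// deleteInstance is a hypothetical cleanup helper. Real code would call
// the Compute API's instances.delete with the project, zone, and name.
func deleteInstance(name string) error {
	if name == "" {
		return errEmptyName
	}
	// ... issue the actual Compute API delete here ...
	return nil
}

func main() {
	if err := deleteInstance(""); err != nil {
		fmt.Println("cleanup skipped:", err)
	}
}
```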



478 Passed Tests

468 Skipped Tests

Error lines from build-log.txt

... skipping 259 lines ...
I0911 19:42:40.877] make[1]: Entering directory '/go/src/k8s.io/kubernetes'
W0911 19:42:40.977] I0911 19:42:40.904769    4687 run_remote.go:567] Creating instance {image:cos-stable-60-9592-84-0 imageDesc:cos-stable-60-9592-84-0 project:cos-cloud resources:{Accelerators:[]} metadata:0xc0007243f0 machine: tests:[]} with service account "1046294573453-compute@developer.gserviceaccount.com"
W0911 19:42:40.978] I0911 19:42:40.925068    4687 run_remote.go:567] Creating instance {image:cos-stable-63-10032-71-0 imageDesc:cos-stable-63-10032-71-0 project:cos-cloud resources:{Accelerators:[]} metadata:0xc0007240e0 machine: tests:[]} with service account "1046294573453-compute@developer.gserviceaccount.com"
W0911 19:42:40.979] I0911 19:42:40.942635    4687 run_remote.go:567] Creating instance {image:coreos-beta-1911-1-1-v20181011 imageDesc:coreos-beta-1911-1-1-v20181011 project:coreos-cloud resources:{Accelerators:[]} metadata:0xc0004a8690 machine: tests:[]} with service account "1046294573453-compute@developer.gserviceaccount.com"
W0911 19:42:40.980] I0911 19:42:40.960517    4687 run_remote.go:567] Creating instance {image:ubuntu-gke-1804-d1703-0-v20181113 imageDesc:ubuntu-gke-1804-d1703-0-v20181113 project:ubuntu-os-gke-cloud resources:{Accelerators:[]} metadata:<nil> machine: tests:[]} with service account "1046294573453-compute@developer.gserviceaccount.com"
W0911 19:42:41.813] I0911 19:42:41.813180    4687 run_remote.go:742] Deleting instance ""
W0911 19:42:41.816] E0911 19:42:41.815888    4687 run_remote.go:745] Error deleting instance "": googleapi: got HTTP response code 404 with body: <!DOCTYPE html>
W0911 19:42:41.816] <html lang=en>
W0911 19:42:41.816]   <meta charset=utf-8>
W0911 19:42:41.816]   <meta name=viewport content="initial-scale=1, minimum-scale=1, width=device-width">
W0911 19:42:41.816]   <title>Error 404 (Not Found)!!1</title>
W0911 19:42:41.816]   <style>
W0911 19:42:41.817]     *{margin:0;padding:0}html,code{font:15px/22px arial,sans-serif}html{background:#fff;color:#222;padding:15px}body{margin:7% auto 0;max-width:390px;min-height:180px;padding:30px 0 15px}* > body{background:url(//www.google.com/images/errors/robot.png) 100% 5px no-repeat;padding-right:205px}p{margin:11px 0 22px;overflow:hidden}ins{color:#777;text-decoration:none}a img{border:0}@media screen and (max-width:772px){body{background:none;margin-top:0;max-width:none;padding-right:0}}#logo{background:url(//www.google.com/images/branding/googlelogo/1x/googlelogo_color_150x54dp.png) no-repeat;margin-left:-5px}@media only screen and (min-resolution:192dpi){#logo{background:url(//www.google.com/images/branding/googlelogo/2x/googlelogo_color_150x54dp.png) no-repeat 0% 0%/100% 100%;-moz-border-image:url(//www.google.com/images/branding/googlelogo/2x/googlelogo_color_150x54dp.png) 0}}@media only screen and (-webkit-min-device-pixel-ratio:2){#logo{background:url(//www.google.com/images/branding/googlelogo/2x/googlelogo_color_150x54dp.png) no-repeat;-webkit-background-size:100% 100%}}#logo{display:inline-block;height:54px;width:150px}
W0911 19:42:41.817]   </style>
W0911 19:42:41.818]   <a href=//www.google.com/><span id=logo aria-label=Google></span></a>
W0911 19:42:41.818]   <p><b>404.</b> <ins>That’s an error.</ins>
W0911 19:42:41.818]   <p>The requested URL <code>/compute/beta/projects/k8s-jkns-pr-node-e2e/zones/us-west1-b/instances/?alt=json&amp;prettyPrint=false</code> was not found on this server.  <ins>That’s all we know.</ins>
I0911 19:42:41.918] 
I0911 19:42:41.919] >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
I0911 19:42:41.919] >                              START TEST                                >
I0911 19:42:41.919] >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
I0911 19:42:41.920] Start Test Suite on Host 
I0911 19:42:41.920] 
I0911 19:42:41.920] Failure Finished Test Suite on Host 
I0911 19:42:41.920] unable to create gce instance with running docker daemon for image coreos-beta-1911-1-1-v20181011.  could not create instance tmp-node-e2e-50ee4ec7-coreos-beta-1911-1-1-v20181011: API error: googleapi: Error 400: Invalid value for field 'resource.disks[0].initializeParams.sourceImage': 'projects/coreos-cloud/global/images/coreos-beta-1911-1-1-v20181011'. The referenced image resource cannot be found., invalid
I0911 19:42:41.921] <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<
I0911 19:42:41.921] <                              FINISH TEST                               <
I0911 19:42:41.921] <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<
I0911 19:42:41.921] 
I0911 19:42:52.202] +++ [0911 19:42:52] Building go targets for linux/amd64:
I0911 19:42:52.202]     ./vendor/k8s.io/code-generator/cmd/deepcopy-gen
... skipping 223 lines ...
I0911 19:58:19.964] STEP: Creating a kubernetes client
I0911 19:58:19.964] STEP: Building a namespace api object, basename init-container
I0911 19:58:19.964] Sep 11 19:50:39.950: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
I0911 19:58:19.965] Sep 11 19:50:39.950: INFO: Skipping waiting for service account
I0911 19:58:19.965] [BeforeEach] [k8s.io] InitContainer [NodeConformance]
I0911 19:58:19.966]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44
I0911 19:58:19.966] [It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
I0911 19:58:19.967]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
I0911 19:58:19.967] STEP: creating the pod
I0911 19:58:19.967] Sep 11 19:50:39.950: INFO: PodSpec: initContainers in spec.initContainers
I0911 19:58:19.968] [AfterEach] [k8s.io] InitContainer [NodeConformance]
I0911 19:58:19.968]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
I0911 19:58:19.968] Sep 11 19:50:46.807: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 2 lines ...
I0911 19:58:19.970] Sep 11 19:50:52.940: INFO: namespace init-container-7753 deletion completed in 6.130716569s
I0911 19:58:19.971] 
I0911 19:58:19.971] 
I0911 19:58:19.972] • [SLOW TEST:13.057 seconds]
I0911 19:58:19.972] [k8s.io] InitContainer [NodeConformance]
I0911 19:58:19.972] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:693
I0911 19:58:19.973]   should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
I0911 19:58:19.973]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
I0911 19:58:19.974] ------------------------------
I0911 19:58:19.974] [BeforeEach] [sig-storage] EmptyDir volumes
I0911 19:58:19.975]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
I0911 19:58:19.975] STEP: Creating a kubernetes client
I0911 19:58:19.975] STEP: Building a namespace api object, basename emptydir
... skipping 3203 lines ...
I0911 19:58:20.692]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
I0911 19:58:20.692] STEP: Creating a kubernetes client
I0911 19:58:20.692] STEP: Building a namespace api object, basename init-container
I0911 19:58:20.693] Sep 11 19:54:46.504: INFO: Skipping waiting for service account
I0911 19:58:20.693] [BeforeEach] [k8s.io] InitContainer [NodeConformance]
I0911 19:58:20.693]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44
I0911 19:58:20.693] [It] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
I0911 19:58:20.693]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
I0911 19:58:20.694] STEP: creating the pod
I0911 19:58:20.694] Sep 11 19:54:46.504: INFO: PodSpec: initContainers in spec.initContainers
I0911 19:58:20.702] Sep 11 19:55:31.354: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-b6f97d5a-926a-4803-a7ec-670f2bf8bf6e", GenerateName:"", Namespace:"init-container-6750", SelfLink:"/api/v1/namespaces/init-container-6750/pods/pod-init-b6f97d5a-926a-4803-a7ec-670f2bf8bf6e", UID:"805b168e-5d15-405f-a61c-02899b11f756", ResourceVersion:"2071", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63703828486, loc:(*time.Location)(0xbe81a00)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"504316786"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume(nil), InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount(nil), VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount(nil), 
VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount(nil), VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"Never", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc000ba5710), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"Default", NodeSelector:map[string]string(nil), ServiceAccountName:"", DeprecatedServiceAccount:"", AutomountServiceAccountToken:(*bool)(nil), 
NodeName:"tmp-node-e2e-50ee4ec7-cos-stable-60-9592-84-0", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc000ceb6e0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc000ba57f0)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc000ba5810)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc000ba5850), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc000ba5854), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63703828486, loc:(*time.Location)(0xbe81a00)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63703828486, loc:(*time.Location)(0xbe81a00)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63703828486, loc:(*time.Location)(0xbe81a00)}}, 
Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63703828486, loc:(*time.Location)(0xbe81a00)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"10.138.0.11", PodIP:"10.100.0.102", PodIPs:[]v1.PodIP{v1.PodIP{IP:"10.100.0.102"}}, StartTime:(*v1.Time)(0xc0009ec080), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc0006e88c0)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc0006e8930)}, Ready:false, RestartCount:3, Image:"busybox:1.29", ImageID:"docker-pullable://busybox@sha256:e004c2cc521c95383aebb1fb5893719aa7a8eae2e7a71f316a4410784edb00a9", ContainerID:"docker://1d7024b9f6b5737f8ecfd1abf9aba3109766418c10493459398b989bf21d4570", Started:(*bool)(nil)}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc0009ec0a0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:"", Started:(*bool)(nil)}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc0009ec0c0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, 
LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:"", Started:(*bool)(0xc000ba59dc)}}, QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}}
I0911 19:58:20.702] [AfterEach] [k8s.io] InitContainer [NodeConformance]
I0911 19:58:20.703]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
I0911 19:58:20.703] Sep 11 19:55:31.356: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0911 19:58:20.703] STEP: Destroying namespace "init-container-6750" for this suite.
I0911 19:58:20.703] Sep 11 19:55:59.364: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
I0911 19:58:20.704] Sep 11 19:55:59.426: INFO: namespace init-container-6750 deletion completed in 28.068645989s
I0911 19:58:20.704] 
I0911 19:58:20.704] 
I0911 19:58:20.704] • [SLOW TEST:72.925 seconds]
I0911 19:58:20.704] [k8s.io] InitContainer [NodeConformance]
I0911 19:58:20.704] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:693
I0911 19:58:20.705]   should not start app containers if init containers fail on a RestartAlways pod [Conformance]
I0911 19:58:20.705]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
I0911 19:58:20.705] ------------------------------
I0911 19:58:20.705] [BeforeEach] [sig-storage] Downward API volume
I0911 19:58:20.705]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
I0911 19:58:20.705] STEP: Creating a kubernetes client
I0911 19:58:20.706] STEP: Building a namespace api object, basename downward-api
... skipping 933 lines ...
I0911 19:58:20.885] STEP: Creating a kubernetes client
I0911 19:58:20.885] STEP: Building a namespace api object, basename container-runtime
I0911 19:58:20.885] Sep 11 19:57:02.409: INFO: Skipping waiting for service account
I0911 19:58:20.885] [It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
I0911 19:58:20.886]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
I0911 19:58:20.886] STEP: create the container
I0911 19:58:20.886] STEP: wait for the container to reach Failed
I0911 19:58:20.886] STEP: get the container status
I0911 19:58:20.886] STEP: the container should be terminated
I0911 19:58:20.886] STEP: the termination message should be set
I0911 19:58:20.886] Sep 11 19:57:04.482: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
I0911 19:58:20.887] STEP: delete the container
I0911 19:58:20.887] [AfterEach] [k8s.io] Container Runtime
... skipping 248 lines ...
I0911 19:58:20.926] STEP: verifying the pod is in kubernetes
I0911 19:58:20.926] STEP: updating the pod
I0911 19:58:20.926] Sep 11 19:57:13.111: INFO: Successfully updated pod "pod-update-activedeadlineseconds-08bbba3b-eb80-429c-97e5-9bd221b63bac"
I0911 19:58:20.927] Sep 11 19:57:13.111: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-08bbba3b-eb80-429c-97e5-9bd221b63bac" in namespace "pods-6377" to be "terminated due to deadline exceeded"
I0911 19:58:20.927] Sep 11 19:57:13.117: INFO: Pod "pod-update-activedeadlineseconds-08bbba3b-eb80-429c-97e5-9bd221b63bac": Phase="Running", Reason="", readiness=true. Elapsed: 5.701936ms
I0911 19:58:20.927] Sep 11 19:57:15.130: INFO: Pod "pod-update-activedeadlineseconds-08bbba3b-eb80-429c-97e5-9bd221b63bac": Phase="Running", Reason="", readiness=true. Elapsed: 2.018770947s
I0911 19:58:20.927] Sep 11 19:57:17.131: INFO: Pod "pod-update-activedeadlineseconds-08bbba3b-eb80-429c-97e5-9bd221b63bac": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 4.020441929s
I0911 19:58:20.927] Sep 11 19:57:17.131: INFO: Pod "pod-update-activedeadlineseconds-08bbba3b-eb80-429c-97e5-9bd221b63bac" satisfied condition "terminated due to deadline exceeded"
I0911 19:58:20.928] [AfterEach] [k8s.io] Pods
I0911 19:58:20.928]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
I0911 19:58:20.928] Sep 11 19:57:17.131: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0911 19:58:20.928] STEP: Destroying namespace "pods-6377" for this suite.
I0911 19:58:20.928] Sep 11 19:57:29.138: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
... skipping 168 lines ...
I0911 19:58:20.961] [BeforeEach] [k8s.io] Security Context
I0911 19:58:20.961]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:40
I0911 19:58:20.961] [It] should run the container with readonly rootfs when readOnlyRootFilesystem=true [LinuxOnly] [NodeConformance]
I0911 19:58:20.961]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:211
I0911 19:58:20.962] Sep 11 19:57:29.623: INFO: Waiting up to 5m0s for pod "busybox-readonly-true-4d49e8b4-6ee9-411f-9551-b7becdb916d3" in namespace "security-context-test-1899" to be "success or failure"
I0911 19:58:20.962] Sep 11 19:57:29.635: INFO: Pod "busybox-readonly-true-4d49e8b4-6ee9-411f-9551-b7becdb916d3": Phase="Pending", Reason="", readiness=false. Elapsed: 11.436585ms
I0911 19:58:20.962] Sep 11 19:57:31.636: INFO: Pod "busybox-readonly-true-4d49e8b4-6ee9-411f-9551-b7becdb916d3": Phase="Failed", Reason="", readiness=false. Elapsed: 2.013003727s
I0911 19:58:20.962] Sep 11 19:57:31.636: INFO: Pod "busybox-readonly-true-4d49e8b4-6ee9-411f-9551-b7becdb916d3" satisfied condition "success or failure"
I0911 19:58:20.963] [AfterEach] [k8s.io] Security Context
I0911 19:58:20.963]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
I0911 19:58:20.963] Sep 11 19:57:31.636: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0911 19:58:20.963] STEP: Destroying namespace "security-context-test-1899" for this suite.
I0911 19:58:20.964] Sep 11 19:57:37.651: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
... skipping 365 lines ...
I0911 19:58:21.041]   should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
I0911 19:58:21.041]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
I0911 19:58:21.041] ------------------------------
I0911 19:58:21.041] I0911 19:58:12.435680    1272 e2e_node_suite_test.go:196] Stopping node services...
I0911 19:58:21.041] I0911 19:58:12.435707    1272 server.go:257] Kill server "services"
I0911 19:58:21.042] I0911 19:58:12.435716    1272 server.go:294] Killing process 1899 (services) with -TERM
I0911 19:58:21.042] E0911 19:58:12.549936    1272 services.go:88] Failed to stop services: error stopping "services": waitid: no child processes
I0911 19:58:21.042] I0911 19:58:12.549952    1272 server.go:257] Kill server "kubelet"
I0911 19:58:21.042] I0911 19:58:12.558690    1272 services.go:147] Fetching log files...
I0911 19:58:21.043] I0911 19:58:12.558731    1272 services.go:156] Get log file "kern.log" with journalctl command [-k].
I0911 19:58:21.043] I0911 19:58:12.659807    1272 services.go:156] Get log file "cloud-init.log" with journalctl command [-u cloud*].
I0911 19:58:21.043] I0911 19:58:13.102376    1272 services.go:156] Get log file "docker.log" with journalctl command [-u docker].
I0911 19:58:21.043] I0911 19:58:13.127969    1272 services.go:156] Get log file "kubelet.log" with journalctl command [-u kubelet-20190911T194847.service].
I0911 19:58:21.043] I0911 19:58:13.938044    1272 e2e_node_suite_test.go:201] Tests Finished
I0911 19:58:21.044] 
I0911 19:58:21.044] 
I0911 19:58:21.044] Ran 157 of 313 Specs in 550.247 seconds
I0911 19:58:21.044] SUCCESS! -- 157 Passed | 0 Failed | 0 Flaked | 0 Pending | 156 Skipped
I0911 19:58:21.044] 
I0911 19:58:21.044] 
I0911 19:58:21.044] Ginkgo ran 1 suite in 9m14.067794723s
I0911 19:58:21.045] Test Suite Passed
I0911 19:58:21.045] 
I0911 19:58:21.045] Success Finished Test Suite on Host tmp-node-e2e-50ee4ec7-cos-stable-60-9592-84-0
... skipping 152 lines ...
I0911 19:58:23.080] STEP: Creating a kubernetes client
I0911 19:58:23.080] STEP: Building a namespace api object, basename init-container
I0911 19:58:23.080] Sep 11 19:50:38.560: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
I0911 19:58:23.080] Sep 11 19:50:38.560: INFO: Skipping waiting for service account
I0911 19:58:23.080] [BeforeEach] [k8s.io] InitContainer [NodeConformance]
I0911 19:58:23.081]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44
I0911 19:58:23.081] [It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
I0911 19:58:23.081]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
I0911 19:58:23.081] STEP: creating the pod
I0911 19:58:23.081] Sep 11 19:50:38.560: INFO: PodSpec: initContainers in spec.initContainers
I0911 19:58:23.082] [AfterEach] [k8s.io] InitContainer [NodeConformance]
I0911 19:58:23.082]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
I0911 19:58:23.082] Sep 11 19:50:45.433: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 2 lines ...
I0911 19:58:23.083] Sep 11 19:50:51.563: INFO: namespace init-container-5538 deletion completed in 6.128496779s
I0911 19:58:23.083] 
I0911 19:58:23.083] 
I0911 19:58:23.083] • [SLOW TEST:13.077 seconds]
I0911 19:58:23.083] [k8s.io] InitContainer [NodeConformance]
I0911 19:58:23.083] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:693
I0911 19:58:23.084]   should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
I0911 19:58:23.084]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
I0911 19:58:23.084] ------------------------------
I0911 19:58:23.084] [BeforeEach] [sig-storage] Projected configMap
I0911 19:58:23.084]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
I0911 19:58:23.085] STEP: Creating a kubernetes client
I0911 19:58:23.085] STEP: Building a namespace api object, basename projected
... skipping 3413 lines ...
I0911 19:58:24.068]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
I0911 19:58:24.068] STEP: Creating a kubernetes client
I0911 19:58:24.068] STEP: Building a namespace api object, basename init-container
I0911 19:58:24.069] Sep 11 19:54:55.150: INFO: Skipping waiting for service account
I0911 19:58:24.069] [BeforeEach] [k8s.io] InitContainer [NodeConformance]
I0911 19:58:24.069]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44
I0911 19:58:24.069] [It] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
I0911 19:58:24.069]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
I0911 19:58:24.069] STEP: creating the pod
I0911 19:58:24.070] Sep 11 19:54:55.150: INFO: PodSpec: initContainers in spec.initContainers
I0911 19:58:24.078] Sep 11 19:55:45.381: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-9c798a3e-b769-4dfb-adf6-fbc8b26e3a16", GenerateName:"", Namespace:"init-container-4454", SelfLink:"/api/v1/namespaces/init-container-4454/pods/pod-init-9c798a3e-b769-4dfb-adf6-fbc8b26e3a16", UID:"2a6e1fb1-3012-4edb-a66c-cfa7d5b44ccc", ResourceVersion:"2183", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63703828495, loc:(*time.Location)(0xbe81a00)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"150097802"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume(nil), InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount(nil), VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount(nil), 
VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount(nil), VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"Never", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc0005b9850), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"Default", NodeSelector:map[string]string(nil), ServiceAccountName:"", DeprecatedServiceAccount:"", AutomountServiceAccountToken:(*bool)(nil), 
NodeName:"tmp-node-e2e-50ee4ec7-cos-stable-63-10032-71-0", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc000ca8ea0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0005b9a30)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0005b9ac0)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc0005b9ad0), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc0005b9ad4), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63703828495, loc:(*time.Location)(0xbe81a00)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63703828495, loc:(*time.Location)(0xbe81a00)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63703828495, loc:(*time.Location)(0xbe81a00)}}, 
Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63703828495, loc:(*time.Location)(0xbe81a00)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"10.138.0.12", PodIP:"10.100.0.103", PodIPs:[]v1.PodIP{v1.PodIP{IP:"10.100.0.103"}}, StartTime:(*v1.Time)(0xc000954820), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc000aee930)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc000aeea10)}, Ready:false, RestartCount:3, Image:"busybox:1.29", ImageID:"docker-pullable://busybox@sha256:e004c2cc521c95383aebb1fb5893719aa7a8eae2e7a71f316a4410784edb00a9", ContainerID:"docker://a5cdf1006526b26219193f87cc0cfc4ee7a3c994fc264c24d529c4ba7cc3cf50", Started:(*bool)(nil)}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc000954840), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:"", Started:(*bool)(nil)}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc000954860), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, 
LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:"", Started:(*bool)(0xc0005b9e6c)}}, QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}}
I0911 19:58:24.078] [AfterEach] [k8s.io] InitContainer [NodeConformance]
I0911 19:58:24.079]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
I0911 19:58:24.079] Sep 11 19:55:45.382: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0911 19:58:24.079] STEP: Destroying namespace "init-container-4454" for this suite.
I0911 19:58:24.079] Sep 11 19:56:13.404: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
I0911 19:58:24.079] Sep 11 19:56:13.448: INFO: namespace init-container-4454 deletion completed in 28.057324987s
I0911 19:58:24.079] 
I0911 19:58:24.080] 
I0911 19:58:24.080] • [SLOW TEST:78.301 seconds]
I0911 19:58:24.080] [k8s.io] InitContainer [NodeConformance]
I0911 19:58:24.080] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:693
I0911 19:58:24.080]   should not start app containers if init containers fail on a RestartAlways pod [Conformance]
I0911 19:58:24.080]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
I0911 19:58:24.081] ------------------------------
I0911 19:58:24.081] [BeforeEach] [sig-storage] ConfigMap
I0911 19:58:24.081]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
I0911 19:58:24.081] STEP: Creating a kubernetes client
I0911 19:58:24.081] STEP: Building a namespace api object, basename configmap
... skipping 771 lines ...
I0911 19:58:24.245] STEP: Creating a kubernetes client
I0911 19:58:24.245] STEP: Building a namespace api object, basename container-runtime
I0911 19:58:24.245] Sep 11 19:57:02.532: INFO: Skipping waiting for service account
I0911 19:58:24.245] [It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
I0911 19:58:24.245]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
I0911 19:58:24.246] STEP: create the container
I0911 19:58:24.246] STEP: wait for the container to reach Failed
I0911 19:58:24.246] STEP: get the container status
I0911 19:58:24.246] STEP: the container should be terminated
I0911 19:58:24.246] STEP: the termination message should be set
I0911 19:58:24.246] Sep 11 19:57:04.641: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
I0911 19:58:24.247] STEP: delete the container
I0911 19:58:24.247] [AfterEach] [k8s.io] Container Runtime
... skipping 151 lines ...
I0911 19:58:24.276] STEP: submitting the pod to kubernetes
I0911 19:58:24.276] STEP: verifying the pod is in kubernetes
I0911 19:58:24.276] STEP: updating the pod
I0911 19:58:24.276] Sep 11 19:57:15.242: INFO: Successfully updated pod "pod-update-activedeadlineseconds-203e178e-d48a-4e3f-ab62-e5bb51cdc7ff"
I0911 19:58:24.276] Sep 11 19:57:15.242: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-203e178e-d48a-4e3f-ab62-e5bb51cdc7ff" in namespace "pods-7259" to be "terminated due to deadline exceeded"
I0911 19:58:24.277] Sep 11 19:57:15.258: INFO: Pod "pod-update-activedeadlineseconds-203e178e-d48a-4e3f-ab62-e5bb51cdc7ff": Phase="Running", Reason="", readiness=true. Elapsed: 16.354252ms
I0911 19:58:24.277] Sep 11 19:57:17.269: INFO: Pod "pod-update-activedeadlineseconds-203e178e-d48a-4e3f-ab62-e5bb51cdc7ff": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.026549602s
I0911 19:58:24.277] Sep 11 19:57:17.269: INFO: Pod "pod-update-activedeadlineseconds-203e178e-d48a-4e3f-ab62-e5bb51cdc7ff" satisfied condition "terminated due to deadline exceeded"
I0911 19:58:24.277] [AfterEach] [k8s.io] Pods
I0911 19:58:24.278]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
I0911 19:58:24.278] Sep 11 19:57:17.269: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0911 19:58:24.278] STEP: Destroying namespace "pods-7259" for this suite.
I0911 19:58:24.278] Sep 11 19:57:23.512: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
... skipping 200 lines ...
I0911 19:58:24.317] [BeforeEach] [k8s.io] Security Context
I0911 19:58:24.318]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:40
I0911 19:58:24.318] [It] should run the container with readonly rootfs when readOnlyRootFilesystem=true [LinuxOnly] [NodeConformance]
I0911 19:58:24.318]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:211
I0911 19:58:24.318] Sep 11 19:57:25.944: INFO: Waiting up to 5m0s for pod "busybox-readonly-true-e5f27b9c-fbcd-41ce-b345-28a874aa14d4" in namespace "security-context-test-7457" to be "success or failure"
I0911 19:58:24.318] Sep 11 19:57:25.954: INFO: Pod "busybox-readonly-true-e5f27b9c-fbcd-41ce-b345-28a874aa14d4": Phase="Pending", Reason="", readiness=false. Elapsed: 9.703042ms
I0911 19:58:24.319] Sep 11 19:57:27.956: INFO: Pod "busybox-readonly-true-e5f27b9c-fbcd-41ce-b345-28a874aa14d4": Phase="Failed", Reason="", readiness=false. Elapsed: 2.01165139s
I0911 19:58:24.319] Sep 11 19:57:27.956: INFO: Pod "busybox-readonly-true-e5f27b9c-fbcd-41ce-b345-28a874aa14d4" satisfied condition "success or failure"
I0911 19:58:24.319] [AfterEach] [k8s.io] Security Context
I0911 19:58:24.319]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
I0911 19:58:24.320] Sep 11 19:57:27.956: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0911 19:58:24.320] STEP: Destroying namespace "security-context-test-7457" for this suite.
I0911 19:58:24.320] Sep 11 19:57:35.963: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
... skipping 433 lines ...
I0911 19:58:24.400]   should *not* be restarted with a tcp:8080 liveness probe [NodeConformance]
I0911 19:58:24.400]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:167
I0911 19:58:24.400] ------------------------------
I0911 19:58:24.401] I0911 19:58:15.317547    1289 e2e_node_suite_test.go:196] Stopping node services...
I0911 19:58:24.401] I0911 19:58:15.317577    1289 server.go:257] Kill server "services"
I0911 19:58:24.401] I0911 19:58:15.317592    1289 server.go:294] Killing process 1890 (services) with -TERM
I0911 19:58:24.401] E0911 19:58:15.490203    1289 services.go:88] Failed to stop services: error stopping "services": waitid: no child processes
I0911 19:58:24.401] I0911 19:58:15.490220    1289 server.go:257] Kill server "kubelet"
I0911 19:58:24.402] I0911 19:58:15.499067    1289 services.go:147] Fetching log files...
I0911 19:58:24.402] I0911 19:58:15.499140    1289 services.go:156] Get log file "kern.log" with journalctl command [-k].
I0911 19:58:24.402] I0911 19:58:15.632676    1289 services.go:156] Get log file "cloud-init.log" with journalctl command [-u cloud*].
I0911 19:58:24.402] I0911 19:58:16.250794    1289 services.go:156] Get log file "docker.log" with journalctl command [-u docker].
I0911 19:58:24.402] I0911 19:58:16.284497    1289 services.go:156] Get log file "kubelet.log" with journalctl command [-u kubelet-20190911T194847.service].
I0911 19:58:24.403] I0911 19:58:17.325513    1289 e2e_node_suite_test.go:201] Tests Finished
I0911 19:58:24.403] 
I0911 19:58:24.403] 
I0911 19:58:24.403] Ran 157 of 313 Specs in 554.136 seconds
I0911 19:58:24.403] SUCCESS! -- 157 Passed | 0 Failed | 0 Flaked | 0 Pending | 156 Skipped
I0911 19:58:24.403] 
I0911 19:58:24.403] 
I0911 19:58:24.404] Ginkgo ran 1 suite in 9m17.365193913s
I0911 19:58:24.404] Test Suite Passed
I0911 19:58:24.404] 
I0911 19:58:24.404] Success Finished Test Suite on Host tmp-node-e2e-50ee4ec7-cos-stable-63-10032-71-0
... skipping 680 lines ...
I0911 19:59:01.891] STEP: Creating a kubernetes client
I0911 19:59:01.891] STEP: Building a namespace api object, basename init-container
I0911 19:59:01.891] Sep 11 19:50:48.429: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
I0911 19:59:01.891] Sep 11 19:50:48.429: INFO: Skipping waiting for service account
I0911 19:59:01.891] [BeforeEach] [k8s.io] InitContainer [NodeConformance]
I0911 19:59:01.891]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44
I0911 19:59:01.892] [It] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
I0911 19:59:01.892]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
I0911 19:59:01.892] STEP: creating the pod
I0911 19:59:01.892] Sep 11 19:50:48.429: INFO: PodSpec: initContainers in spec.initContainers
I0911 19:59:01.899] Sep 11 19:51:38.868: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-2cbd57dc-f8f5-4fee-b9db-f261592a0881", GenerateName:"", Namespace:"init-container-1580", SelfLink:"/api/v1/namespaces/init-container-1580/pods/pod-init-2cbd57dc-f8f5-4fee-b9db-f261592a0881", UID:"1ead9725-936a-4d37-8ae7-224e446276da", ResourceVersion:"463", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63703828251, loc:(*time.Location)(0xbe81a00)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"429781236"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume(nil), InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount(nil), VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount(nil), 
VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount(nil), VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"Never", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc0010d4be0), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"Default", NodeSelector:map[string]string(nil), ServiceAccountName:"", DeprecatedServiceAccount:"", AutomountServiceAccountToken:(*bool)(nil), 
NodeName:"tmp-node-e2e-50ee4ec7-ubuntu-gke-1804-d1703-0-v20181113", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc000fef500), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0010d4c50)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0010d4c70)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc0010d4c80), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc0010d4c84), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63703828251, loc:(*time.Location)(0xbe81a00)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63703828251, loc:(*time.Location)(0xbe81a00)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63703828251, 
loc:(*time.Location)(0xbe81a00)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63703828251, loc:(*time.Location)(0xbe81a00)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"10.138.0.10", PodIP:"10.100.0.4", PodIPs:[]v1.PodIP{v1.PodIP{IP:"10.100.0.4"}}, StartTime:(*v1.Time)(0xc0003ad080), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc000858700)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc000858770)}, Ready:false, RestartCount:3, Image:"busybox:1.29", ImageID:"docker-pullable://busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"docker://8fc957c0fa73e32fb3ab766291ba37657fda1bde405dd84af031cc857950531c", Started:(*bool)(nil)}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc0003ad160), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:"", Started:(*bool)(nil)}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc0003ad180), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, 
LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:"", Started:(*bool)(0xc0010d4d74)}}, QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}}
I0911 19:59:01.900] [AfterEach] [k8s.io] InitContainer [NodeConformance]
I0911 19:59:01.900]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
I0911 19:59:01.900] Sep 11 19:51:38.869: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0911 19:59:01.901] STEP: Destroying namespace "init-container-1580" for this suite.
I0911 19:59:01.901] Sep 11 19:51:52.889: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
I0911 19:59:01.901] Sep 11 19:51:52.948: INFO: namespace init-container-1580 deletion completed in 14.069949233s
I0911 19:59:01.901] 
I0911 19:59:01.901] 
I0911 19:59:01.901] • [SLOW TEST:64.624 seconds]
I0911 19:59:01.902] [k8s.io] InitContainer [NodeConformance]
I0911 19:59:01.902] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:693
I0911 19:59:01.902]   should not start app containers if init containers fail on a RestartAlways pod [Conformance]
I0911 19:59:01.902]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
I0911 19:59:01.903] ------------------------------
I0911 19:59:01.903] [BeforeEach] [k8s.io] Kubelet
I0911 19:59:01.903]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
I0911 19:59:01.903] STEP: Creating a kubernetes client
I0911 19:59:01.903] STEP: Building a namespace api object, basename kubelet-test
... skipping 1423 lines ...
I0911 19:59:02.172]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
I0911 19:59:02.172] STEP: Creating a kubernetes client
I0911 19:59:02.173] STEP: Building a namespace api object, basename init-container
I0911 19:59:02.173] Sep 11 19:53:53.601: INFO: Skipping waiting for service account
I0911 19:59:02.173] [BeforeEach] [k8s.io] InitContainer [NodeConformance]
I0911 19:59:02.173]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44
I0911 19:59:02.173] [It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
I0911 19:59:02.174]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
I0911 19:59:02.174] STEP: creating the pod
I0911 19:59:02.174] Sep 11 19:53:53.601: INFO: PodSpec: initContainers in spec.initContainers
I0911 19:59:02.174] [AfterEach] [k8s.io] InitContainer [NodeConformance]
I0911 19:59:02.174]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
I0911 19:59:02.175] Sep 11 19:53:56.007: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 2 lines ...
I0911 19:59:02.175] Sep 11 19:54:02.071: INFO: namespace init-container-7885 deletion completed in 6.05971543s
I0911 19:59:02.175] 
I0911 19:59:02.175] 
I0911 19:59:02.176] • [SLOW TEST:8.473 seconds]
I0911 19:59:02.176] [k8s.io] InitContainer [NodeConformance]
I0911 19:59:02.176] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:693
I0911 19:59:02.176]   should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
I0911 19:59:02.176]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
I0911 19:59:02.177] ------------------------------
I0911 19:59:02.177] [BeforeEach] [k8s.io] Container Runtime
I0911 19:59:02.177]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
I0911 19:59:02.177] STEP: Creating a kubernetes client
I0911 19:59:02.177] STEP: Building a namespace api object, basename container-runtime
... skipping 458 lines ...
I0911 19:59:02.285] [BeforeEach] [k8s.io] Security Context
I0911 19:59:02.285]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:40
I0911 19:59:02.285] [It] should run the container with readonly rootfs when readOnlyRootFilesystem=true [LinuxOnly] [NodeConformance]
I0911 19:59:02.285]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:211
I0911 19:59:02.286] Sep 11 19:54:36.519: INFO: Waiting up to 5m0s for pod "busybox-readonly-true-eaa273b7-3593-47de-9f1f-f52e12cec717" in namespace "security-context-test-9625" to be "success or failure"
I0911 19:59:02.286] Sep 11 19:54:36.530: INFO: Pod "busybox-readonly-true-eaa273b7-3593-47de-9f1f-f52e12cec717": Phase="Pending", Reason="", readiness=false. Elapsed: 11.442436ms
I0911 19:59:02.286] Sep 11 19:54:38.532: INFO: Pod "busybox-readonly-true-eaa273b7-3593-47de-9f1f-f52e12cec717": Phase="Failed", Reason="", readiness=false. Elapsed: 2.013317674s
I0911 19:59:02.287] Sep 11 19:54:38.532: INFO: Pod "busybox-readonly-true-eaa273b7-3593-47de-9f1f-f52e12cec717" satisfied condition "success or failure"
I0911 19:59:02.287] [AfterEach] [k8s.io] Security Context
I0911 19:59:02.287]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
I0911 19:59:02.287] Sep 11 19:54:38.532: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0911 19:59:02.288] STEP: Destroying namespace "security-context-test-9625" for this suite.
I0911 19:59:02.288] Sep 11 19:54:44.539: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
... skipping 1682 lines ...
I0911 19:59:02.672] STEP: Creating a kubernetes client
I0911 19:59:02.673] STEP: Building a namespace api object, basename container-runtime
I0911 19:59:02.673] Sep 11 19:56:41.467: INFO: Skipping waiting for service account
I0911 19:59:02.673] [It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
I0911 19:59:02.673]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
I0911 19:59:02.674] STEP: create the container
I0911 19:59:02.674] STEP: wait for the container to reach Failed
I0911 19:59:02.674] STEP: get the container status
I0911 19:59:02.674] STEP: the container should be terminated
I0911 19:59:02.675] STEP: the termination message should be set
I0911 19:59:02.675] Sep 11 19:56:42.485: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
I0911 19:59:02.675] STEP: delete the container
I0911 19:59:02.675] [AfterEach] [k8s.io] Container Runtime
... skipping 658 lines ...
I0911 19:59:02.821] STEP: verifying the pod is in kubernetes
I0911 19:59:02.821] STEP: updating the pod
I0911 19:59:02.821] Sep 11 19:57:48.059: INFO: Successfully updated pod "pod-update-activedeadlineseconds-ecc9d8e2-c16c-4f7f-8dc6-2c68cf5f7d07"
I0911 19:59:02.821] Sep 11 19:57:48.059: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-ecc9d8e2-c16c-4f7f-8dc6-2c68cf5f7d07" in namespace "pods-2298" to be "terminated due to deadline exceeded"
I0911 19:59:02.821] Sep 11 19:57:48.061: INFO: Pod "pod-update-activedeadlineseconds-ecc9d8e2-c16c-4f7f-8dc6-2c68cf5f7d07": Phase="Running", Reason="", readiness=true. Elapsed: 1.505716ms
I0911 19:59:02.822] Sep 11 19:57:50.062: INFO: Pod "pod-update-activedeadlineseconds-ecc9d8e2-c16c-4f7f-8dc6-2c68cf5f7d07": Phase="Running", Reason="", readiness=true. Elapsed: 2.003221648s
I0911 19:59:02.822] Sep 11 19:57:52.064: INFO: Pod "pod-update-activedeadlineseconds-ecc9d8e2-c16c-4f7f-8dc6-2c68cf5f7d07": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 4.0049991s
I0911 19:59:02.822] Sep 11 19:57:52.064: INFO: Pod "pod-update-activedeadlineseconds-ecc9d8e2-c16c-4f7f-8dc6-2c68cf5f7d07" satisfied condition "terminated due to deadline exceeded"
I0911 19:59:02.823] [AfterEach] [k8s.io] Pods
I0911 19:59:02.823]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
I0911 19:59:02.823] Sep 11 19:57:52.064: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0911 19:59:02.823] STEP: Destroying namespace "pods-2298" for this suite.
I0911 19:59:02.823] Sep 11 19:57:58.073: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
... skipping 238 lines ...
I0911 19:59:02.885]   should have monotonically increasing restart count [NodeConformance] [Conformance]
I0911 19:59:02.885]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
I0911 19:59:02.885] ------------------------------
I0911 19:59:02.886] I0911 19:58:53.431727    2567 e2e_node_suite_test.go:196] Stopping node services...
I0911 19:59:02.886] I0911 19:58:53.431777    2567 server.go:257] Kill server "services"
I0911 19:59:02.886] I0911 19:58:53.431788    2567 server.go:294] Killing process 3274 (services) with -TERM
I0911 19:59:02.887] E0911 19:58:53.478026    2567 services.go:88] Failed to stop services: error stopping "services": waitid: no child processes
I0911 19:59:02.887] I0911 19:58:53.478073    2567 server.go:257] Kill server "kubelet"
I0911 19:59:02.887] I0911 19:58:53.488486    2567 services.go:147] Fetching log files...
I0911 19:59:02.887] I0911 19:58:53.488569    2567 services.go:156] Get log file "kubelet.log" with journalctl command [-u kubelet-20190911T194847.service].
I0911 19:59:02.888] I0911 19:58:53.994208    2567 services.go:156] Get log file "kern.log" with journalctl command [-k].
I0911 19:59:02.888] I0911 19:58:54.017624    2567 services.go:156] Get log file "cloud-init.log" with journalctl command [-u cloud*].
I0911 19:59:02.888] I0911 19:58:54.035461    2567 services.go:156] Get log file "docker.log" with journalctl command [-u docker].
I0911 19:59:02.888] I0911 19:58:54.060975    2567 e2e_node_suite_test.go:201] Tests Finished
I0911 19:59:02.889] 
I0911 19:59:02.889] 
I0911 19:59:02.889] Ran 157 of 313 Specs in 591.139 seconds
I0911 19:59:02.889] SUCCESS! -- 157 Passed | 0 Failed | 0 Flaked | 0 Pending | 156 Skipped
I0911 19:59:02.889] 
I0911 19:59:02.889] 
I0911 19:59:02.890] Ginkgo ran 1 suite in 9m53.191308453s
I0911 19:59:02.890] Test Suite Passed
I0911 19:59:02.890] 
I0911 19:59:02.890] Success Finished Test Suite on Host tmp-node-e2e-50ee4ec7-ubuntu-gke-1804-d1703-0-v20181113
... skipping 6 lines ...
W0911 19:59:03.076] 2019/09/11 19:59:03 process.go:155: Step 'go run /go/src/k8s.io/kubernetes/test/e2e_node/runner/remote/run_remote.go --cleanup --logtostderr --vmodule=*=4 --ssh-env=gce --results-dir=/workspace/_artifacts --project=k8s-jkns-pr-node-e2e --zone=us-west1-b --ssh-user=prow --ssh-key=/workspace/.ssh/google_compute_engine --ginkgo-flags=--nodes=8 --focus="\[NodeConformance\]" --skip="\[Flaky\]|\[Slow\]|\[Serial\]" --flakeAttempts=2 --test_args=--kubelet-flags="--cgroups-per-qos=true --cgroup-root=/" --test-timeout=1h5m0s --image-config-file=/workspace/test-infra/jobs/e2e_node/image-config.yaml' finished in 17m11.127162072s
W0911 19:59:03.077] 2019/09/11 19:59:03 node.go:42: Noop - Node DumpClusterLogs() - /workspace/_artifacts: 
W0911 19:59:03.080] 2019/09/11 19:59:03 node.go:52: Noop - Node Down()
W0911 19:59:03.081] 2019/09/11 19:59:03 process.go:96: Saved XML output to /workspace/_artifacts/junit_runner.xml.
W0911 19:59:03.081] 2019/09/11 19:59:03 process.go:153: Running: bash -c . hack/lib/version.sh && KUBE_ROOT=. kube::version::get_version_vars && echo "${KUBE_GIT_VERSION-}"
W0911 19:59:03.600] 2019/09/11 19:59:03 process.go:155: Step 'bash -c . hack/lib/version.sh && KUBE_ROOT=. kube::version::get_version_vars && echo "${KUBE_GIT_VERSION-}"' finished in 524.687651ms
W0911 19:59:03.601] 2019/09/11 19:59:03 main.go:319: Something went wrong: encountered 1 errors: [error during go run /go/src/k8s.io/kubernetes/test/e2e_node/runner/remote/run_remote.go --cleanup --logtostderr --vmodule=*=4 --ssh-env=gce --results-dir=/workspace/_artifacts --project=k8s-jkns-pr-node-e2e --zone=us-west1-b --ssh-user=prow --ssh-key=/workspace/.ssh/google_compute_engine --ginkgo-flags=--nodes=8 --focus="\[NodeConformance\]" --skip="\[Flaky\]|\[Slow\]|\[Serial\]" --flakeAttempts=2 --test_args=--kubelet-flags="--cgroups-per-qos=true --cgroup-root=/" --test-timeout=1h5m0s --image-config-file=/workspace/test-infra/jobs/e2e_node/image-config.yaml: exit status 1]
W0911 19:59:03.604] Traceback (most recent call last):
W0911 19:59:03.604]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_e2e.py", line 778, in <module>
W0911 19:59:03.615]     main(parse_args())
W0911 19:59:03.616]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_e2e.py", line 626, in main
W0911 19:59:03.616]     mode.start(runner_args)
W0911 19:59:03.616]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_e2e.py", line 262, in start
W0911 19:59:03.616]     check_env(env, self.command, *args)
W0911 19:59:03.616]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_e2e.py", line 111, in check_env
W0911 19:59:03.616]     subprocess.check_call(cmd, env=env)
W0911 19:59:03.616]   File "/usr/lib/python2.7/subprocess.py", line 186, in check_call
W0911 19:59:03.617]     raise CalledProcessError(retcode, cmd)
W0911 19:59:03.618] subprocess.CalledProcessError: Command '('kubetest', '--dump=/workspace/_artifacts', '--gcp-service-account=/etc/service-account/service-account.json', '--up', '--down', '--test', '--deployment=node', '--provider=gce', '--cluster=bootstrap-e2e', '--gcp-network=bootstrap-e2e', '--gcp-project=k8s-jkns-pr-node-e2e', '--gcp-zone=us-west1-b', '--node-test-args=--kubelet-flags="--cgroups-per-qos=true --cgroup-root=/"', '--node-tests=true', '--test_args=--nodes=8 --focus="\\[NodeConformance\\]" --skip="\\[Flaky\\]|\\[Slow\\]|\\[Serial\\]" --flakeAttempts=2', '--timeout=65m', '--node-args=--image-config-file=/workspace/test-infra/jobs/e2e_node/image-config.yaml')' returned non-zero exit status 1
E0911 19:59:03.633] Command failed
I0911 19:59:03.633] process 490 exited with code 1 after 17.2m
E0911 19:59:03.634] FAIL: pull-kubernetes-node-e2e
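The traceback above shows how the failure propagates: `kubetest` exits non-zero, and `scenarios/kubernetes_e2e.py` surfaces that through `subprocess.check_call`, which raises `CalledProcessError` whenever the child's exit status is not zero. A minimal sketch of that mechanism (using a stand-in command, not the real `kubetest` invocation):

```python
import subprocess

# subprocess.check_call raises CalledProcessError when the child exits
# non-zero; this is the same path that turned kubetest's exit status 1
# into the traceback above. "false" is an illustrative stand-in command.
try:
    subprocess.check_call(["false"])  # always exits with status 1
except subprocess.CalledProcessError as e:
    print("returned non-zero exit status %d" % e.returncode)
```

The `returncode` attribute carries the child's exit status, which is why the log ends with `returned non-zero exit status 1`.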
I0911 19:59:03.634] Call:  gcloud auth activate-service-account --key-file=/etc/service-account/service-account.json
W0911 19:59:04.383] Activated service account credentials for: [pr-kubekins@kubernetes-jenkins-pull.iam.gserviceaccount.com]
I0911 19:59:04.448] process 39316 exited with code 0 after 0.0m
I0911 19:59:04.449] Call:  gcloud config get-value account
I0911 19:59:04.855] process 39328 exited with code 0 after 0.0m
I0911 19:59:04.855] Will upload results to gs://kubernetes-jenkins/pr-logs using pr-kubekins@kubernetes-jenkins-pull.iam.gserviceaccount.com
I0911 19:59:04.855] Upload result and artifacts...
I0911 19:59:04.855] Gubernator results at https://gubernator.k8s.io/build/kubernetes-jenkins/pr-logs/pull/81793/pull-kubernetes-node-e2e/1171871131548782592
I0911 19:59:04.856] Call:  gsutil ls gs://kubernetes-jenkins/pr-logs/pull/81793/pull-kubernetes-node-e2e/1171871131548782592/artifacts
W0911 19:59:06.302] CommandException: One or more URLs matched no objects.
E0911 19:59:06.461] Command failed
I0911 19:59:06.461] process 39340 exited with code 1 after 0.0m
W0911 19:59:06.461] Remote dir gs://kubernetes-jenkins/pr-logs/pull/81793/pull-kubernetes-node-e2e/1171871131548782592/artifacts not exist yet
I0911 19:59:06.461] Call:  gsutil -m -q -o GSUtil:use_magicfile=True cp -r -c -z log,txt,xml /workspace/_artifacts gs://kubernetes-jenkins/pr-logs/pull/81793/pull-kubernetes-node-e2e/1171871131548782592/artifacts
I0911 19:59:10.178] process 39482 exited with code 0 after 0.1m
I0911 19:59:10.178] Call:  git rev-parse HEAD
I0911 19:59:10.182] process 40124 exited with code 0 after 0.0m
... skipping 21 lines ...