Result: FAILURE
Tests: 1 failed / 166 succeeded
Started: 2019-08-24 19:58
Elapsed: 43m47s
Revision:
Builder: gke-prow-ssd-pool-1a225945-j823
pod: 7d0ac42c-c6a9-11e9-b92a-32cb2abf7822
resultstore: https://source.cloud.google.com/results/invocations/2d200790-1fc1-4823-82dd-78f133d5f917/targets/test
infra-commit: aef6d7642
job-version: v1.17.0-alpha.0.631+a76d7fdf641c11
repo: k8s.io/kubernetes
repo-commit: a76d7fdf641c113eecd04ab81dfe574f008fcc80
repos: {'k8s.io/kubernetes': 'master'}
revision: v1.17.0-alpha.0.631+a76d7fdf641c11

Test Failures


Node Tests (42m27s)

error during go run /go/src/k8s.io/kubernetes/test/e2e_node/runner/remote/run_remote.go --cleanup --logtostderr --vmodule=*=4 --ssh-env=gce --results-dir=/workspace/_artifacts --project=cri-containerd-node-e2e --zone=us-west1-b --ssh-user=prow --ssh-key=/workspace/.ssh/google_compute_engine --ginkgo-flags=--nodes=8 --focus="\[NodeConformance\]" --skip="\[Flaky\]|\[Serial\]" --flakeAttempts=2 --test_args=--container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --container-runtime-process-name=/usr/bin/containerd --container-runtime-pid-file= --kubelet-flags="--cgroups-per-qos=true --cgroup-root=/ --runtime-cgroups=/system.slice/containerd.service" --extra-log="{\"name\": \"containerd.log\", \"journalctl\": [\"-u\", \"containerd\"]}" --test-timeout=1h5m0s --image-config-file=/workspace/test-infra/jobs/e2e_node/containerd/image-config.yaml: exit status 1
				from junit_runner.xml
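The failure above is recorded as a `<failure>` element inside `junit_runner.xml`. A minimal sketch for pulling failed test cases out of such a file, assuming the standard JUnit XML layout that kubetest writes (the sample XML below is illustrative, not the job's actual artifact):

```python
import xml.etree.ElementTree as ET


def failed_testcases(junit_xml: str):
    """Return (name, message) pairs for every failed <testcase> in a JUnit XML string."""
    root = ET.fromstring(junit_xml)
    failures = []
    # JUnit files may use a <testsuites> wrapper or a bare <testsuite> root,
    # so iterate over all <testcase> elements regardless of nesting.
    for case in root.iter("testcase"):
        failure = case.find("failure")
        if failure is not None:
            message = (failure.get("message") or failure.text or "").strip()
            failures.append((case.get("name"), message))
    return failures


if __name__ == "__main__":
    sample = """<testsuite tests="2" failures="1">
      <testcase name="Node Tests" time="2547"><failure>exit status 1</failure></testcase>
      <testcase name="DumpClusterLogs" time="0"/>
    </testsuite>"""
    for name, msg in failed_testcases(sample):
        print(f"{name}: {msg}")  # prints "Node Tests: exit status 1"
```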



166 passed tests (list collapsed)

150 skipped tests (list collapsed)

Error lines from build-log.txt

... skipping 203 lines ...
W0824 20:00:46.729] runcmd:
W0824 20:00:46.729]   - mount /tmp /tmp -o remount,exec,suid
W0824 20:00:46.730]   - mkdir -p /home/containerd
W0824 20:00:46.730]   - mount --bind /home/containerd /home/containerd
W0824 20:00:46.730]   - mount -o remount,exec /home/containerd
W0824 20:00:46.730]   - mkdir -p /etc/containerd
W0824 20:00:46.730]   - 'curl --fail --retry 5 --retry-delay 3 --silent --show-error -H "X-Google-Metadata-Request: True" -o /home/containerd/cni.template http://metadata.google.internal/computeMetadata/v1/instance/attributes/cni-template'
W0824 20:00:46.736]   - 'curl --fail --retry 5 --retry-delay 3 --silent --show-error -H "X-Google-Metadata-Request: True" -o /etc/containerd/config.toml http://metadata.google.internal/computeMetadata/v1/instance/attributes/containerd-config'
W0824 20:00:46.736]   - 'curl --fail --retry 5 --retry-delay 3 --silent --show-error -o /home/containerd/cni.tgz https://storage.googleapis.com/kubernetes-release/network-plugins/cni-plugins-amd64-v0.7.5.tgz'
W0824 20:00:46.736]   - tar xzf /home/containerd/cni.tgz -C /home/containerd --overwrite
W0824 20:00:46.737]   - systemctl restart containerd
W0824 20:00:46.737] ]
I0824 20:00:46.896] Initializing e2e tests using image ubuntu.
I0824 20:00:46.897] Initializing e2e tests using image cos-stable.
I0824 20:00:46.899] make: Entering directory '/go/src/k8s.io/kubernetes'
... skipping 74 lines ...
W0824 20:00:47.015] runcmd:
W0824 20:00:47.016]   - mount /tmp /tmp -o remount,exec,suid
W0824 20:00:47.016]   - mkdir -p /home/containerd
W0824 20:00:47.016]   - mount --bind /home/containerd /home/containerd
W0824 20:00:47.016]   - mount -o remount,exec /home/containerd
W0824 20:00:47.016]   - mkdir -p /etc/containerd
W0824 20:00:47.016]   - 'curl --fail --retry 5 --retry-delay 3 --silent --show-error -H "X-Google-Metadata-Request: True" -o /home/containerd/cni.template http://metadata.google.internal/computeMetadata/v1/instance/attributes/cni-template'
W0824 20:00:47.016]   - 'curl --fail --retry 5 --retry-delay 3 --silent --show-error -H "X-Google-Metadata-Request: True" -o /etc/containerd/config.toml http://metadata.google.internal/computeMetadata/v1/instance/attributes/containerd-config'
W0824 20:00:47.017]   - 'curl --fail --retry 5 --retry-delay 3 --silent --show-error -o /home/containerd/cni.tgz https://storage.googleapis.com/kubernetes-release/network-plugins/cni-plugins-amd64-v0.7.5.tgz'
W0824 20:00:47.017]   - tar xzf /home/containerd/cni.tgz -C /home/containerd --overwrite
W0824 20:00:47.017]   - systemctl restart containerd
W0824 20:00:47.017] ]
W0824 20:00:47.017] I0824 20:00:46.896564    3981 remote.go:40] Building archive...
W0824 20:00:47.017] I0824 20:00:46.896830    3981 build.go:42] Building k8s binaries...
W0824 20:00:47.038] I0824 20:00:47.037463    3981 run_remote.go:567] Creating instance {image:ubuntu-gke-1804-d1809-0-v20190822 imageDesc:ubuntu-gke-1804-d1809-0-v20190822 project:ubuntu-os-gke-cloud resources:{Accelerators:[]} metadata:0xc000332150 machine: tests:[]} with service account "609727977121-compute@developer.gserviceaccount.com"
... skipping 28 lines ...
I0824 20:10:36.414] >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
I0824 20:10:36.414] >                              START TEST                                >
I0824 20:10:36.414] >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
I0824 20:10:36.414] Start Test Suite on Host 
I0824 20:10:36.414] 
I0824 20:10:36.414] Failure Finished Test Suite on Host 
I0824 20:10:36.415] unable to create gce instance with running docker daemon for image cos-73-11647-267-0.  googleapi: Error 404: The resource 'projects/cri-containerd-node-e2e/zones/us-west1-b/instances/tmp-node-e2e-dce034cd-cos-73-11647-267-0' was not found, notFound
I0824 20:10:36.415] <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<
I0824 20:10:36.415] <                              FINISH TEST                               <
I0824 20:10:36.415] <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<
I0824 20:10:36.415] 
W0824 20:10:36.516] E0824 20:10:36.413522    3981 run_remote.go:745] Error deleting instance "tmp-node-e2e-dce034cd-cos-73-11647-267-0": googleapi: Error 404: The resource 'projects/cri-containerd-node-e2e/zones/us-west1-b/instances/tmp-node-e2e-dce034cd-cos-73-11647-267-0' was not found, notFound
W0824 20:42:18.936] I0824 20:42:18.935780    3981 remote.go:122] Copying test artifacts from "tmp-node-e2e-dce034cd-ubuntu-gke-1804-d1809-0-v20190822"
W0824 20:42:24.146] I0824 20:42:24.146532    3981 run_remote.go:742] Deleting instance "tmp-node-e2e-dce034cd-ubuntu-gke-1804-d1809-0-v20190822"
I0824 20:42:24.818] 
I0824 20:42:24.818] >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
I0824 20:42:24.818] >                              START TEST                                >
I0824 20:42:24.818] >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
... skipping 277 lines ...
I0824 20:42:24.863] STEP: submitting the pod to kubernetes
I0824 20:42:24.863] STEP: verifying the pod is in kubernetes
I0824 20:42:24.864] STEP: updating the pod
I0824 20:42:24.864] Aug 24 20:09:00.375: INFO: Successfully updated pod "pod-update-activedeadlineseconds-98866da8-c8e6-40d4-a7b3-c6dd24f33005"
I0824 20:42:24.864] Aug 24 20:09:00.375: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-98866da8-c8e6-40d4-a7b3-c6dd24f33005" in namespace "pods-8680" to be "terminated due to deadline exceeded"
I0824 20:42:24.864] Aug 24 20:09:00.377: INFO: Pod "pod-update-activedeadlineseconds-98866da8-c8e6-40d4-a7b3-c6dd24f33005": Phase="Running", Reason="", readiness=true. Elapsed: 1.643084ms
I0824 20:42:24.864] Aug 24 20:09:02.379: INFO: Pod "pod-update-activedeadlineseconds-98866da8-c8e6-40d4-a7b3-c6dd24f33005": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.003370745s
I0824 20:42:24.865] Aug 24 20:09:02.379: INFO: Pod "pod-update-activedeadlineseconds-98866da8-c8e6-40d4-a7b3-c6dd24f33005" satisfied condition "terminated due to deadline exceeded"
I0824 20:42:24.865] [AfterEach] [k8s.io] Pods
I0824 20:42:24.865]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
I0824 20:42:24.865] Aug 24 20:09:02.379: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0824 20:42:24.865] STEP: Destroying namespace "pods-8680" for this suite.
I0824 20:42:24.865] Aug 24 20:09:08.386: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
... skipping 1193 lines ...
I0824 20:42:25.007]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
I0824 20:42:25.007] STEP: Creating a kubernetes client
I0824 20:42:25.007] STEP: Building a namespace api object, basename init-container
I0824 20:42:25.008] Aug 24 20:09:50.896: INFO: Skipping waiting for service account
I0824 20:42:25.008] [BeforeEach] [k8s.io] InitContainer [NodeConformance]
I0824 20:42:25.008]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44
I0824 20:42:25.008] [It] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
I0824 20:42:25.008]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
I0824 20:42:25.008] STEP: creating the pod
I0824 20:42:25.008] Aug 24 20:09:50.897: INFO: PodSpec: initContainers in spec.initContainers
I0824 20:42:25.014] Aug 24 20:10:41.791: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-88d8d148-3c58-4f57-a734-66bfbccd0dc9", GenerateName:"", Namespace:"init-container-451", SelfLink:"/api/v1/namespaces/init-container-451/pods/pod-init-88d8d148-3c58-4f57-a734-66bfbccd0dc9", UID:"d8d8022b-6b1a-4300-bf01-469f1bf2ac1a", ResourceVersion:"913", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63702274190, loc:(*time.Location)(0xba6bca0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"897007351"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume(nil), InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount(nil), VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount(nil), VolumeDevices:[]v1.VolumeDevice(nil), 
LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount(nil), VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"Never", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc000b709e0), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"Default", NodeSelector:map[string]string(nil), ServiceAccountName:"", DeprecatedServiceAccount:"", AutomountServiceAccountToken:(*bool)(nil), NodeName:"tmp-node-e2e-dce034cd-ubuntu-gke-1804-d1809-0-v20190822", HostNetwork:false, HostPID:false, 
HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc001105b00), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc000b70a50)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc000b70a70)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc000b70a80), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc000b70a84), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63702274190, loc:(*time.Location)(0xba6bca0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63702274190, loc:(*time.Location)(0xba6bca0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63702274190, loc:(*time.Location)(0xba6bca0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, 
v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63702274190, loc:(*time.Location)(0xba6bca0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"10.138.0.134", PodIP:"10.100.0.29", PodIPs:[]v1.PodIP{v1.PodIP{IP:"10.100.0.29"}}, StartTime:(*v1.Time)(0xc0008e9220), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(0xc0008e9240), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc000346e70)}, Ready:false, RestartCount:3, Image:"docker.io/library/busybox:1.29", ImageID:"docker.io/library/busybox@sha256:e004c2cc521c95383aebb1fb5893719aa7a8eae2e7a71f316a4410784edb00a9", ContainerID:"containerd://46241502ffad9c0e28eb95fe63dbbcb0216d5a91796ad6af086635868dc2eee3"}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc0008e9260), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:""}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc0008e9280), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), 
Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:""}}, QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}}
I0824 20:42:25.014] [AfterEach] [k8s.io] InitContainer [NodeConformance]
I0824 20:42:25.014]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
I0824 20:42:25.015] Aug 24 20:10:41.791: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0824 20:42:25.015] STEP: Destroying namespace "init-container-451" for this suite.
I0824 20:42:25.015] Aug 24 20:11:03.808: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
I0824 20:42:25.015] Aug 24 20:11:03.927: INFO: namespace init-container-451 deletion completed in 22.128979998s
I0824 20:42:25.015] 
I0824 20:42:25.015] 
I0824 20:42:25.015] • [SLOW TEST:73.034 seconds]
I0824 20:42:25.016] [k8s.io] InitContainer [NodeConformance]
I0824 20:42:25.016] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:693
I0824 20:42:25.016]   should not start app containers if init containers fail on a RestartAlways pod [Conformance]
I0824 20:42:25.016]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
I0824 20:42:25.016] ------------------------------
I0824 20:42:25.017] [BeforeEach] [k8s.io] GKE system requirements [NodeConformance][Feature:GKEEnv][NodeFeature:GKEEnv]
I0824 20:42:25.017]   _output/local/go/src/k8s.io/kubernetes/test/e2e_node/gke_environment_test.go:315
I0824 20:42:25.017] Aug 24 20:11:03.932: INFO: Skipped because system spec name "" is not in [gke]
I0824 20:42:25.017] 
... skipping 873 lines ...
I0824 20:42:25.131] [BeforeEach] [k8s.io] Security Context
I0824 20:42:25.131]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:40
I0824 20:42:25.131] [It] should run the container with readonly rootfs when readOnlyRootFilesystem=true [LinuxOnly] [NodeConformance]
I0824 20:42:25.132]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:211
I0824 20:42:25.132] Aug 24 20:12:17.669: INFO: Waiting up to 5m0s for pod "busybox-readonly-true-30fc75ee-ca67-424a-aceb-5898ca3bd390" in namespace "security-context-test-6045" to be "success or failure"
I0824 20:42:25.132] Aug 24 20:12:17.684: INFO: Pod "busybox-readonly-true-30fc75ee-ca67-424a-aceb-5898ca3bd390": Phase="Pending", Reason="", readiness=false. Elapsed: 15.365829ms
I0824 20:42:25.132] Aug 24 20:12:19.686: INFO: Pod "busybox-readonly-true-30fc75ee-ca67-424a-aceb-5898ca3bd390": Phase="Failed", Reason="", readiness=false. Elapsed: 2.017144531s
I0824 20:42:25.132] Aug 24 20:12:19.686: INFO: Pod "busybox-readonly-true-30fc75ee-ca67-424a-aceb-5898ca3bd390" satisfied condition "success or failure"
I0824 20:42:25.132] [AfterEach] [k8s.io] Security Context
I0824 20:42:25.132]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
I0824 20:42:25.133] Aug 24 20:12:19.686: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0824 20:42:25.133] STEP: Destroying namespace "security-context-test-6045" for this suite.
I0824 20:42:25.133] Aug 24 20:12:25.702: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
... skipping 427 lines ...
I0824 20:42:25.183] STEP: Creating a kubernetes client
I0824 20:42:25.183] STEP: Building a namespace api object, basename container-runtime
I0824 20:42:25.184] Aug 24 20:13:01.712: INFO: Skipping waiting for service account
I0824 20:42:25.184] [It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
I0824 20:42:25.184]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
I0824 20:42:25.184] STEP: create the container
I0824 20:42:25.184] STEP: wait for the container to reach Failed
I0824 20:42:25.184] STEP: get the container status
I0824 20:42:25.184] STEP: the container should be terminated
I0824 20:42:25.184] STEP: the termination message should be set
I0824 20:42:25.184] Aug 24 20:13:02.733: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
I0824 20:42:25.185] STEP: delete the container
I0824 20:42:25.185] [AfterEach] [k8s.io] Container Runtime
... skipping 2107 lines ...
I0824 20:42:25.460]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
I0824 20:42:25.460] STEP: Creating a kubernetes client
I0824 20:42:25.460] STEP: Building a namespace api object, basename init-container
I0824 20:42:25.460] Aug 24 20:17:45.598: INFO: Skipping waiting for service account
I0824 20:42:25.460] [BeforeEach] [k8s.io] InitContainer [NodeConformance]
I0824 20:42:25.460]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44
I0824 20:42:25.461] [It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
I0824 20:42:25.461]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
I0824 20:42:25.461] STEP: creating the pod
I0824 20:42:25.461] Aug 24 20:17:45.598: INFO: PodSpec: initContainers in spec.initContainers
I0824 20:42:25.461] [AfterEach] [k8s.io] InitContainer [NodeConformance]
I0824 20:42:25.461]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
I0824 20:42:25.462] Aug 24 20:17:47.350: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 2 lines ...
I0824 20:42:25.462] Aug 24 20:17:53.404: INFO: namespace init-container-6081 deletion completed in 6.049351535s
I0824 20:42:25.462] 
I0824 20:42:25.462] 
I0824 20:42:25.463] • [SLOW TEST:7.810 seconds]
I0824 20:42:25.463] [k8s.io] InitContainer [NodeConformance]
I0824 20:42:25.463] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:693
I0824 20:42:25.463]   should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
I0824 20:42:25.463]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
I0824 20:42:25.463] ------------------------------
I0824 20:42:25.463] S
I0824 20:42:25.463] ------------------------------
I0824 20:42:25.464] [BeforeEach] [sig-storage] HostPath
I0824 20:42:25.464]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
... skipping 283 lines ...
I0824 20:42:25.514] Aug 24 20:16:04.025: INFO: Skipping waiting for service account
I0824 20:42:25.515] [It] should be able to pull from private registry with credential provider [NodeConformance]
I0824 20:42:25.515]   _output/local/go/src/k8s.io/kubernetes/test/e2e_node/runtime_conformance_test.go:69
I0824 20:42:25.515] STEP: create the container
I0824 20:42:25.515] STEP: check the container status
I0824 20:42:25.515] STEP: delete the container
I0824 20:42:25.515] Aug 24 20:21:04.769: INFO: No.1 attempt failed: expected container state: Running, got: "Waiting", retrying...
I0824 20:42:25.515] STEP: create the container
I0824 20:42:25.515] STEP: check the container status
I0824 20:42:25.516] STEP: delete the container
I0824 20:42:25.516] [AfterEach] [k8s.io] Container Runtime Conformance Test
I0824 20:42:25.516]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
I0824 20:42:25.516] Aug 24 20:21:07.812: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 52 lines ...
I0824 20:42:25.525] I0824 20:42:17.762186    2859 services.go:156] Get log file "containerd.log" with journalctl command [-u containerd].
I0824 20:42:25.526] I0824 20:42:18.079100    2859 services.go:156] Get log file "kubelet.log" with journalctl command [-u kubelet-20190824T200643.service].
I0824 20:42:25.526] I0824 20:42:18.865536    2859 e2e_node_suite_test.go:201] Tests Finished
I0824 20:42:25.526] 
I0824 20:42:25.526] 
I0824 20:42:25.526] Ran 159 of 309 Specs in 2120.073 seconds
I0824 20:42:25.526] SUCCESS! -- 159 Passed | 0 Failed | 0 Flaked | 0 Pending | 150 Skipped
I0824 20:42:25.526] 
I0824 20:42:25.527] 
I0824 20:42:25.527] Ginkgo ran 1 suite in 35m22.230629443s
I0824 20:42:25.527] Test Suite Passed
I0824 20:42:25.527] 
I0824 20:42:25.527] Success Finished Test Suite on Host tmp-node-e2e-dce034cd-ubuntu-gke-1804-d1809-0-v20190822
... skipping 6 lines ...
W0824 20:42:25.634] 2019/08/24 20:42:25 process.go:155: Step 'go run /go/src/k8s.io/kubernetes/test/e2e_node/runner/remote/run_remote.go --cleanup --logtostderr --vmodule=*=4 --ssh-env=gce --results-dir=/workspace/_artifacts --project=cri-containerd-node-e2e --zone=us-west1-b --ssh-user=prow --ssh-key=/workspace/.ssh/google_compute_engine --ginkgo-flags=--nodes=8 --focus="\[NodeConformance\]" --skip="\[Flaky\]|\[Serial\]" --flakeAttempts=2 --test_args=--container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --container-runtime-process-name=/usr/bin/containerd --container-runtime-pid-file= --kubelet-flags="--cgroups-per-qos=true --cgroup-root=/ --runtime-cgroups=/system.slice/containerd.service" --extra-log="{\"name\": \"containerd.log\", \"journalctl\": [\"-u\", \"containerd\"]}" --test-timeout=1h5m0s --image-config-file=/workspace/test-infra/jobs/e2e_node/containerd/image-config.yaml' finished in 42m27.276973285s
W0824 20:42:25.634] 2019/08/24 20:42:25 node.go:42: Noop - Node DumpClusterLogs() - /workspace/_artifacts: 
W0824 20:42:25.635] 2019/08/24 20:42:25 node.go:52: Noop - Node Down()
W0824 20:42:25.640] 2019/08/24 20:42:25 process.go:96: Saved XML output to /workspace/_artifacts/junit_runner.xml.
W0824 20:42:25.640] 2019/08/24 20:42:25 process.go:153: Running: bash -c . hack/lib/version.sh && KUBE_ROOT=. kube::version::get_version_vars && echo "${KUBE_GIT_VERSION-}"
W0824 20:42:25.977] 2019/08/24 20:42:25 process.go:155: Step 'bash -c . hack/lib/version.sh && KUBE_ROOT=. kube::version::get_version_vars && echo "${KUBE_GIT_VERSION-}"' finished in 337.169075ms
W0824 20:42:25.981] 2019/08/24 20:42:25 main.go:319: Something went wrong: encountered 1 errors: [error during go run /go/src/k8s.io/kubernetes/test/e2e_node/runner/remote/run_remote.go --cleanup --logtostderr --vmodule=*=4 --ssh-env=gce --results-dir=/workspace/_artifacts --project=cri-containerd-node-e2e --zone=us-west1-b --ssh-user=prow --ssh-key=/workspace/.ssh/google_compute_engine --ginkgo-flags=--nodes=8 --focus="\[NodeConformance\]" --skip="\[Flaky\]|\[Serial\]" --flakeAttempts=2 --test_args=--container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --container-runtime-process-name=/usr/bin/containerd --container-runtime-pid-file= --kubelet-flags="--cgroups-per-qos=true --cgroup-root=/ --runtime-cgroups=/system.slice/containerd.service" --extra-log="{\"name\": \"containerd.log\", \"journalctl\": [\"-u\", \"containerd\"]}" --test-timeout=1h5m0s --image-config-file=/workspace/test-infra/jobs/e2e_node/containerd/image-config.yaml: exit status 1]
W0824 20:42:25.984] Traceback (most recent call last):
W0824 20:42:25.984]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_e2e.py", line 778, in <module>
W0824 20:42:25.984]     main(parse_args())
W0824 20:42:25.984]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_e2e.py", line 626, in main
W0824 20:42:25.985]     mode.start(runner_args)
W0824 20:42:25.985]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_e2e.py", line 262, in start
W0824 20:42:25.985]     check_env(env, self.command, *args)
W0824 20:42:25.986]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_e2e.py", line 111, in check_env
W0824 20:42:25.986]     subprocess.check_call(cmd, env=env)
W0824 20:42:25.986]   File "/usr/lib/python2.7/subprocess.py", line 186, in check_call
W0824 20:42:25.986]     raise CalledProcessError(retcode, cmd)
W0824 20:42:25.987] subprocess.CalledProcessError: Command '('kubetest', '--dump=/workspace/_artifacts', '--gcp-service-account=/etc/service-account/service-account.json', '--up', '--down', '--test', '--deployment=node', '--provider=gce', '--cluster=bootstrap-e2e', '--gcp-network=bootstrap-e2e', '--node-args=--image-config-file=/workspace/test-infra/jobs/e2e_node/containerd/image-config.yaml', '--gcp-project=cri-containerd-node-e2e', '--gcp-zone=us-west1-b', '--node-test-args=--container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --container-runtime-process-name=/usr/bin/containerd --container-runtime-pid-file= --kubelet-flags="--cgroups-per-qos=true --cgroup-root=/ --runtime-cgroups=/system.slice/containerd.service" --extra-log="{\\"name\\": \\"containerd.log\\", \\"journalctl\\": [\\"-u\\", \\"containerd\\"]}"', '--node-tests=true', '--test_args=--nodes=8 --focus="\\[NodeConformance\\]" --skip="\\[Flaky\\]|\\[Serial\\]" --flakeAttempts=2', '--timeout=65m')' returned non-zero exit status 1
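The Python traceback above is `kubernetes_e2e.py` surfacing kubetest's non-zero exit through `subprocess.check_call`, which raises `CalledProcessError` whenever the child process returns a non-zero status. A minimal sketch of that propagation (the child command here is a stand-in that exits 1 like the failed kubetest step, not the real invocation):

```python
import subprocess
import sys


def check_env(cmd):
    """Mirror of the scenario script's pattern: run cmd, let a non-zero exit raise."""
    # check_call raises CalledProcessError carrying the child's return code.
    subprocess.check_call(cmd)


if __name__ == "__main__":
    try:
        # Stand-in for the kubetest command; exits 1 like the failed job step.
        check_env([sys.executable, "-c", "raise SystemExit(1)"])
    except subprocess.CalledProcessError as e:
        print(f"Command failed with exit status {e.returncode}")
```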
E0824 20:42:25.995] Command failed
I0824 20:42:25.996] process 309 exited with code 1 after 42.5m
E0824 20:42:25.996] FAIL: ci-cos-containerd-node-e2e
I0824 20:42:25.996] Call:  gcloud auth activate-service-account --key-file=/etc/service-account/service-account.json
W0824 20:42:26.541] Activated service account credentials for: [pr-kubekins@kubernetes-jenkins-pull.iam.gserviceaccount.com]
I0824 20:42:26.599] process 37998 exited with code 0 after 0.0m
I0824 20:42:26.599] Call:  gcloud config get-value account
I0824 20:42:26.960] process 38010 exited with code 0 after 0.0m
I0824 20:42:26.960] Will upload results to gs://kubernetes-jenkins/logs using pr-kubekins@kubernetes-jenkins-pull.iam.gserviceaccount.com
I0824 20:42:26.961] Upload result and artifacts...
I0824 20:42:26.961] Gubernator results at https://gubernator.k8s.io/build/kubernetes-jenkins/logs/ci-cos-containerd-node-e2e/1165352730668044289
I0824 20:42:26.961] Call:  gsutil ls gs://kubernetes-jenkins/logs/ci-cos-containerd-node-e2e/1165352730668044289/artifacts
W0824 20:42:28.120] CommandException: One or more URLs matched no objects.
E0824 20:42:28.259] Command failed
I0824 20:42:28.260] process 38022 exited with code 1 after 0.0m
W0824 20:42:28.260] Remote dir gs://kubernetes-jenkins/logs/ci-cos-containerd-node-e2e/1165352730668044289/artifacts not exist yet
I0824 20:42:28.260] Call:  gsutil -m -q -o GSUtil:use_magicfile=True cp -r -c -z log,txt,xml /workspace/_artifacts gs://kubernetes-jenkins/logs/ci-cos-containerd-node-e2e/1165352730668044289/artifacts
I0824 20:42:30.981] process 38164 exited with code 0 after 0.0m
I0824 20:42:30.982] Call:  git rev-parse HEAD
I0824 20:42:30.986] process 38733 exited with code 0 after 0.0m
... skipping 13 lines ...