Result: FAILURE

No Test Failures!


Error lines from build-log.txt

... skipping 293 lines ...
W0521 17:22:08.919] I0521 17:22:08.918946    4243 node_e2e.go:147] Starting tests on "tmp-node-e2e-eceb20b6-coreos-beta-1883-1-0-v20180911"
W0521 17:22:09.687] I0521 17:22:09.686040    4243 node_e2e.go:91] GCI/COS node and GCI/COS mounter both detected, modifying --experimental-mounter-path accordingly
W0521 17:22:09.687] I0521 17:22:09.686078    4243 node_e2e.go:147] Starting tests on "tmp-node-e2e-eceb20b6-cos-stable-60-9592-84-0"
W0521 17:22:09.779] I0521 17:22:09.778837    4243 node_e2e.go:91] GCI/COS node and GCI/COS mounter both detected, modifying --experimental-mounter-path accordingly
W0521 17:22:09.779] I0521 17:22:09.778871    4243 node_e2e.go:147] Starting tests on "tmp-node-e2e-eceb20b6-cos-stable-63-10032-71-0"
W0521 17:22:10.192] I0521 17:22:10.192197    4243 node_e2e.go:147] Starting tests on "tmp-node-e2e-eceb20b6-ubuntu-gke-1804-d1703-0-v20181113"
W0521 18:33:09.094] I0521 18:33:09.093306    4243 remote.go:197] Test failed unexpectedly. Attempting to retrieve system logs (only works for nodes with journald)
W0521 18:33:10.351] I0521 18:33:10.350489    4243 remote.go:202] Got the system logs from journald; copying it back...
W0521 18:33:11.768] I0521 18:33:11.768178    4243 remote.go:122] Copying test artifacts from "tmp-node-e2e-eceb20b6-cos-stable-60-9592-84-0"
W0521 18:33:20.046] I0521 18:33:20.046000    4243 remote.go:197] Test failed unexpectedly. Attempting to retrieve system logs (only works for nodes with journald)
W0521 18:33:20.266] I0521 18:33:20.266661    4243 run_remote.go:718] Deleting instance "tmp-node-e2e-eceb20b6-cos-stable-60-9592-84-0"
I0521 18:33:20.640] 
I0521 18:33:20.640] >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
I0521 18:33:20.641] >                              START TEST                                >
I0521 18:33:20.641] >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
I0521 18:33:20.641] Start Test Suite on Host tmp-node-e2e-eceb20b6-cos-stable-60-9592-84-0
... skipping 77 lines ...
I0521 18:33:20.651]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:696
I0521 18:33:20.652] STEP: Creating Pod
I0521 18:33:20.652] STEP: Waiting for the pod running
I0521 18:33:20.652] STEP: Getting the pod
I0521 18:33:20.652] STEP: Reading file content from the nginx-container
I0521 18:33:20.652] May 21 17:23:21.956: INFO: Running ' --server=http://127.0.0.1:8080 exec pod-sharedvolume-fbe452bb-c0e1-41ac-a774-fbc4b6b94b4f -c busybox-main-container --namespace=emptydir-8077 -- cat /usr/share/volumeshare/shareddata.txt'
I0521 18:33:20.652] May 21 17:23:21.957: INFO: Unexpected error occurred: error starting &{ [ --server=http://127.0.0.1:8080 exec pod-sharedvolume-fbe452bb-c0e1-41ac-a774-fbc4b6b94b4f -c busybox-main-container --namespace=emptydir-8077 -- cat /usr/share/volumeshare/shareddata.txt] []  <nil>   [] <nil> <nil> <nil> <nil> <nil> false [0xc001003698 0xc0010036b0 0xc0010036c8] [0xc001003698 0xc0010036b0 0xc0010036c8] [0xc0010036a8 0xc0010036c0] [0xef22d0 0xef22d0] <nil> <nil>}:
I0521 18:33:20.652] Command stdout:
I0521 18:33:20.652] 
I0521 18:33:20.653] stderr:
I0521 18:33:20.653] 
I0521 18:33:20.653] error:
I0521 18:33:20.653] fork/exec : no such file or directory
I0521 18:33:20.653] [AfterEach] [sig-storage] EmptyDir volumes
I0521 18:33:20.653]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
I0521 18:33:20.653] STEP: Collecting events from namespace "emptydir-8077".
I0521 18:33:20.653] STEP: Found 0 events.
I0521 18:33:20.653] May 21 17:23:21.965: INFO: POD                                                    NODE                                           PHASE    GRACE  CONDITIONS
... skipping 20 lines ...
I0521 18:33:20.660] • Failure [8.178 seconds]
I0521 18:33:20.660] [sig-storage] EmptyDir volumes
I0521 18:33:20.660] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
I0521 18:33:20.660]   pod should support shared volumes between containers [Conformance] [It]
I0521 18:33:20.660]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:696
I0521 18:33:20.660] 
I0521 18:33:20.660]   Unexpected error:
I0521 18:33:20.660]       <*errors.errorString | 0xc0007ae660>: {
I0521 18:33:20.660]           s: "error starting &{ [ --server=http://127.0.0.1:8080 exec pod-sharedvolume-fbe452bb-c0e1-41ac-a774-fbc4b6b94b4f -c busybox-main-container --namespace=emptydir-8077 -- cat /usr/share/volumeshare/shareddata.txt] []  <nil>   [] <nil> <nil> <nil> <nil> <nil> false [0xc001003698 0xc0010036b0 0xc0010036c8] [0xc001003698 0xc0010036b0 0xc0010036c8] [0xc0010036a8 0xc0010036c0] [0xef22d0 0xef22d0] <nil> <nil>}:\nCommand stdout:\n\nstderr:\n\nerror:\nfork/exec : no such file or directory",
I0521 18:33:20.661]       }
I0521 18:33:20.661]       error starting &{ [ --server=http://127.0.0.1:8080 exec pod-sharedvolume-fbe452bb-c0e1-41ac-a774-fbc4b6b94b4f -c busybox-main-container --namespace=emptydir-8077 -- cat /usr/share/volumeshare/shareddata.txt] []  <nil>   [] <nil> <nil> <nil> <nil> <nil> false [0xc001003698 0xc0010036b0 0xc0010036c8] [0xc001003698 0xc0010036b0 0xc0010036c8] [0xc0010036a8 0xc0010036c0] [0xef22d0 0xef22d0] <nil> <nil>}:
I0521 18:33:20.661]       Command stdout:
I0521 18:33:20.661]       
I0521 18:33:20.661]       stderr:
I0521 18:33:20.661]       
I0521 18:33:20.661]       error:
I0521 18:33:20.661]       fork/exec : no such file or directory
I0521 18:33:20.661]   occurred
I0521 18:33:20.661] 
I0521 18:33:20.662]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2218
I0521 18:33:20.662] ------------------------------
I0521 18:33:20.662] SSSSSSSSSS
... skipping 1280 lines ...
I0521 18:33:20.895] I0521 17:27:26.269855    1304 util.go:44] Running readiness check for service "kubelet"
I0521 18:33:20.895] I0521 17:27:27.271560    1304 server.go:182] Initial health check passed for service "kubelet"
I0521 18:33:20.895] I0521 17:27:30.428044    1304 util.go:221] new configuration has taken effect
I0521 18:33:20.895] [It] NAS parallel benchmark (NPB) suite - Embarrassingly Parallel (EP) workload
I0521 18:33:20.895]   _output/local/go/src/k8s.io/kubernetes/test/e2e_node/node_perf_test.go:115
I0521 18:33:20.895] STEP: running the workload and waiting for success
I0521 18:33:20.896] May 21 17:27:32.442: INFO: Unexpected error occurred: pod ran to completion
I0521 18:33:20.896] May 21 17:27:32.450: INFO: Waiting for pod npb-ep-pod to disappear
I0521 18:33:20.896] May 21 17:27:32.453: INFO: Pod npb-ep-pod no longer exists
I0521 18:33:20.896] STEP: running the post test exec from the workload
I0521 18:33:20.896] I0521 17:27:37.285154    1304 server.go:222] Restarting server "kubelet" with restart command
I0521 18:33:20.896] I0521 17:27:37.297739    1304 server.go:171] Running health check for service "kubelet"
I0521 18:33:20.896] I0521 17:27:37.300029    1304 util.go:44] Running readiness check for service "kubelet"
... skipping 6 lines ...
I0521 18:33:20.897] STEP: Found 1 events.
I0521 18:33:20.897] May 21 17:27:42.490: INFO: At 2019-05-21 17:27:30 +0000 UTC - event for npb-ep-pod: {kubelet tmp-node-e2e-eceb20b6-cos-stable-60-9592-84-0} OutOfcpu: Node didn't have enough resource: cpu, requested: 15000, used: 0, capacity: 800
I0521 18:33:20.898] May 21 17:27:42.492: INFO: POD  NODE  PHASE  GRACE  CONDITIONS
I0521 18:33:20.898] May 21 17:27:42.492: INFO: 
I0521 18:33:20.898] May 21 17:27:42.495: INFO: 
I0521 18:33:20.898] Logging node info for node tmp-node-e2e-eceb20b6-cos-stable-60-9592-84-0
I0521 18:33:20.902] May 21 17:27:42.496: INFO: Node Info: &Node{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:tmp-node-e2e-eceb20b6-cos-stable-60-9592-84-0,GenerateName:,Namespace:,SelfLink:/api/v1/nodes/tmp-node-e2e-eceb20b6-cos-stable-60-9592-84-0,UID:ec6c1eb6-a934-4c93-9e39-20a2153c8ba4,ResourceVersion:1092,Generation:0,CreationTimestamp:2019-05-21 17:23:18 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{beta.kubernetes.io/arch: amd64,beta.kubernetes.io/os: linux,kubernetes.io/arch: amd64,kubernetes.io/hostname: tmp-node-e2e-eceb20b6-cos-stable-60-9592-84-0,kubernetes.io/os: linux,},Annotations:map[string]string{volumes.kubernetes.io/controller-managed-attach-detach: true,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:NodeSpec{PodCIDR:,DoNotUse_ExternalID:,ProviderID:,Unschedulable:false,Taints:[],ConfigSource:&NodeConfigSource{ConfigMap:&ConfigMapNodeConfigSource{Namespace:kube-system,Name:testcfg-rtdpf,UID:,ResourceVersion:,KubeletConfigKey:kubelet,},},},Status:NodeStatus{Capacity:ResourceList{cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{16701562880 0} {<nil>}  BinarySI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3885535232 0} {<nil>} 3794468Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{15031406568 0} {<nil>} 15031406568 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3623391232 0} {<nil>} 3538468Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[{MemoryPressure False 2019-05-21 17:27:37 +0000 UTC 2019-05-21 17:23:15 +0000 UTC KubeletHasSufficientMemory kubelet has sufficient memory available} {DiskPressure False 2019-05-21 17:27:37 +0000 UTC 2019-05-21 17:23:15 +0000 UTC KubeletHasNoDiskPressure kubelet has no disk pressure} {PIDPressure False 2019-05-21 17:27:37 +0000 UTC 2019-05-21 17:23:15 +0000 UTC 
KubeletHasSufficientPID kubelet has sufficient PID available} {Ready True 2019-05-21 17:27:37 +0000 UTC 2019-05-21 17:23:15 +0000 UTC KubeletReady kubelet is posting ready status. AppArmor enabled}],Addresses:[{InternalIP 10.138.0.44} {Hostname tmp-node-e2e-eceb20b6-cos-stable-60-9592-84-0}],DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:14690504d32589f517351ff7955b0543,SystemUUID:14690504-D325-89F5-1735-1FF7955B0543,BootID:c51344dc-21a5-4a94-97df-ae0963175ad0,KernelVersion:4.4.64+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:docker://1.13.1,KubeletVersion:v1.16.0-alpha.0.288+13c11de135833a,KubeProxyVersion:v1.16.0-alpha.0.288+13c11de135833a,OperatingSystem:linux,Architecture:amd64,},Images:[{[gcr.io/kubernetes-e2e-test-images/node-perf/tf-wide-deep-amd64@sha256:80d4564d5ab49ecfea3b20f75cc676d8dfd8b2aca364ed4c1a8a55fbcaaed7f6 gcr.io/kubernetes-e2e-test-images/node-perf/tf-wide-deep-amd64:1.0] 634170972} {[gcr.io/kubernetes-e2e-test-images/volume/gluster@sha256:e2d3308b2d27499d59f120ff46dfc6c4cb307a3f207f02894ecab902583761c9 gcr.io/kubernetes-e2e-test-images/volume/gluster:1.0] 332011484} {[gcr.io/kubernetes-e2e-test-images/volume/nfs@sha256:c2ad734346f608a5f7d69cfded93c4e8094069320657bd372d12ba21dea3ea71 gcr.io/kubernetes-e2e-test-images/volume/nfs:1.0] 225358913} {[k8s.gcr.io/node-problem-detector@sha256:6e9b4a4eaa47f120be61f60573a545844de63401661812e2cfb7ae81a28efd19 k8s.gcr.io/node-problem-detector:v0.6.2] 98707739} {[gcr.io/kubernetes-e2e-test-images/node-perf/npb-is@sha256:9d08dd99565b25af37c990cd4474a4284b27e7ceb3f98328bb481edefedf8aa5 gcr.io/kubernetes-e2e-test-images/node-perf/npb-is:1.0] 96288249} {[gcr.io/kubernetes-e2e-test-images/node-perf/npb-ep@sha256:564314549347619cfcdbe6c7d042a29e133a00e922b37682890fff17ac1a7804 gcr.io/kubernetes-e2e-test-images/node-perf/npb-ep:1.0] 96286449} 
{[google/cadvisor@sha256:815386ebbe9a3490f38785ab11bda34ec8dacf4634af77b8912832d4f85dca04 google/cadvisor:latest] 69583040} {[k8s.gcr.io/nvidia-gpu-device-plugin@sha256:08509a36233c5096bb273a492251a9a5ca28558ab36d74007ca2a9d3f0b61e1d] 18976858} {[nginx@sha256:a3a0c4126587884f8d3090efca87f5af075d7e7ac8308cffc09a5a082d5f4760 nginx:1.14-alpine] 16032814} {[gcr.io/kubernetes-e2e-test-images/audit-proxy@sha256:9cf10c6bb871a9a2a45eb1634ecd36cf0e45ec9bd8ae05bf10bef981ac07cc1b gcr.io/kubernetes-e2e-test-images/audit-proxy:1.0] 13222979} {[gcr.io/kubernetes-e2e-test-images/net@sha256:973f47a88f50ccd7800f6ec300e664461e7c011c2da3a33edf32a73dd9ff9c01 gcr.io/kubernetes-e2e-test-images/net:1.0] 11393460} {[gcr.io/kubernetes-e2e-test-images/ipc-utils@sha256:bb127be3a1ecac0516f672a5e223d94fe6021021534ecb7a02a607a63154c3d8 gcr.io/kubernetes-e2e-test-images/ipc-utils:1.0] 10039224} {[gcr.io/kubernetes-e2e-test-images/hostexec@sha256:90dfe59da029f9e536385037bc64e86cd3d6e55bae613ddbe69e554d79b0639d gcr.io/kubernetes-e2e-test-images/hostexec:1.1] 8490662} {[gcr.io/kubernetes-e2e-test-images/nonewprivs@sha256:10066e9039219449fe3c81f38fe01928f87914150768ab81b62a468e51fa7411 gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0] 6757579} {[gcr.io/kubernetes-e2e-test-images/netexec@sha256:203f0e11dde4baf4b08e27de094890eb3447d807c8b3e990b764b799d3a9e8b7 gcr.io/kubernetes-e2e-test-images/netexec:1.1] 6705349} {[gcr.io/kubernetes-e2e-test-images/serve-hostname@sha256:bab70473a6d8ef65a22625dc9a1b0f0452e811530fdbe77e4408523460177ff1 gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1] 5851985} {[gcr.io/kubernetes-e2e-test-images/liveness@sha256:71c3fc838e0637df570497febafa0ee73bf47176dfd43612de5c55a71230674e gcr.io/kubernetes-e2e-test-images/liveness:1.1] 5829944} {[k8s.gcr.io/stress@sha256:f00aa1ddc963a3164aef741aab0fc05074ea96de6cd7e0d10077cf98dd72d594 k8s.gcr.io/stress:v1] 5494760} 
{[gcr.io/kubernetes-e2e-test-images/test-webserver@sha256:7f93d6e32798ff28bc6289254d0c2867fe2c849c8e46edc50f8624734309812e gcr.io/kubernetes-e2e-test-images/test-webserver:1.0] 4732240} {[gcr.io/kubernetes-e2e-test-images/entrypoint-tester@sha256:ba4681b5299884a3adca70fbde40638373b437a881055ffcd0935b5f43eb15c9 gcr.io/kubernetes-e2e-test-images/entrypoint-tester:1.0] 2729534} {[gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 gcr.io/kubernetes-e2e-test-images/mounttest:1.0] 1563521} {[gcr.io/kubernetes-e2e-test-images/mounttest-user@sha256:17319ca525ee003681fccf7e8c6b1b910ff4f49b653d939ac7f9b6e7c463933d gcr.io/kubernetes-e2e-test-images/mounttest-user:1.0] 1450451} {[busybox@sha256:e004c2cc521c95383aebb1fb5893719aa7a8eae2e7a71f316a4410784edb00a9 busybox:1.29] 1154361} {[k8s.gcr.io/busybox@sha256:4bdd623e848417d96127e16037743f0cd8b528c026e9175e22a84f639eca58ff] 1113554} {[k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea k8s.gcr.io/pause:3.1] 742472}],VolumesInUse:[],VolumesAttached:[],Config:&NodeConfigStatus{Assigned:&NodeConfigSource{ConfigMap:&ConfigMapNodeConfigSource{Namespace:kube-system,Name:testcfg-rtdpf,UID:8d7838d9-653e-487e-9763-f87615167575,ResourceVersion:1081,KubeletConfigKey:kubelet,},},Active:&NodeConfigSource{ConfigMap:&ConfigMapNodeConfigSource{Namespace:kube-system,Name:testcfg-rtdpf,UID:8d7838d9-653e-487e-9763-f87615167575,ResourceVersion:1081,KubeletConfigKey:kubelet,},},LastKnownGood:nil,Error:,},},}
I0521 18:33:20.902] May 21 17:27:42.497: INFO: 
I0521 18:33:20.902] Logging kubelet events for node tmp-node-e2e-eceb20b6-cos-stable-60-9592-84-0
I0521 18:33:20.902] May 21 17:27:42.498: INFO: 
I0521 18:33:20.902] Logging pods the kubelet thinks are on node tmp-node-e2e-eceb20b6-cos-stable-60-9592-84-0
I0521 18:33:20.902] W0521 17:27:42.505557    1304 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
I0521 18:33:20.903] May 21 17:27:42.520: INFO: 
... skipping 8 lines ...
I0521 18:33:20.904] _output/local/go/src/k8s.io/kubernetes/test/e2e_node/framework.go:22
I0521 18:33:20.904]   Run node performance testing with pre-defined workloads
I0521 18:33:20.904]   _output/local/go/src/k8s.io/kubernetes/test/e2e_node/node_perf_test.go:111
I0521 18:33:20.904]     NAS parallel benchmark (NPB) suite - Embarrassingly Parallel (EP) workload [It]
I0521 18:33:20.904]     _output/local/go/src/k8s.io/kubernetes/test/e2e_node/node_perf_test.go:115
I0521 18:33:20.904] 
I0521 18:33:20.904]     Unexpected error:
I0521 18:33:20.904]         <*errors.errorString | 0xc00054b0c0>: {
I0521 18:33:20.904]             s: "pod ran to completion",
I0521 18:33:20.905]         }
I0521 18:33:20.905]         pod ran to completion
I0521 18:33:20.905]     occurred
I0521 18:33:20.905] 
I0521 18:33:20.905]     /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:112
I0521 18:33:20.905] ------------------------------
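The node-performance failure has its root cause in the OutOfcpu event above: the benchmark pod requests 15000 millicores (15 CPUs) while the node advertises roughly 1 CPU, so kubelet admission rejects the pod and the framework then surfaces the rejection as "pod ran to completion". The admission arithmetic is just a millicore comparison; a toy sketch using the numbers from the event (the function name is illustrative, not kubelet code):

```go
package main

import "fmt"

// fitsOnNode mirrors the kubelet admission check reported in the OutOfcpu
// event: a pod fits only if its CPU request plus CPU already in use stays
// within the node's capacity, all measured in millicores.
func fitsOnNode(requestedMilli, usedMilli, capacityMilli int64) bool {
	return requestedMilli+usedMilli <= capacityMilli
}

func main() {
	// Values from the event: requested 15000m, used 0m, capacity 800m.
	fmt.Println(fitsOnNode(15000, 0, 800)) // false → pod rejected with OutOfcpu
	fmt.Println(fitsOnNode(500, 0, 800))   // true → a sub-CPU request would fit
}
```

In other words, the NPB-EP workload is sized for a 16-vCPU machine but the test ran on a 1-vCPU instance; the benchmark never executed at all.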
I0521 18:33:20.905] SSSSS
I0521 18:33:20.905] ------------------------------
I0521 18:33:20.905] [sig-storage] ConfigMap 
I0521 18:33:20.905]   Should fail non-optional pod creation due to configMap object does not exist [Slow]
I0521 18:33:20.905]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:550
I0521 18:33:20.906] [BeforeEach] [sig-storage] ConfigMap
I0521 18:33:20.906]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149
I0521 18:33:20.906] STEP: Creating a kubernetes client
I0521 18:33:20.906] STEP: Building a namespace api object, basename configmap
I0521 18:33:20.906] May 21 17:27:48.573: INFO: Skipping waiting for service account
I0521 18:33:20.906] [It] Should fail non-optional pod creation due to configMap object does not exist [Slow]
I0521 18:33:20.906]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:550
I0521 18:33:20.906] May 21 17:27:48.574: INFO: Couldn't get node TTL annotation (using default value of 0): No TTL annotation found on the node
I0521 18:33:20.906] STEP: Creating the pod
I0521 18:33:20.907] [AfterEach] [sig-storage] ConfigMap
I0521 18:33:20.907]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
I0521 18:33:20.907] May 21 17:32:48.596: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0521 18:33:20.907] STEP: Destroying namespace "configmap-7935" for this suite.
I0521 18:33:20.907] May 21 17:33:10.602: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
I0521 18:33:20.907] May 21 17:33:10.642: INFO: namespace configmap-7935 deletion completed in 22.044077182s
I0521 18:33:20.907] 
I0521 18:33:20.907] • [SLOW TEST:322.071 seconds]
I0521 18:33:20.907] [sig-storage] ConfigMap
I0521 18:33:20.907] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
I0521 18:33:20.908]   Should fail non-optional pod creation due to configMap object does not exist [Slow]
I0521 18:33:20.908]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:550
I0521 18:33:20.908] ------------------------------
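The [Slow] ConfigMap test above passes precisely because the pod never starts: a configMap volume that is not marked optional blocks the pod while the referenced object is absent, whereas an optional volume mounts empty. A simplified sketch of that semantic (not the kubelet's actual code; type and function names are illustrative):

```go
package main

import (
	"errors"
	"fmt"
)

// configMapVolume models the one property the test exercises: whether a
// missing source object should block the pod.
type configMapVolume struct {
	name     string
	optional bool
}

// mount fails for a non-optional volume whose configMap is absent; an
// optional volume simply mounts with no keys.
func mount(v configMapVolume, existing map[string][]byte) error {
	if _, ok := existing[v.name]; !ok && !v.optional {
		return errors.New("configmap " + v.name + " not found")
	}
	return nil
}

func main() {
	store := map[string][]byte{} // no configMaps exist, as in the test
	fmt.Println(mount(configMapVolume{name: "missing-cm", optional: false}, store) != nil)
	fmt.Println(mount(configMapVolume{name: "missing-cm", optional: true}, store) == nil)
}
```

That is why the test's expected outcome is a pod that stays pending until the namespace is torn down, rather than an error from the API server.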
I0521 18:33:20.908] [sig-node] RuntimeClass 
I0521 18:33:20.908]   should reject a Pod requesting a non-existent RuntimeClass
I0521 18:33:20.908]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtimeclass.go:46
I0521 18:33:20.908] [BeforeEach] [sig-node] RuntimeClass
... skipping 16 lines ...
I0521 18:33:20.910]   should reject a Pod requesting a non-existent RuntimeClass
I0521 18:33:20.910]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtimeclass.go:46
I0521 18:33:20.910] ------------------------------
I0521 18:33:20.910] SSSSSS
I0521 18:33:20.910] ------------------------------
I0521 18:33:20.911] [sig-storage] Projected configMap 
I0521 18:33:20.911]   Should fail non-optional pod creation due to configMap object does not exist [Slow]
I0521 18:33:20.911]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:491
I0521 18:33:20.911] [BeforeEach] [sig-storage] Projected configMap
I0521 18:33:20.911]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149
I0521 18:33:20.911] STEP: Creating a kubernetes client
I0521 18:33:20.911] STEP: Building a namespace api object, basename projected
I0521 18:33:20.911] May 21 17:33:26.758: INFO: Skipping waiting for service account
I0521 18:33:20.911] [It] Should fail non-optional pod creation due to configMap object does not exist [Slow]
I0521 18:33:20.912]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:491
I0521 18:33:20.912] May 21 17:33:26.759: INFO: Couldn't get node TTL annotation (using default value of 0): No TTL annotation found on the node
I0521 18:33:20.912] STEP: Creating the pod
I0521 18:33:20.912] [AfterEach] [sig-storage] Projected configMap
I0521 18:33:20.912]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
I0521 18:33:20.912] May 21 17:38:26.772: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0521 18:33:20.912] STEP: Destroying namespace "projected-3561" for this suite.
I0521 18:33:20.912] May 21 17:38:48.779: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
I0521 18:33:20.912] May 21 17:38:48.824: INFO: namespace projected-3561 deletion completed in 22.050717161s
I0521 18:33:20.912] 
I0521 18:33:20.913] • [SLOW TEST:322.070 seconds]
I0521 18:33:20.913] [sig-storage] Projected configMap
I0521 18:33:20.913] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
I0521 18:33:20.913]   Should fail non-optional pod creation due to configMap object does not exist [Slow]
I0521 18:33:20.913]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:491
I0521 18:33:20.913] ------------------------------
I0521 18:33:20.913] SSSSSSSS
I0521 18:33:20.913] ------------------------------
I0521 18:33:20.913] [sig-node] Node Performance Testing [Serial] [Slow] Run node performance testing with pre-defined workloads 
I0521 18:33:20.914]   TensorFlow workload
... skipping 12 lines ...
I0521 18:33:20.915] I0521 17:38:58.303293    1304 util.go:44] Running readiness check for service "kubelet"
I0521 18:33:20.915] I0521 17:38:58.927029    1304 util.go:221] new configuration has taken effect
I0521 18:33:20.915] [It] TensorFlow workload
I0521 18:33:20.915]   _output/local/go/src/k8s.io/kubernetes/test/e2e_node/node_perf_test.go:124
I0521 18:33:20.915] STEP: running the workload and waiting for success
I0521 18:33:20.916] I0521 17:38:59.305021    1304 server.go:182] Initial health check passed for service "kubelet"
I0521 18:33:20.916] May 21 17:39:00.960: INFO: Unexpected error occurred: pod ran to completion
I0521 18:33:20.916] May 21 17:39:00.968: INFO: Waiting for pod tensorflow-wide-deep-pod to disappear
I0521 18:33:20.916] May 21 17:39:00.971: INFO: Pod tensorflow-wide-deep-pod no longer exists
I0521 18:33:20.916] STEP: running the post test exec from the workload
I0521 18:33:20.916] E0521 17:39:10.994840    1304 util.go:268] /configz response status not 200, retrying. Response was: &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Content-Length:[151] Content-Type:[text/plain; charset=utf-8] Date:[Tue, 21 May 2019 17:39:10 GMT]] Body:0xc000251ac0 ContentLength:151 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0010fae00 TLS:<nil>}
I0521 18:33:20.917] I0521 17:39:11.319952    1304 server.go:222] Restarting server "kubelet" with restart command
I0521 18:33:20.917] I0521 17:39:11.334269    1304 server.go:171] Running health check for service "kubelet"
... skipping 6 lines ...
I0521 18:33:20.918] STEP: Found 1 events.
I0521 18:33:20.918] May 21 17:39:16.011: INFO: At 2019-05-21 17:38:58 +0000 UTC - event for tensorflow-wide-deep-pod: {kubelet tmp-node-e2e-eceb20b6-cos-stable-60-9592-84-0} OutOfcpu: Node didn't have enough resource: cpu, requested: 15000, used: 0, capacity: 1000
I0521 18:33:20.918] May 21 17:39:16.012: INFO: POD  NODE  PHASE  GRACE  CONDITIONS
I0521 18:33:20.918] May 21 17:39:16.012: INFO: 
I0521 18:33:20.918] May 21 17:39:16.015: INFO: 
I0521 18:33:20.918] Logging node info for node tmp-node-e2e-eceb20b6-cos-stable-60-9592-84-0
I0521 18:33:20.922] May 21 17:39:16.017: INFO: Node Info: &Node{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:tmp-node-e2e-eceb20b6-cos-stable-60-9592-84-0,GenerateName:,Namespace:,SelfLink:/api/v1/nodes/tmp-node-e2e-eceb20b6-cos-stable-60-9592-84-0,UID:ec6c1eb6-a934-4c93-9e39-20a2153c8ba4,ResourceVersion:1330,Generation:0,CreationTimestamp:2019-05-21 17:23:18 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{beta.kubernetes.io/arch: amd64,beta.kubernetes.io/os: linux,kubernetes.io/arch: amd64,kubernetes.io/hostname: tmp-node-e2e-eceb20b6-cos-stable-60-9592-84-0,kubernetes.io/os: linux,},Annotations:map[string]string{volumes.kubernetes.io/controller-managed-attach-detach: true,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:NodeSpec{PodCIDR:,DoNotUse_ExternalID:,ProviderID:,Unschedulable:false,Taints:[],ConfigSource:&NodeConfigSource{ConfigMap:&ConfigMapNodeConfigSource{Namespace:kube-system,Name:testcfg-rtrxc,UID:,ResourceVersion:,KubeletConfigKey:kubelet,},},},Status:NodeStatus{Capacity:ResourceList{cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{16701562880 0} {<nil>}  BinarySI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3885535232 0} {<nil>} 3794468Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{15031406568 0} {<nil>} 15031406568 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3623391232 0} {<nil>} 3538468Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[{MemoryPressure False 2019-05-21 17:39:11 +0000 UTC 2019-05-21 17:23:15 +0000 UTC KubeletHasSufficientMemory kubelet has sufficient memory available} {DiskPressure False 2019-05-21 17:39:11 +0000 UTC 2019-05-21 17:23:15 +0000 UTC KubeletHasNoDiskPressure kubelet has no disk pressure} {PIDPressure False 2019-05-21 17:39:11 +0000 UTC 2019-05-21 17:23:15 +0000 UTC 
KubeletHasSufficientPID kubelet has sufficient PID available} {Ready True 2019-05-21 17:39:11 +0000 UTC 2019-05-21 17:23:15 +0000 UTC KubeletReady kubelet is posting ready status. AppArmor enabled}],Addresses:[{InternalIP 10.138.0.44} {Hostname tmp-node-e2e-eceb20b6-cos-stable-60-9592-84-0}],DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:14690504d32589f517351ff7955b0543,SystemUUID:14690504-D325-89F5-1735-1FF7955B0543,BootID:c51344dc-21a5-4a94-97df-ae0963175ad0,KernelVersion:4.4.64+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:docker://1.13.1,KubeletVersion:v1.16.0-alpha.0.288+13c11de135833a,KubeProxyVersion:v1.16.0-alpha.0.288+13c11de135833a,OperatingSystem:linux,Architecture:amd64,},Images:[{[gcr.io/kubernetes-e2e-test-images/node-perf/tf-wide-deep-amd64@sha256:80d4564d5ab49ecfea3b20f75cc676d8dfd8b2aca364ed4c1a8a55fbcaaed7f6 gcr.io/kubernetes-e2e-test-images/node-perf/tf-wide-deep-amd64:1.0] 634170972} {[gcr.io/kubernetes-e2e-test-images/volume/gluster@sha256:e2d3308b2d27499d59f120ff46dfc6c4cb307a3f207f02894ecab902583761c9 gcr.io/kubernetes-e2e-test-images/volume/gluster:1.0] 332011484} {[gcr.io/kubernetes-e2e-test-images/volume/nfs@sha256:c2ad734346f608a5f7d69cfded93c4e8094069320657bd372d12ba21dea3ea71 gcr.io/kubernetes-e2e-test-images/volume/nfs:1.0] 225358913} {[k8s.gcr.io/node-problem-detector@sha256:6e9b4a4eaa47f120be61f60573a545844de63401661812e2cfb7ae81a28efd19 k8s.gcr.io/node-problem-detector:v0.6.2] 98707739} {[gcr.io/kubernetes-e2e-test-images/node-perf/npb-is@sha256:9d08dd99565b25af37c990cd4474a4284b27e7ceb3f98328bb481edefedf8aa5 gcr.io/kubernetes-e2e-test-images/node-perf/npb-is:1.0] 96288249} {[gcr.io/kubernetes-e2e-test-images/node-perf/npb-ep@sha256:564314549347619cfcdbe6c7d042a29e133a00e922b37682890fff17ac1a7804 gcr.io/kubernetes-e2e-test-images/node-perf/npb-ep:1.0] 96286449} 
{[google/cadvisor@sha256:815386ebbe9a3490f38785ab11bda34ec8dacf4634af77b8912832d4f85dca04 google/cadvisor:latest] 69583040} {[k8s.gcr.io/nvidia-gpu-device-plugin@sha256:08509a36233c5096bb273a492251a9a5ca28558ab36d74007ca2a9d3f0b61e1d] 18976858} {[nginx@sha256:a3a0c4126587884f8d3090efca87f5af075d7e7ac8308cffc09a5a082d5f4760 nginx:1.14-alpine] 16032814} {[gcr.io/kubernetes-e2e-test-images/audit-proxy@sha256:9cf10c6bb871a9a2a45eb1634ecd36cf0e45ec9bd8ae05bf10bef981ac07cc1b gcr.io/kubernetes-e2e-test-images/audit-proxy:1.0] 13222979} {[gcr.io/kubernetes-e2e-test-images/net@sha256:973f47a88f50ccd7800f6ec300e664461e7c011c2da3a33edf32a73dd9ff9c01 gcr.io/kubernetes-e2e-test-images/net:1.0] 11393460} {[gcr.io/kubernetes-e2e-test-images/ipc-utils@sha256:bb127be3a1ecac0516f672a5e223d94fe6021021534ecb7a02a607a63154c3d8 gcr.io/kubernetes-e2e-test-images/ipc-utils:1.0] 10039224} {[gcr.io/kubernetes-e2e-test-images/hostexec@sha256:90dfe59da029f9e536385037bc64e86cd3d6e55bae613ddbe69e554d79b0639d gcr.io/kubernetes-e2e-test-images/hostexec:1.1] 8490662} {[gcr.io/kubernetes-e2e-test-images/nonewprivs@sha256:10066e9039219449fe3c81f38fe01928f87914150768ab81b62a468e51fa7411 gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0] 6757579} {[gcr.io/kubernetes-e2e-test-images/netexec@sha256:203f0e11dde4baf4b08e27de094890eb3447d807c8b3e990b764b799d3a9e8b7 gcr.io/kubernetes-e2e-test-images/netexec:1.1] 6705349} {[gcr.io/kubernetes-e2e-test-images/serve-hostname@sha256:bab70473a6d8ef65a22625dc9a1b0f0452e811530fdbe77e4408523460177ff1 gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1] 5851985} {[gcr.io/kubernetes-e2e-test-images/liveness@sha256:71c3fc838e0637df570497febafa0ee73bf47176dfd43612de5c55a71230674e gcr.io/kubernetes-e2e-test-images/liveness:1.1] 5829944} {[k8s.gcr.io/stress@sha256:f00aa1ddc963a3164aef741aab0fc05074ea96de6cd7e0d10077cf98dd72d594 k8s.gcr.io/stress:v1] 5494760} 
{[gcr.io/kubernetes-e2e-test-images/test-webserver@sha256:7f93d6e32798ff28bc6289254d0c2867fe2c849c8e46edc50f8624734309812e gcr.io/kubernetes-e2e-test-images/test-webserver:1.0] 4732240} {[gcr.io/kubernetes-e2e-test-images/entrypoint-tester@sha256:ba4681b5299884a3adca70fbde40638373b437a881055ffcd0935b5f43eb15c9 gcr.io/kubernetes-e2e-test-images/entrypoint-tester:1.0] 2729534} {[gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 gcr.io/kubernetes-e2e-test-images/mounttest:1.0] 1563521} {[gcr.io/kubernetes-e2e-test-images/mounttest-user@sha256:17319ca525ee003681fccf7e8c6b1b910ff4f49b653d939ac7f9b6e7c463933d gcr.io/kubernetes-e2e-test-images/mounttest-user:1.0] 1450451} {[busybox@sha256:e004c2cc521c95383aebb1fb5893719aa7a8eae2e7a71f316a4410784edb00a9 busybox:1.29] 1154361} {[k8s.gcr.io/busybox@sha256:4bdd623e848417d96127e16037743f0cd8b528c026e9175e22a84f639eca58ff] 1113554} {[k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea k8s.gcr.io/pause:3.1] 742472}],VolumesInUse:[],VolumesAttached:[],Config:&NodeConfigStatus{Assigned:&NodeConfigSource{ConfigMap:&ConfigMapNodeConfigSource{Namespace:kube-system,Name:testcfg-rtrxc,UID:8d119032-a51a-4e4e-9f9a-6a034f801921,ResourceVersion:1318,KubeletConfigKey:kubelet,},},Active:&NodeConfigSource{ConfigMap:&ConfigMapNodeConfigSource{Namespace:kube-system,Name:testcfg-rtrxc,UID:8d119032-a51a-4e4e-9f9a-6a034f801921,ResourceVersion:1318,KubeletConfigKey:kubelet,},},LastKnownGood:&NodeConfigSource{ConfigMap:&ConfigMapNodeConfigSource{Namespace:kube-system,Name:testcfg-rtdpf,UID:8d7838d9-653e-487e-9763-f87615167575,ResourceVersion:1081,KubeletConfigKey:kubelet,},},Error:,},},}
I0521 18:33:20.923] May 21 17:39:16.017: INFO: 
I0521 18:33:20.923] Logging kubelet events for node tmp-node-e2e-eceb20b6-cos-stable-60-9592-84-0
I0521 18:33:20.923] May 21 17:39:16.018: INFO: 
I0521 18:33:20.923] Logging pods the kubelet thinks are on node tmp-node-e2e-eceb20b6-cos-stable-60-9592-84-0
I0521 18:33:20.923] W0521 17:39:16.024665    1304 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
I0521 18:33:20.923] May 21 17:39:16.036: INFO: 
... skipping 8 lines ...
I0521 18:33:20.924] _output/local/go/src/k8s.io/kubernetes/test/e2e_node/framework.go:22
I0521 18:33:20.924]   Run node performance testing with pre-defined workloads
I0521 18:33:20.924]   _output/local/go/src/k8s.io/kubernetes/test/e2e_node/node_perf_test.go:120
I0521 18:33:20.924]     TensorFlow workload [It]
I0521 18:33:20.925]     _output/local/go/src/k8s.io/kubernetes/test/e2e_node/node_perf_test.go:124
I0521 18:33:20.925] 
I0521 18:33:20.925]     Unexpected error:
I0521 18:33:20.925]         <*errors.errorString | 0xc00054b0c0>: {
I0521 18:33:20.925]             s: "pod ran to completion",
I0521 18:33:20.925]         }
I0521 18:33:20.925]         pod ran to completion
I0521 18:33:20.925]     occurred
I0521 18:33:20.925] 
I0521 18:33:20.925]     /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:112
I0521 18:33:20.925] ------------------------------
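The "pod ran to completion" error above comes from the e2e framework's pod-wait helper: while waiting for a pod to become Running, a pod that has already reached a terminal phase can never run, so the wait fails immediately. A minimal sketch of that check (illustrative only, not the framework's actual Go source):

```python
# Hedged sketch of the logic behind "pod ran to completion": a pod in a
# terminal phase (Succeeded or Failed) will never transition to Running,
# so waiting any longer is pointless and the test fails fast.

TERMINAL_PHASES = {"Succeeded", "Failed"}

def pod_running_check(phase: str) -> bool:
    """Return True when Running; raise when the pod can never reach Running."""
    if phase == "Running":
        return True
    if phase in TERMINAL_PHASES:
        raise RuntimeError("pod ran to completion")
    return False  # still Pending/Unknown; caller keeps polling
```

Here the TensorFlow workload pod was rejected at admission (see the OutOfcpu event later in the log), landed in a terminal phase, and tripped this condition.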
I0521 18:33:20.926] SSSSSSSSSSSSSSSSS
I0521 18:33:20.926] ------------------------------
I0521 18:33:20.926] [sig-storage] Projected secret 
I0521 18:33:20.926]   Should fail non-optional pod creation because the key in the secret object does not exist [Slow]
I0521 18:33:20.926]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:419
I0521 18:33:20.926] [BeforeEach] [sig-storage] Projected secret
I0521 18:33:20.926]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149
I0521 18:33:20.926] STEP: Creating a kubernetes client
I0521 18:33:20.926] STEP: Building a namespace api object, basename projected
I0521 18:33:20.927] May 21 17:39:22.090: INFO: Skipping waiting for service account
I0521 18:33:20.927] [It] Should fail non-optional pod creation because the key in the secret object does not exist [Slow]
I0521 18:33:20.927]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:419
I0521 18:33:20.927] May 21 17:39:22.092: INFO: Couldn't get node TTL annotation (using default value of 0): No TTL annotation found on the node
I0521 18:33:20.927] STEP: Creating secret with name s-test-opt-create-ec5e6f1c-ad07-4bcd-8b65-ab7bc27d1039
I0521 18:33:20.927] STEP: Creating the pod
I0521 18:33:20.928] [AfterEach] [sig-storage] Projected secret
I0521 18:33:20.928]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
... skipping 2 lines ...
I0521 18:33:20.928] May 21 17:44:44.127: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
I0521 18:33:20.929] May 21 17:44:44.166: INFO: namespace projected-7824 deletion completed in 22.044241805s
I0521 18:33:20.929] 
I0521 18:33:20.929] • [SLOW TEST:322.079 seconds]
I0521 18:33:20.929] [sig-storage] Projected secret
I0521 18:33:20.929] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
I0521 18:33:20.929]   Should fail non-optional pod creation because the key in the secret object does not exist [Slow]
I0521 18:33:20.930]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:419
I0521 18:33:20.930] ------------------------------
I0521 18:33:20.930] SS
I0521 18:33:20.930] ------------------------------
I0521 18:33:20.930] [sig-storage] Projected secret 
I0521 18:33:20.930]   Should fail non-optional pod creation because the secret object does not exist [Slow]
I0521 18:33:20.931]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:409
I0521 18:33:20.931] [BeforeEach] [sig-storage] Projected secret
I0521 18:33:20.931]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149
I0521 18:33:20.931] STEP: Creating a kubernetes client
I0521 18:33:20.931] STEP: Building a namespace api object, basename projected
I0521 18:33:20.931] May 21 17:44:44.168: INFO: Skipping waiting for service account
I0521 18:33:20.931] [It] Should fail non-optional pod creation because the secret object does not exist [Slow]
I0521 18:33:20.931]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:409
I0521 18:33:20.931] May 21 17:44:44.170: INFO: Couldn't get node TTL annotation (using default value of 0): No TTL annotation found on the node
I0521 18:33:20.932] STEP: Creating the pod
I0521 18:33:20.932] [AfterEach] [sig-storage] Projected secret
I0521 18:33:20.932]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
I0521 18:33:20.932] May 21 17:49:44.186: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0521 18:33:20.932] STEP: Destroying namespace "projected-2891" for this suite.
I0521 18:33:20.932] May 21 17:50:06.192: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
I0521 18:33:20.932] May 21 17:50:06.234: INFO: namespace projected-2891 deletion completed in 22.045966099s
I0521 18:33:20.932] 
I0521 18:33:20.932] • [SLOW TEST:322.068 seconds]
I0521 18:33:20.933] [sig-storage] Projected secret
I0521 18:33:20.933] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
I0521 18:33:20.933]   Should fail non-optional pod creation because the secret object does not exist [Slow]
I0521 18:33:20.933]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:409
I0521 18:33:20.933] ------------------------------
I0521 18:33:20.933] SSSS
I0521 18:33:20.933] ------------------------------
I0521 18:33:20.933] [sig-storage] GCP Volumes GlusterFS 
I0521 18:33:20.933]   should be mountable
... skipping 133 lines ...
I0521 18:33:20.948]   when querying /resource/metrics
I0521 18:33:20.948]   _output/local/go/src/k8s.io/kubernetes/test/e2e_node/resource_metrics_test.go:45
I0521 18:33:20.948]     should report resource usage through the v1alpha1 resource metrics api
I0521 18:33:20.948]     _output/local/go/src/k8s.io/kubernetes/test/e2e_node/resource_metrics_test.go:66
I0521 18:33:20.949] ------------------------------
I0521 18:33:20.949] [sig-storage] Projected configMap 
I0521 18:33:20.949]   Should fail non-optional pod creation because the key in the configMap object does not exist [Slow]
I0521 18:33:20.949]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:501
I0521 18:33:20.949] [BeforeEach] [sig-storage] Projected configMap
I0521 18:33:20.949]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149
I0521 18:33:20.949] STEP: Creating a kubernetes client
I0521 18:33:20.950] STEP: Building a namespace api object, basename projected
I0521 18:33:20.950] May 21 17:52:48.406: INFO: Skipping waiting for service account
I0521 18:33:20.950] [It] Should fail non-optional pod creation because the key in the configMap object does not exist [Slow]
I0521 18:33:20.950]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:501
I0521 18:33:20.950] May 21 17:52:48.407: INFO: Couldn't get node TTL annotation (using default value of 0): No TTL annotation found on the node
I0521 18:33:20.950] STEP: Creating configMap with name cm-test-opt-create-694224f2-d12d-41e9-b65e-3cf4617d4f94
I0521 18:33:20.950] STEP: Creating the pod
I0521 18:33:20.950] [AfterEach] [sig-storage] Projected configMap
I0521 18:33:20.951]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
... skipping 2 lines ...
I0521 18:33:20.951] May 21 17:58:10.439: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
I0521 18:33:20.951] May 21 17:58:10.480: INFO: namespace projected-8842 deletion completed in 22.046248756s
I0521 18:33:20.951] 
I0521 18:33:20.951] • [SLOW TEST:322.077 seconds]
I0521 18:33:20.951] [sig-storage] Projected configMap
I0521 18:33:20.952] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
I0521 18:33:20.952]   Should fail non-optional pod creation because the key in the configMap object does not exist [Slow]
I0521 18:33:20.952]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:501
I0521 18:33:20.952] ------------------------------
I0521 18:33:20.952] SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
I0521 18:33:20.952] ------------------------------
I0521 18:33:20.953] [sig-storage] GCP Volumes NFSv4 
I0521 18:33:20.953]   should be mountable for NFSv4
... skipping 71 lines ...
I0521 18:33:20.962] [JustBeforeEach] [sig-node] Node Performance Testing [Serial] [Slow]
I0521 18:33:20.962]   _output/local/go/src/k8s.io/kubernetes/test/e2e_node/node_perf_test.go:65
I0521 18:33:20.962] I0521 17:58:22.615469    1304 util.go:221] new configuration has taken effect
I0521 18:33:20.962] [It] NAS parallel benchmark (NPB) suite - Integer Sort (IS) workload
I0521 18:33:20.962]   _output/local/go/src/k8s.io/kubernetes/test/e2e_node/node_perf_test.go:106
I0521 18:33:20.963] STEP: running the workload and waiting for success
I0521 18:33:20.963] May 21 17:58:24.629: INFO: Unexpected error occurred: pod ran to completion
I0521 18:33:20.963] May 21 17:58:24.637: INFO: Waiting for pod npb-is-pod to disappear
I0521 18:33:20.963] May 21 17:58:24.641: INFO: Pod npb-is-pod no longer exists
I0521 18:33:20.963] STEP: running the post test exec from the workload
I0521 18:33:20.963] I0521 17:58:24.654635    1304 util.go:221] new configuration has taken effect
I0521 18:33:20.963] [AfterEach] [sig-node] Node Performance Testing [Serial] [Slow]
I0521 18:33:20.964]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
I0521 18:33:20.964] STEP: Collecting events from namespace "node-performance-testing-6506".
I0521 18:33:20.964] STEP: Found 1 events.
I0521 18:33:20.964] May 21 17:58:24.658: INFO: At 2019-05-21 17:58:22 +0000 UTC - event for npb-is-pod: {kubelet tmp-node-e2e-eceb20b6-cos-stable-60-9592-84-0} OutOfcpu: Node didn't have enough resource: cpu, requested: 16000, used: 0, capacity: 1000
I0521 18:33:20.964] May 21 17:58:24.659: INFO: POD  NODE  PHASE  GRACE  CONDITIONS
I0521 18:33:20.965] May 21 17:58:24.659: INFO: 
I0521 18:33:20.965] May 21 17:58:24.661: INFO: 
I0521 18:33:20.965] Logging node info for node tmp-node-e2e-eceb20b6-cos-stable-60-9592-84-0
I0521 18:33:20.971] May 21 17:58:24.663: INFO: Node Info: &Node{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:tmp-node-e2e-eceb20b6-cos-stable-60-9592-84-0,GenerateName:,Namespace:,SelfLink:/api/v1/nodes/tmp-node-e2e-eceb20b6-cos-stable-60-9592-84-0,UID:ec6c1eb6-a934-4c93-9e39-20a2153c8ba4,ResourceVersion:1716,Generation:0,CreationTimestamp:2019-05-21 17:23:18 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{beta.kubernetes.io/arch: amd64,beta.kubernetes.io/os: linux,kubernetes.io/arch: amd64,kubernetes.io/hostname: tmp-node-e2e-eceb20b6-cos-stable-60-9592-84-0,kubernetes.io/os: linux,},Annotations:map[string]string{volumes.kubernetes.io/controller-managed-attach-detach: true,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:NodeSpec{PodCIDR:,DoNotUse_ExternalID:,ProviderID:,Unschedulable:false,Taints:[],ConfigSource:&NodeConfigSource{ConfigMap:&ConfigMapNodeConfigSource{Namespace:kube-system,Name:testcfg-dksg8,UID:,ResourceVersion:,KubeletConfigKey:kubelet,},},},Status:NodeStatus{Capacity:ResourceList{cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{16701562880 0} {<nil>}  BinarySI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3885535232 0} {<nil>} 3794468Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{15031406568 0} {<nil>} 15031406568 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3623391232 0} {<nil>} 3538468Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[{MemoryPressure False 2019-05-21 17:58:14 +0000 UTC 2019-05-21 17:23:15 +0000 UTC KubeletHasSufficientMemory kubelet has sufficient memory available} {DiskPressure False 2019-05-21 17:58:14 +0000 UTC 2019-05-21 17:23:15 +0000 UTC KubeletHasNoDiskPressure kubelet has no disk pressure} {PIDPressure False 2019-05-21 17:58:14 +0000 UTC 2019-05-21 17:23:15 +0000 UTC 
KubeletHasSufficientPID kubelet has sufficient PID available} {Ready True 2019-05-21 17:58:14 +0000 UTC 2019-05-21 17:23:15 +0000 UTC KubeletReady kubelet is posting ready status. AppArmor enabled}],Addresses:[{InternalIP 10.138.0.44} {Hostname tmp-node-e2e-eceb20b6-cos-stable-60-9592-84-0}],DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:14690504d32589f517351ff7955b0543,SystemUUID:14690504-D325-89F5-1735-1FF7955B0543,BootID:c51344dc-21a5-4a94-97df-ae0963175ad0,KernelVersion:4.4.64+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:docker://1.13.1,KubeletVersion:v1.16.0-alpha.0.288+13c11de135833a,KubeProxyVersion:v1.16.0-alpha.0.288+13c11de135833a,OperatingSystem:linux,Architecture:amd64,},Images:[{[gcr.io/kubernetes-e2e-test-images/node-perf/tf-wide-deep-amd64@sha256:80d4564d5ab49ecfea3b20f75cc676d8dfd8b2aca364ed4c1a8a55fbcaaed7f6 gcr.io/kubernetes-e2e-test-images/node-perf/tf-wide-deep-amd64:1.0] 634170972} {[gcr.io/kubernetes-e2e-test-images/volume/gluster@sha256:e2d3308b2d27499d59f120ff46dfc6c4cb307a3f207f02894ecab902583761c9 gcr.io/kubernetes-e2e-test-images/volume/gluster:1.0] 332011484} {[gcr.io/kubernetes-e2e-test-images/volume/nfs@sha256:c2ad734346f608a5f7d69cfded93c4e8094069320657bd372d12ba21dea3ea71 gcr.io/kubernetes-e2e-test-images/volume/nfs:1.0] 225358913} {[k8s.gcr.io/node-problem-detector@sha256:6e9b4a4eaa47f120be61f60573a545844de63401661812e2cfb7ae81a28efd19 k8s.gcr.io/node-problem-detector:v0.6.2] 98707739} {[gcr.io/kubernetes-e2e-test-images/node-perf/npb-is@sha256:9d08dd99565b25af37c990cd4474a4284b27e7ceb3f98328bb481edefedf8aa5 gcr.io/kubernetes-e2e-test-images/node-perf/npb-is:1.0] 96288249} {[gcr.io/kubernetes-e2e-test-images/node-perf/npb-ep@sha256:564314549347619cfcdbe6c7d042a29e133a00e922b37682890fff17ac1a7804 gcr.io/kubernetes-e2e-test-images/node-perf/npb-ep:1.0] 96286449} 
{[google/cadvisor@sha256:815386ebbe9a3490f38785ab11bda34ec8dacf4634af77b8912832d4f85dca04 google/cadvisor:latest] 69583040} {[k8s.gcr.io/nvidia-gpu-device-plugin@sha256:08509a36233c5096bb273a492251a9a5ca28558ab36d74007ca2a9d3f0b61e1d] 18976858} {[nginx@sha256:a3a0c4126587884f8d3090efca87f5af075d7e7ac8308cffc09a5a082d5f4760 nginx:1.14-alpine] 16032814} {[gcr.io/kubernetes-e2e-test-images/audit-proxy@sha256:9cf10c6bb871a9a2a45eb1634ecd36cf0e45ec9bd8ae05bf10bef981ac07cc1b gcr.io/kubernetes-e2e-test-images/audit-proxy:1.0] 13222979} {[gcr.io/kubernetes-e2e-test-images/net@sha256:973f47a88f50ccd7800f6ec300e664461e7c011c2da3a33edf32a73dd9ff9c01 gcr.io/kubernetes-e2e-test-images/net:1.0] 11393460} {[gcr.io/kubernetes-e2e-test-images/ipc-utils@sha256:bb127be3a1ecac0516f672a5e223d94fe6021021534ecb7a02a607a63154c3d8 gcr.io/kubernetes-e2e-test-images/ipc-utils:1.0] 10039224} {[gcr.io/kubernetes-e2e-test-images/hostexec@sha256:90dfe59da029f9e536385037bc64e86cd3d6e55bae613ddbe69e554d79b0639d gcr.io/kubernetes-e2e-test-images/hostexec:1.1] 8490662} {[gcr.io/kubernetes-e2e-test-images/nonewprivs@sha256:10066e9039219449fe3c81f38fe01928f87914150768ab81b62a468e51fa7411 gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0] 6757579} {[gcr.io/kubernetes-e2e-test-images/netexec@sha256:203f0e11dde4baf4b08e27de094890eb3447d807c8b3e990b764b799d3a9e8b7 gcr.io/kubernetes-e2e-test-images/netexec:1.1] 6705349} {[gcr.io/kubernetes-e2e-test-images/serve-hostname@sha256:bab70473a6d8ef65a22625dc9a1b0f0452e811530fdbe77e4408523460177ff1 gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1] 5851985} {[gcr.io/kubernetes-e2e-test-images/liveness@sha256:71c3fc838e0637df570497febafa0ee73bf47176dfd43612de5c55a71230674e gcr.io/kubernetes-e2e-test-images/liveness:1.1] 5829944} {[k8s.gcr.io/stress@sha256:f00aa1ddc963a3164aef741aab0fc05074ea96de6cd7e0d10077cf98dd72d594 k8s.gcr.io/stress:v1] 5494760} 
{[gcr.io/kubernetes-e2e-test-images/test-webserver@sha256:7f93d6e32798ff28bc6289254d0c2867fe2c849c8e46edc50f8624734309812e gcr.io/kubernetes-e2e-test-images/test-webserver:1.0] 4732240} {[gcr.io/kubernetes-e2e-test-images/entrypoint-tester@sha256:ba4681b5299884a3adca70fbde40638373b437a881055ffcd0935b5f43eb15c9 gcr.io/kubernetes-e2e-test-images/entrypoint-tester:1.0] 2729534} {[gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 gcr.io/kubernetes-e2e-test-images/mounttest:1.0] 1563521} {[gcr.io/kubernetes-e2e-test-images/mounttest-user@sha256:17319ca525ee003681fccf7e8c6b1b910ff4f49b653d939ac7f9b6e7c463933d gcr.io/kubernetes-e2e-test-images/mounttest-user:1.0] 1450451} {[busybox@sha256:e004c2cc521c95383aebb1fb5893719aa7a8eae2e7a71f316a4410784edb00a9 busybox:1.29] 1154361} {[k8s.gcr.io/busybox@sha256:4bdd623e848417d96127e16037743f0cd8b528c026e9175e22a84f639eca58ff] 1113554} {[k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea k8s.gcr.io/pause:3.1] 742472}],VolumesInUse:[],VolumesAttached:[],Config:&NodeConfigStatus{Assigned:&NodeConfigSource{ConfigMap:&ConfigMapNodeConfigSource{Namespace:kube-system,Name:testcfg-rtrxc,UID:8d119032-a51a-4e4e-9f9a-6a034f801921,ResourceVersion:1318,KubeletConfigKey:kubelet,},},Active:&NodeConfigSource{ConfigMap:&ConfigMapNodeConfigSource{Namespace:kube-system,Name:testcfg-rtrxc,UID:8d119032-a51a-4e4e-9f9a-6a034f801921,ResourceVersion:1318,KubeletConfigKey:kubelet,},},LastKnownGood:&NodeConfigSource{ConfigMap:&ConfigMapNodeConfigSource{Namespace:kube-system,Name:testcfg-rtrxc,UID:8d119032-a51a-4e4e-9f9a-6a034f801921,ResourceVersion:1318,KubeletConfigKey:kubelet,},},Error:,},},}
I0521 18:33:20.971] May 21 17:58:24.663: INFO: 
I0521 18:33:20.971] Logging kubelet events for node tmp-node-e2e-eceb20b6-cos-stable-60-9592-84-0
I0521 18:33:20.971] May 21 17:58:24.664: INFO: 
I0521 18:33:20.972] Logging pods the kubelet thinks are on node tmp-node-e2e-eceb20b6-cos-stable-60-9592-84-0
I0521 18:33:20.972] W0521 17:58:24.668272    1304 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
I0521 18:33:20.972] May 21 17:58:24.701: INFO: 
... skipping 12 lines ...
I0521 18:33:20.975] _output/local/go/src/k8s.io/kubernetes/test/e2e_node/framework.go:22
I0521 18:33:20.975]   Run node performance testing with pre-defined workloads
I0521 18:33:20.975]   _output/local/go/src/k8s.io/kubernetes/test/e2e_node/node_perf_test.go:102
I0521 18:33:20.975]     NAS parallel benchmark (NPB) suite - Integer Sort (IS) workload [It]
I0521 18:33:20.975]     _output/local/go/src/k8s.io/kubernetes/test/e2e_node/node_perf_test.go:106
I0521 18:33:20.975] 
I0521 18:33:20.975]     Unexpected error:
I0521 18:33:20.975]         <*errors.errorString | 0xc00054b0c0>: {
I0521 18:33:20.976]             s: "pod ran to completion",
I0521 18:33:20.976]         }
I0521 18:33:20.977]         pod ran to completion
I0521 18:33:20.977]     occurred
I0521 18:33:20.977] 
... skipping 24 lines ...
I0521 18:33:20.980]   should reject a Pod requesting a RuntimeClass with an unconfigured handler
I0521 18:33:20.980]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtimeclass.go:52
I0521 18:33:20.980] ------------------------------
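The NPB-IS failure above traces back to the OutOfcpu event the kubelet logged: the pod requested 16000 millicores of CPU (16 CPUs) on a node whose capacity is 1000 millicores (1 CPU), so admission rejected it. A minimal sketch of that fit check, using the millicore values from the event (illustrative only, not the kubelet's admission code):

```python
# Illustrative sketch (not kubelet source): CPU is accounted in millicores.
# A request fits only if it, plus CPU already in use, is within node capacity.

def fits_on_node(requested_millicpu: int, used_millicpu: int, capacity_millicpu: int) -> bool:
    """Return True if the request fits in the node's remaining CPU capacity."""
    return requested_millicpu + used_millicpu <= capacity_millicpu

# Values from the OutOfcpu event: requested: 16000, used: 0, capacity: 1000
print(fits_on_node(16000, 0, 1000))  # -> False
```

Because the pod could never be admitted, it went straight to a terminal phase, which the test framework then surfaced as "pod ran to completion".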
I0521 18:33:20.980] SSSSS
I0521 18:33:20.980] ------------------------------
I0521 18:33:20.980] [sig-storage] ConfigMap 
I0521 18:33:20.980]   Should fail non-optional pod creation because the key in the configMap object does not exist [Slow]
I0521 18:33:20.980]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:560
I0521 18:33:20.980] [BeforeEach] [sig-storage] ConfigMap
I0521 18:33:20.981]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149
I0521 18:33:20.981] STEP: Creating a kubernetes client
I0521 18:33:20.981] STEP: Building a namespace api object, basename configmap
I0521 18:33:20.981] May 21 17:58:54.825: INFO: Skipping waiting for service account
I0521 18:33:20.981] [It] Should fail non-optional pod creation because the key in the configMap object does not exist [Slow]
I0521 18:33:20.981]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:560
I0521 18:33:20.982] May 21 17:58:54.826: INFO: Couldn't get node TTL annotation (using default value of 0): No TTL annotation found on the node
I0521 18:33:20.982] STEP: Creating configMap with name cm-test-opt-create-3f161f50-3f87-4129-8da0-14039a757af0
I0521 18:33:20.982] STEP: Creating the pod
I0521 18:33:20.982] [AfterEach] [sig-storage] ConfigMap
I0521 18:33:20.982]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
... skipping 2 lines ...
I0521 18:33:20.983] May 21 18:04:16.854: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
I0521 18:33:20.983] May 21 18:04:16.898: INFO: namespace configmap-4661 deletion completed in 22.04949479s
I0521 18:33:20.983] 
I0521 18:33:20.984] • [SLOW TEST:322.076 seconds]
I0521 18:33:20.984] [sig-storage] ConfigMap
I0521 18:33:20.984] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
I0521 18:33:20.984]   Should fail non-optional pod creation because the key in the configMap object does not exist [Slow]
I0521 18:33:20.984]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:560
I0521 18:33:20.985] ------------------------------
I0521 18:33:20.985] SSSSS
I0521 18:33:20.985] ------------------------------
I0521 18:33:20.985] [k8s.io] NodeLease when the NodeLease feature is enabled 
I0521 18:33:20.985]   the kubelet should report node status infrequently
... skipping 41 lines ...
I0521 18:33:20.993]     the kubelet should report node status infrequently
I0521 18:33:20.994]     /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/node_lease.go:87
I0521 18:33:20.994] ------------------------------
I0521 18:33:20.994] SSSSSS
I0521 18:33:20.994] ------------------------------
I0521 18:33:20.994] [sig-node] ConfigMap 
I0521 18:33:20.995]   should fail to create ConfigMap with empty key [Conformance]
I0521 18:33:20.995]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:696
I0521 18:33:20.995] [BeforeEach] [sig-node] ConfigMap
I0521 18:33:20.995]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149
I0521 18:33:20.995] STEP: Creating a kubernetes client
I0521 18:33:20.995] STEP: Building a namespace api object, basename configmap
I0521 18:33:20.996] May 21 18:04:36.960: INFO: Skipping waiting for service account
I0521 18:33:20.996] [It] should fail to create ConfigMap with empty key [Conformance]
I0521 18:33:20.996]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:696
I0521 18:33:20.996] STEP: Creating configMap that has name configmap-test-emptyKey-a8ec852c-8112-4b61-8825-f9a3e0897344
I0521 18:33:20.996] [AfterEach] [sig-node] ConfigMap
I0521 18:33:20.996]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
I0521 18:33:20.997] May 21 18:04:37.011: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0521 18:33:20.997] STEP: Destroying namespace "configmap-6476" for this suite.
I0521 18:33:20.997] May 21 18:04:43.019: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
I0521 18:33:20.997] May 21 18:04:43.059: INFO: namespace configmap-6476 deletion completed in 6.045683969s
I0521 18:33:20.997] 
I0521 18:33:20.997] • [SLOW TEST:6.102 seconds]
I0521 18:33:20.997] [sig-node] ConfigMap
I0521 18:33:20.997] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:32
I0521 18:33:20.998]   should fail to create ConfigMap with empty key [Conformance]
I0521 18:33:20.998]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:696
I0521 18:33:20.998] ------------------------------
I0521 18:33:20.998] SSSSSSSSSSSSS
I0521 18:33:20.998] ------------------------------
I0521 18:33:20.998] [k8s.io] Density [Serial] [Slow] create a batch of pods 
I0521 18:33:20.998]   latency/resource should be within limit when create 10 pods with 0s interval
... skipping 92 lines ...
I0521 18:33:21.010]     latency/resource should be within limit when create 10 pods with 0s interval
I0521 18:33:21.010]     _output/local/go/src/k8s.io/kubernetes/test/e2e_node/density_test.go:100
I0521 18:33:21.010] ------------------------------
I0521 18:33:21.010] SSSSSSSSS
I0521 18:33:21.010] ------------------------------
I0521 18:33:21.010] [sig-storage] Secrets 
I0521 18:33:21.010]   Should fail non-optional pod creation because the key in the secret object does not exist [Slow]
I0521 18:33:21.010]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:384
I0521 18:33:21.011] [BeforeEach] [sig-storage] Secrets
I0521 18:33:21.011]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149
I0521 18:33:21.011] STEP: Creating a kubernetes client
I0521 18:33:21.011] STEP: Building a namespace api object, basename secrets
I0521 18:33:21.011] May 21 18:06:32.388: INFO: Skipping waiting for service account
I0521 18:33:21.012] [It] Should fail non-optional pod creation because the key in the secret object does not exist [Slow]
I0521 18:33:21.012]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:384
I0521 18:33:21.012] May 21 18:06:32.390: INFO: Couldn't get node TTL annotation (using default value of 0): No TTL annotation found on the node
I0521 18:33:21.012] STEP: Creating secret with name s-test-opt-create-f450651e-d3bd-4c43-aa3a-c64a4df0db5a
I0521 18:33:21.012] STEP: Creating the pod
I0521 18:33:21.012] [AfterEach] [sig-storage] Secrets
I0521 18:33:21.012]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
... skipping 2 lines ...
I0521 18:33:21.013] May 21 18:11:54.419: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
I0521 18:33:21.013] May 21 18:11:54.463: INFO: namespace secrets-5709 deletion completed in 22.047923879s
I0521 18:33:21.013] 
I0521 18:33:21.013] • [SLOW TEST:322.077 seconds]
I0521 18:33:21.013] [sig-storage] Secrets
I0521 18:33:21.014] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
I0521 18:33:21.014]   Should fail non-optional pod creation because the key in the secret object does not exist [Slow]
I0521 18:33:21.014]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:384
I0521 18:33:21.014] ------------------------------
I0521 18:33:21.014] SSSSSSSSSSSS
I0521 18:33:21.014] ------------------------------
I0521 18:33:21.014] [sig-storage] Secrets 
I0521 18:33:21.014]   Should fail non-optional pod creation because the secret object does not exist [Slow]
I0521 18:33:21.014]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:374
I0521 18:33:21.015] [BeforeEach] [sig-storage] Secrets
I0521 18:33:21.015]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149
I0521 18:33:21.015] STEP: Creating a kubernetes client
I0521 18:33:21.015] STEP: Building a namespace api object, basename secrets
I0521 18:33:21.015] May 21 18:11:54.466: INFO: Skipping waiting for service account
I0521 18:33:21.015] [It] Should fail non-optional pod creation because the secret object does not exist [Slow]
I0521 18:33:21.015]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:374
I0521 18:33:21.016] May 21 18:11:54.467: INFO: Couldn't get node TTL annotation (using default value of 0): No TTL annotation found on the node
I0521 18:33:21.016] STEP: Creating the pod
I0521 18:33:21.016] [AfterEach] [sig-storage] Secrets
I0521 18:33:21.016]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
I0521 18:33:21.016] May 21 18:16:54.480: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0521 18:33:21.016] STEP: Destroying namespace "secrets-8562" for this suite.
I0521 18:33:21.016] May 21 18:17:16.487: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
I0521 18:33:21.016] May 21 18:17:16.530: INFO: namespace secrets-8562 deletion completed in 22.048262322s
I0521 18:33:21.016] 
I0521 18:33:21.016] • [SLOW TEST:322.068 seconds]
I0521 18:33:21.016] [sig-storage] Secrets
I0521 18:33:21.017] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
I0521 18:33:21.017]   Should fail non-optional pod creation because the secret object does not exist [Slow]
I0521 18:33:21.017]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:374
I0521 18:33:21.017] ------------------------------
I0521 18:33:21.017] [sig-node] Resource-usage [Serial] [Slow] regular resource usage tracking 
I0521 18:33:21.017]   resource tracking for 10 pods per node
I0521 18:33:21.017]   _output/local/go/src/k8s.io/kubernetes/test/e2e_node/resource_usage_test.go:84
I0521 18:33:21.018] [BeforeEach] [sig-node] Resource-usage [Serial] [Slow]
... skipping 82 lines ...
I0521 18:33:21.030] STEP: Destroying namespace "resource-usage-2015" for this suite.
I0521 18:33:21.030] May 21 18:28:13.927: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
I0521 18:33:21.030] May 21 18:28:13.968: INFO: namespace resource-usage-2015 deletion completed in 6.045884374s
I0521 18:33:21.030] [AfterEach] [sig-node] Resource-usage [Serial] [Slow]
I0521 18:33:21.031]   _output/local/go/src/k8s.io/kubernetes/test/e2e_node/resource_usage_test.go:58
I0521 18:33:21.031] W0521 18:28:13.969956    1304 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
I0521 18:33:21.031] May 21 18:28:14.003: INFO: runtime operation error metrics:
I0521 18:33:21.031] node "tmp-node-e2e-eceb20b6-cos-stable-60-9592-84-0" runtime operation error rate:
I0521 18:33:21.031] operation "remove_container": total - 11; error rate - 0.000000; timeout rate - 0.000000
I0521 18:33:21.031] operation "version": total - 198; error rate - 0.000000; timeout rate - 0.000000
I0521 18:33:21.031] operation "create_container": total - 22; error rate - 0.000000; timeout rate - 0.000000
I0521 18:33:21.031] operation "info": total - 0; error rate - NaN; timeout rate - NaN
I0521 18:33:21.032] operation "inspect_container": total - 192; error rate - 0.010417; timeout rate - 0.000000
I0521 18:33:21.032] operation "stop_container": total - 41; error rate - 0.000000; timeout rate - 0.000000
I0521 18:33:21.032] operation "list_images": total - 91; error rate - 0.000000; timeout rate - 0.000000
I0521 18:33:21.032] operation "start_container": total - 22; error rate - 0.000000; timeout rate - 0.000000
I0521 18:33:21.032] operation "inspect_image": total - 73; error rate - 0.150685; timeout rate - 0.000000
I0521 18:33:21.032] operation "list_containers": total - 2489; error rate - 0.000000; timeout rate - 0.000000
I0521 18:33:21.033] 
I0521 18:33:21.033] 
I0521 18:33:21.033] 
I0521 18:33:21.033] • [SLOW TEST:657.473 seconds]
I0521 18:33:21.033] [sig-node] Resource-usage [Serial] [Slow]
I0521 18:33:21.033] _output/local/go/src/k8s.io/kubernetes/test/e2e_node/framework.go:22
... skipping 2 lines ...
I0521 18:33:21.034]     resource tracking for 10 pods per node
I0521 18:33:21.034]     _output/local/go/src/k8s.io/kubernetes/test/e2e_node/resource_usage_test.go:84
I0521 18:33:21.034] ------------------------------
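The per-operation error and timeout rates in the runtime metrics above are simple ratios of failures to total invocations; the NaN reported for operation "info" just means that operation was never invoked, so the ratio is 0/0. A minimal sketch of the arithmetic (the function name `rate` is ours, not the tooling's):

```go
package main

import (
	"fmt"
	"math"
)

// rate mirrors how the per-operation rates above appear to be computed:
// failing (or timed-out) calls divided by total invocations. When an
// operation was never invoked the division is 0/0, which yields NaN —
// exactly what operation "info" reports with total 0.
func rate(count, total float64) float64 {
	return count / total
}

func main() {
	fmt.Printf("%.6f\n", rate(2, 192)) // 0.010417, as for "inspect_container"
	fmt.Printf("%.6f\n", rate(11, 73)) // 0.150685, as for "inspect_image"
	fmt.Println(math.IsNaN(rate(0, 0)))
}
```

With these inputs the ratios reproduce the logged values for "inspect_container" (2/192 ≈ 0.010417) and "inspect_image" (11/73 ≈ 0.150685) to six decimal places.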
I0521 18:33:21.034] SSSSSSSSSSSSSSSSSSSSSS
I0521 18:33:21.034] ------------------------------
I0521 18:33:21.034] [sig-api-machinery] Secrets 
I0521 18:33:21.035]   should fail to create secret due to empty secret key [Conformance]
I0521 18:33:21.035]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:696
I0521 18:33:21.035] [BeforeEach] [sig-api-machinery] Secrets
I0521 18:33:21.035]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149
I0521 18:33:21.035] STEP: Creating a kubernetes client
I0521 18:33:21.035] STEP: Building a namespace api object, basename secrets
I0521 18:33:21.035] May 21 18:28:14.016: INFO: Skipping waiting for service account
I0521 18:33:21.035] [It] should fail to create secret due to empty secret key [Conformance]
I0521 18:33:21.035]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:696
I0521 18:33:21.036] STEP: Creating projection with secret that has name secret-emptykey-test-21427b9b-60b7-4cb7-a301-8dc05b9ab8b3
I0521 18:33:21.036] [AfterEach] [sig-api-machinery] Secrets
I0521 18:33:21.036]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
I0521 18:33:21.036] May 21 18:28:14.018: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0521 18:33:21.036] STEP: Destroying namespace "secrets-3687" for this suite.
I0521 18:33:21.036] May 21 18:28:20.027: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
I0521 18:33:21.036] May 21 18:28:20.078: INFO: namespace secrets-3687 deletion completed in 6.05753162s
I0521 18:33:21.036] 
I0521 18:33:21.036] • [SLOW TEST:6.072 seconds]
I0521 18:33:21.036] [sig-api-machinery] Secrets
I0521 18:33:21.037] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:32
I0521 18:33:21.037]   should fail to create secret due to empty secret key [Conformance]
I0521 18:33:21.037]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:696
I0521 18:33:21.037] ------------------------------
I0521 18:33:21.037] SSSS
I0521 18:33:21.037] ------------------------------
I0521 18:33:21.037] [k8s.io] Probing container 
I0521 18:33:21.037]   should be restarted with a local redirect http liveness probe
... skipping 92 lines ...
I0521 18:33:21.052]   should *not* be restarted with a non-local redirect http liveness probe
I0521 18:33:21.053]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:246
I0521 18:33:21.053] ------------------------------
I0521 18:33:21.053] SSSSI0521 18:33:06.684176    1304 e2e_node_suite_test.go:186] Stopping node services...
I0521 18:33:21.053] I0521 18:33:06.684188    1304 server.go:257] Kill server "services"
I0521 18:33:21.053] I0521 18:33:06.684197    1304 server.go:294] Killing process 1752 (services) with -TERM
I0521 18:33:21.053] E0521 18:33:06.775382    1304 services.go:89] Failed to stop services: error stopping "services": waitid: no child processes
I0521 18:33:21.053] I0521 18:33:06.775403    1304 server.go:257] Kill server "kubelet"
I0521 18:33:21.053] I0521 18:33:06.785102    1304 services.go:148] Fetching log files...
I0521 18:33:21.053] I0521 18:33:06.785175    1304 services.go:157] Get log file "kern.log" with journalctl command [-k].
I0521 18:33:21.054] I0521 18:33:06.874732    1304 services.go:157] Get log file "cloud-init.log" with journalctl command [-u cloud*].
I0521 18:33:21.054] I0521 18:33:07.366531    1304 services.go:157] Get log file "docker.log" with journalctl command [-u docker].
I0521 18:33:21.054] I0521 18:33:07.380240    1304 services.go:157] Get log file "kubelet.log" with journalctl command [-u kubelet-20190521T172155.service].
I0521 18:33:21.054] I0521 18:33:09.019054    1304 e2e_node_suite_test.go:191] Tests Finished
I0521 18:33:21.054] 
I0521 18:33:21.054] 
I0521 18:33:21.054] Summarizing 4 Failures:
I0521 18:33:21.054] 
I0521 18:33:21.055] [Fail] [sig-storage] EmptyDir volumes [It] pod should support shared volumes between containers [Conformance] 
I0521 18:33:21.055] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2218
I0521 18:33:21.055] 
I0521 18:33:21.055] [Fail] [sig-node] Node Performance Testing [Serial] [Slow] Run node performance testing with pre-defined workloads [It] NAS parallel benchmark (NPB) suite - Embarrassingly Parallel (EP) workload 
I0521 18:33:21.055] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:112
I0521 18:33:21.055] 
I0521 18:33:21.055] [Fail] [sig-node] Node Performance Testing [Serial] [Slow] Run node performance testing with pre-defined workloads [It] TensorFlow workload 
I0521 18:33:21.056] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:112
I0521 18:33:21.056] 
I0521 18:33:21.056] [Fail] [sig-node] Node Performance Testing [Serial] [Slow] Run node performance testing with pre-defined workloads [It] NAS parallel benchmark (NPB) suite - Integer Sort (IS) workload 
I0521 18:33:21.056] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:112
I0521 18:33:21.056] 
I0521 18:33:21.056] Ran 25 of 303 Specs in 4256.818 seconds
I0521 18:33:21.056] FAIL! -- 21 Passed | 4 Failed | 0 Pending | 278 Skipped
I0521 18:33:21.056] --- FAIL: TestE2eNode (4256.84s)
I0521 18:33:21.056] FAIL
I0521 18:33:21.056] 
I0521 18:33:21.057] Ginkgo ran 1 suite in 1h10m58.731670223s
I0521 18:33:21.057] Test Suite Failed
I0521 18:33:21.057] 
I0521 18:33:21.057] Failure Finished Test Suite on Host tmp-node-e2e-eceb20b6-cos-stable-60-9592-84-0
I0521 18:33:21.057] command [ssh -o UserKnownHostsFile=/dev/null -o IdentitiesOnly=yes -o CheckHostIP=no -o StrictHostKeyChecking=no -o ServerAliveInterval=30 -o LogLevel=ERROR -i /workspace/.ssh/google_compute_engine prow@35.233.134.213 -- sudo sh -c 'cd /tmp/node-e2e-20190521T172155 && timeout -k 30s 18000.000000s ./ginkgo --nodes=1 --skip="\[Flaky\]|\[NodeConformance\]|\[NodeFeature:.+\]|\[NodeSpecialFeature:.+\]|\[NodeAlphaFeature:.+\]|\[Legacy:.+\]|\[Benchmark\]" ./e2e_node.test -- --system-spec-name= --system-spec-file= --extra-envs= --logtostderr --v 4 --node-name=tmp-node-e2e-eceb20b6-cos-stable-60-9592-84-0 --report-dir=/tmp/node-e2e-20190521T172155/results --report-prefix=cos-stable2 --image-description="cos-stable-60-9592-84-0" --kubelet-flags=--experimental-mounter-path=/tmp/node-e2e-20190521T172155/mounter --kubelet-flags=--experimental-kernel-memcg-notification=true --kubelet-flags="--cgroups-per-qos=true --cgroup-root=/"'] failed with error: exit status 1
I0521 18:33:21.058] <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<
I0521 18:33:21.058] <                              FINISH TEST                               <
I0521 18:33:21.058] <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<
I0521 18:33:21.058] 
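The long `--skip` argument in the ginkgo invocation above is a single regular expression; ginkgo skips any spec whose full description matches it, which is how 278 of 303 specs were filtered out. A quick check of which labels the expression catches (regex copied verbatim from the command; the sample spec names are illustrative):

```go
package main

import (
	"fmt"
	"regexp"
)

// skipRe is the exact --skip expression passed to ginkgo above.
var skipRe = regexp.MustCompile(`\[Flaky\]|\[NodeConformance\]|\[NodeFeature:.+\]|\[NodeSpecialFeature:.+\]|\[NodeAlphaFeature:.+\]|\[Legacy:.+\]|\[Benchmark\]`)

func main() {
	// Matches are skipped; non-matches run.
	fmt.Println(skipRe.MatchString("[k8s.io] Foo should bar [NodeConformance]")) // true: skipped
	fmt.Println(skipRe.MatchString("[sig-node] Resource-usage [Serial] [Slow]")) // false: runs
}
```

Note that `[Serial]` and `[Slow]` are not in the skip list, which is why the slow resource-usage and node-performance specs ran in this job.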
W0521 18:33:21.217] I0521 18:33:21.216984    4243 remote.go:202] Got the system logs from journald; copying it back...
W0521 18:33:22.721] I0521 18:33:22.721039    4243 remote.go:122] Copying test artifacts from "tmp-node-e2e-eceb20b6-ubuntu-gke-1804-d1703-0-v20181113"
W0521 18:33:27.685] I0521 18:33:27.685214    4243 remote.go:197] Test failed unexpectedly. Attempting to retrieve system logs (only works for nodes with journald)
W0521 18:33:29.021] I0521 18:33:29.021357    4243 run_remote.go:718] Deleting instance "tmp-node-e2e-eceb20b6-ubuntu-gke-1804-d1703-0-v20181113"
W0521 18:33:29.061] I0521 18:33:29.060556    4243 remote.go:202] Got the system logs from journald; copying it back...
I0521 18:33:29.449] 
I0521 18:33:29.450] >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
I0521 18:33:29.450] >                              START TEST                                >
I0521 18:33:29.450] >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
... skipping 78 lines ...
I0521 18:33:29.460]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:696
I0521 18:33:29.460] STEP: Creating Pod
I0521 18:33:29.460] STEP: Waiting for the pod running
I0521 18:33:29.461] STEP: Getting the pod
I0521 18:33:29.461] STEP: Reading file content from the nginx-container
I0521 18:33:29.461] May 21 17:23:33.100: INFO: Running ' --server=http://127.0.0.1:8080 exec pod-sharedvolume-c8eebf4c-1d47-49bb-b8d8-6ffa14e49495 -c busybox-main-container --namespace=emptydir-4968 -- cat /usr/share/volumeshare/shareddata.txt'
I0521 18:33:29.461] May 21 17:23:33.100: INFO: Unexpected error occurred: error starting &{ [ --server=http://127.0.0.1:8080 exec pod-sharedvolume-c8eebf4c-1d47-49bb-b8d8-6ffa14e49495 -c busybox-main-container --namespace=emptydir-4968 -- cat /usr/share/volumeshare/shareddata.txt] []  <nil>   [] <nil> <nil> <nil> <nil> <nil> false [0xc000a1e268 0xc000a1e280 0xc000a1e298] [0xc000a1e268 0xc000a1e280 0xc000a1e298] [0xc000a1e278 0xc000a1e290] [0xef22d0 0xef22d0] <nil> <nil>}:
I0521 18:33:29.461] Command stdout:
I0521 18:33:29.461] 
I0521 18:33:29.461] stderr:
I0521 18:33:29.461] 
I0521 18:33:29.461] error:
I0521 18:33:29.462] fork/exec : no such file or directory
I0521 18:33:29.462] [AfterEach] [sig-storage] EmptyDir volumes
I0521 18:33:29.462]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
I0521 18:33:29.462] STEP: Collecting events from namespace "emptydir-4968".
I0521 18:33:29.462] STEP: Found 6 events.
I0521 18:33:29.462] May 21 17:23:33.104: INFO: At 2019-05-21 17:23:30 +0000 UTC - event for pod-sharedvolume-c8eebf4c-1d47-49bb-b8d8-6ffa14e49495: {kubelet tmp-node-e2e-eceb20b6-ubuntu-gke-1804-d1703-0-v20181113} Pulled: Container image "docker.io/library/busybox:1.29" already present on machine
... skipping 4 lines ...
I0521 18:33:29.463] May 21 17:23:33.104: INFO: At 2019-05-21 17:23:30 +0000 UTC - event for pod-sharedvolume-c8eebf4c-1d47-49bb-b8d8-6ffa14e49495: {kubelet tmp-node-e2e-eceb20b6-ubuntu-gke-1804-d1703-0-v20181113} Started: Started container busybox-sub-container
I0521 18:33:29.463] May 21 17:23:33.110: INFO: POD                                                    NODE                                                     PHASE    GRACE  CONDITIONS
I0521 18:33:29.464] May 21 17:23:33.110: INFO: pod-sharedvolume-c8eebf4c-1d47-49bb-b8d8-6ffa14e49495  tmp-node-e2e-eceb20b6-ubuntu-gke-1804-d1703-0-v20181113  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-05-21 17:23:27 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-05-21 17:23:27 +0000 UTC ContainersNotReady containers with unready status: [busybox-sub-container]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-05-21 17:23:27 +0000 UTC ContainersNotReady containers with unready status: [busybox-sub-container]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-05-21 17:23:27 +0000 UTC  }]
I0521 18:33:29.464] May 21 17:23:33.110: INFO: 
I0521 18:33:29.464] May 21 17:23:33.114: INFO: 
I0521 18:33:29.464] Logging node info for node tmp-node-e2e-eceb20b6-ubuntu-gke-1804-d1703-0-v20181113
I0521 18:33:29.468] May 21 17:23:33.116: INFO: Node Info: &Node{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:tmp-node-e2e-eceb20b6-ubuntu-gke-1804-d1703-0-v20181113,GenerateName:,Namespace:,SelfLink:/api/v1/nodes/tmp-node-e2e-eceb20b6-ubuntu-gke-1804-d1703-0-v20181113,UID:d859c948-3bf1-4467-a6bf-9f5aeda3d1e3,ResourceVersion:62,Generation:0,CreationTimestamp:2019-05-21 17:23:26 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{beta.kubernetes.io/arch: amd64,beta.kubernetes.io/os: linux,kubernetes.io/arch: amd64,kubernetes.io/hostname: tmp-node-e2e-eceb20b6-ubuntu-gke-1804-d1703-0-v20181113,kubernetes.io/os: linux,},Annotations:map[string]string{volumes.kubernetes.io/controller-managed-attach-detach: true,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:NodeSpec{PodCIDR:,DoNotUse_ExternalID:,ProviderID:,Unschedulable:false,Taints:[],ConfigSource:nil,},Status:NodeStatus{Capacity:ResourceList{cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{20629221376 0} {<nil>} 20145724Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3872571392 0} {<nil>}  BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{18566299208 0} {<nil>} 18566299208 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3610427392 0} {<nil>}  BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[{MemoryPressure False 2019-05-21 17:23:26 +0000 UTC 2019-05-21 17:23:22 +0000 UTC KubeletHasSufficientMemory kubelet has sufficient memory available} {DiskPressure False 2019-05-21 17:23:26 +0000 UTC 2019-05-21 17:23:22 +0000 UTC KubeletHasNoDiskPressure kubelet has no disk pressure} {PIDPressure False 2019-05-21 17:23:26 +0000 UTC 2019-05-21 17:23:22 +0000 UTC KubeletHasSufficientPID kubelet has sufficient PID 
available} {Ready True 2019-05-21 17:23:26 +0000 UTC 2019-05-21 17:23:22 +0000 UTC KubeletReady kubelet is posting ready status. AppArmor enabled}],Addresses:[{InternalIP 10.138.0.45} {Hostname tmp-node-e2e-eceb20b6-ubuntu-gke-1804-d1703-0-v20181113}],DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:932e3eca46a84fa7d0e2cc30a3a3a5ee,SystemUUID:932E3ECA-46A8-4FA7-D0E2-CC30A3A3A5EE,BootID:b91ee0bf-bae4-4f59-9a40-e748898b9a35,KernelVersion:4.15.0-1023-gcp,OSImage:Ubuntu 18.04.1 LTS,ContainerRuntimeVersion:docker://17.3.2,KubeletVersion:v1.16.0-alpha.0.288+13c11de135833a,KubeProxyVersion:v1.16.0-alpha.0.288+13c11de135833a,OperatingSystem:linux,Architecture:amd64,},Images:[{[gcr.io/kubernetes-e2e-test-images/node-perf/tf-wide-deep-amd64@sha256:80d4564d5ab49ecfea3b20f75cc676d8dfd8b2aca364ed4c1a8a55fbcaaed7f6 gcr.io/kubernetes-e2e-test-images/node-perf/tf-wide-deep-amd64:1.0] 634170972} {[gcr.io/kubernetes-e2e-test-images/volume/gluster@sha256:e2d3308b2d27499d59f120ff46dfc6c4cb307a3f207f02894ecab902583761c9 gcr.io/kubernetes-e2e-test-images/volume/gluster:1.0] 332011484} {[gcr.io/kubernetes-e2e-test-images/volume/nfs@sha256:c2ad734346f608a5f7d69cfded93c4e8094069320657bd372d12ba21dea3ea71 gcr.io/kubernetes-e2e-test-images/volume/nfs:1.0] 225358913} {[k8s.gcr.io/node-problem-detector@sha256:6e9b4a4eaa47f120be61f60573a545844de63401661812e2cfb7ae81a28efd19 k8s.gcr.io/node-problem-detector:v0.6.2] 98707739} {[gcr.io/kubernetes-e2e-test-images/node-perf/npb-is@sha256:9d08dd99565b25af37c990cd4474a4284b27e7ceb3f98328bb481edefedf8aa5 gcr.io/kubernetes-e2e-test-images/node-perf/npb-is:1.0] 96288249} {[gcr.io/kubernetes-e2e-test-images/node-perf/npb-ep@sha256:564314549347619cfcdbe6c7d042a29e133a00e922b37682890fff17ac1a7804 gcr.io/kubernetes-e2e-test-images/node-perf/npb-ep:1.0] 96286449} {[google/cadvisor@sha256:815386ebbe9a3490f38785ab11bda34ec8dacf4634af77b8912832d4f85dca04 google/cadvisor:latest] 69583040} 
{[k8s.gcr.io/nvidia-gpu-device-plugin@sha256:08509a36233c5096bb273a492251a9a5ca28558ab36d74007ca2a9d3f0b61e1d] 18976858} {[nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 nginx:1.14-alpine] 16032814} {[gcr.io/kubernetes-e2e-test-images/audit-proxy@sha256:9cf10c6bb871a9a2a45eb1634ecd36cf0e45ec9bd8ae05bf10bef981ac07cc1b gcr.io/kubernetes-e2e-test-images/audit-proxy:1.0] 13222979} {[gcr.io/kubernetes-e2e-test-images/net@sha256:973f47a88f50ccd7800f6ec300e664461e7c011c2da3a33edf32a73dd9ff9c01 gcr.io/kubernetes-e2e-test-images/net:1.0] 11393460} {[gcr.io/kubernetes-e2e-test-images/ipc-utils@sha256:bb127be3a1ecac0516f672a5e223d94fe6021021534ecb7a02a607a63154c3d8 gcr.io/kubernetes-e2e-test-images/ipc-utils:1.0] 10039224} {[gcr.io/kubernetes-e2e-test-images/hostexec@sha256:90dfe59da029f9e536385037bc64e86cd3d6e55bae613ddbe69e554d79b0639d gcr.io/kubernetes-e2e-test-images/hostexec:1.1] 8490662} {[gcr.io/kubernetes-e2e-test-images/nonewprivs@sha256:10066e9039219449fe3c81f38fe01928f87914150768ab81b62a468e51fa7411 gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0] 6757579} {[gcr.io/kubernetes-e2e-test-images/netexec@sha256:203f0e11dde4baf4b08e27de094890eb3447d807c8b3e990b764b799d3a9e8b7 gcr.io/kubernetes-e2e-test-images/netexec:1.1] 6705349} {[gcr.io/kubernetes-e2e-test-images/serve-hostname@sha256:bab70473a6d8ef65a22625dc9a1b0f0452e811530fdbe77e4408523460177ff1 gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1] 5851985} {[gcr.io/kubernetes-e2e-test-images/liveness@sha256:71c3fc838e0637df570497febafa0ee73bf47176dfd43612de5c55a71230674e gcr.io/kubernetes-e2e-test-images/liveness:1.1] 5829944} {[k8s.gcr.io/stress@sha256:f00aa1ddc963a3164aef741aab0fc05074ea96de6cd7e0d10077cf98dd72d594 k8s.gcr.io/stress:v1] 5494760} {[gcr.io/kubernetes-e2e-test-images/test-webserver@sha256:7f93d6e32798ff28bc6289254d0c2867fe2c849c8e46edc50f8624734309812e gcr.io/kubernetes-e2e-test-images/test-webserver:1.0] 4732240} 
{[gcr.io/kubernetes-e2e-test-images/entrypoint-tester@sha256:ba4681b5299884a3adca70fbde40638373b437a881055ffcd0935b5f43eb15c9 gcr.io/kubernetes-e2e-test-images/entrypoint-tester:1.0] 2729534} {[gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 gcr.io/kubernetes-e2e-test-images/mounttest:1.0] 1563521} {[gcr.io/kubernetes-e2e-test-images/mounttest-user@sha256:17319ca525ee003681fccf7e8c6b1b910ff4f49b653d939ac7f9b6e7c463933d gcr.io/kubernetes-e2e-test-images/mounttest-user:1.0] 1450451} {[busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 busybox:1.29] 1154361} {[k8s.gcr.io/busybox@sha256:4bdd623e848417d96127e16037743f0cd8b528c026e9175e22a84f639eca58ff] 1113554} {[k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea k8s.gcr.io/pause:3.1] 742472} {[gke-nvidia-installer:fixed] 75}],VolumesInUse:[],VolumesAttached:[],Config:&NodeConfigStatus{Assigned:nil,Active:nil,LastKnownGood:nil,Error:,},},}
I0521 18:33:29.468] May 21 17:23:33.117: INFO: 
I0521 18:33:29.468] Logging kubelet events for node tmp-node-e2e-eceb20b6-ubuntu-gke-1804-d1703-0-v20181113
I0521 18:33:29.468] May 21 17:23:33.118: INFO: 
I0521 18:33:29.468] Logging pods the kubelet thinks are on node tmp-node-e2e-eceb20b6-ubuntu-gke-1804-d1703-0-v20181113
I0521 18:33:29.469] May 21 17:23:33.120: INFO: pod-sharedvolume-c8eebf4c-1d47-49bb-b8d8-6ffa14e49495 started at 2019-05-21 17:23:27 +0000 UTC (0+2 container statuses recorded)
I0521 18:33:29.469] May 21 17:23:33.120: INFO: 	Container busybox-main-container ready: true, restart count 0
... skipping 9 lines ...
I0521 18:33:29.470] • Failure [12.193 seconds]
I0521 18:33:29.470] [sig-storage] EmptyDir volumes
I0521 18:33:29.470] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
I0521 18:33:29.470]   pod should support shared volumes between containers [Conformance] [It]
I0521 18:33:29.470]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:696
I0521 18:33:29.470] 
I0521 18:33:29.470]   Unexpected error:
I0521 18:33:29.471]       <*errors.errorString | 0xc0005f0a70>: {
I0521 18:33:29.471]           s: "error starting &{ [ --server=http://127.0.0.1:8080 exec pod-sharedvolume-c8eebf4c-1d47-49bb-b8d8-6ffa14e49495 -c busybox-main-container --namespace=emptydir-4968 -- cat /usr/share/volumeshare/shareddata.txt] []  <nil>   [] <nil> <nil> <nil> <nil> <nil> false [0xc000a1e268 0xc000a1e280 0xc000a1e298] [0xc000a1e268 0xc000a1e280 0xc000a1e298] [0xc000a1e278 0xc000a1e290] [0xef22d0 0xef22d0] <nil> <nil>}:\nCommand stdout:\n\nstderr:\n\nerror:\nfork/exec : no such file or directory",
I0521 18:33:29.471]       }
I0521 18:33:29.471]       error starting &{ [ --server=http://127.0.0.1:8080 exec pod-sharedvolume-c8eebf4c-1d47-49bb-b8d8-6ffa14e49495 -c busybox-main-container --namespace=emptydir-4968 -- cat /usr/share/volumeshare/shareddata.txt] []  <nil>   [] <nil> <nil> <nil> <nil> <nil> false [0xc000a1e268 0xc000a1e280 0xc000a1e298] [0xc000a1e268 0xc000a1e280 0xc000a1e298] [0xc000a1e278 0xc000a1e290] [0xef22d0 0xef22d0] <nil> <nil>}:
I0521 18:33:29.471]       Command stdout:
I0521 18:33:29.471]       
I0521 18:33:29.472]       stderr:
I0521 18:33:29.472]       
I0521 18:33:29.472]       error:
I0521 18:33:29.472]       fork/exec : no such file or directory
I0521 18:33:29.472]   occurred
I0521 18:33:29.472] 
I0521 18:33:29.472]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2218
I0521 18:33:29.472] ------------------------------
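The EmptyDir failure above is not a volume problem: the logged command starts with a bare space (`Running ' --server=...'`), meaning the kubectl binary path resolved to an empty string, and executing an empty path is what produces "fork/exec : no such file or directory". A hypothetical reproduction of that exact error (the helper name `startEmpty` is ours, and this is not the framework's code):

```go
package main

import (
	"fmt"
	"os"
)

// startEmpty execs an empty binary path, mimicking what happened above when
// the framework's kubectl path was unset: the kernel reports ENOENT for the
// path "", and the os package wraps it as a "fork/exec" PathError.
func startEmpty() error {
	proc, err := os.StartProcess("", []string{""}, &os.ProcAttr{})
	if proc != nil {
		proc.Wait()
	}
	return err
}

func main() {
	fmt.Println(startEmpty()) // on Linux: fork/exec : no such file or directory
}
```

The empty stdout/stderr in the failure message are consistent with this: the child process never started, so nothing was ever written.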
I0521 18:33:29.472] SSSSSSSSSS
... skipping 1370 lines ...
I0521 18:33:29.687] I0521 17:27:47.302838    2433 util.go:44] Running readiness check for service "kubelet"
I0521 18:33:29.687] I0521 17:27:48.304666    2433 server.go:182] Initial health check passed for service "kubelet"
I0521 18:33:29.687] I0521 17:27:48.386600    2433 util.go:221] new configuration has taken effect
I0521 18:33:29.688] [It] NAS parallel benchmark (NPB) suite - Embarrassingly Parallel (EP) workload
I0521 18:33:29.688]   _output/local/go/src/k8s.io/kubernetes/test/e2e_node/node_perf_test.go:115
I0521 18:33:29.688] STEP: running the workload and waiting for success
I0521 18:33:29.688] May 21 17:27:50.403: INFO: Unexpected error occurred: pod ran to completion
I0521 18:33:29.688] May 21 17:27:50.415: INFO: Waiting for pod npb-ep-pod to disappear
I0521 18:33:29.688] May 21 17:27:50.418: INFO: Pod npb-ep-pod no longer exists
I0521 18:33:29.688] STEP: running the post test exec from the workload
I0521 18:33:29.688] I0521 17:27:58.319006    2433 server.go:222] Restarting server "kubelet" with restart command
I0521 18:33:29.688] I0521 17:27:58.334645    2433 server.go:171] Running health check for service "kubelet"
I0521 18:33:29.688] I0521 17:27:58.334675    2433 util.go:44] Running readiness check for service "kubelet"
... skipping 5 lines ...
I0521 18:33:29.689] STEP: Found 1 events.
I0521 18:33:29.690] May 21 17:28:00.463: INFO: At 2019-05-21 17:27:48 +0000 UTC - event for npb-ep-pod: {kubelet tmp-node-e2e-eceb20b6-ubuntu-gke-1804-d1703-0-v20181113} OutOfcpu: Node didn't have enough resource: cpu, requested: 15000, used: 0, capacity: 800
I0521 18:33:29.690] May 21 17:28:00.464: INFO: POD  NODE  PHASE  GRACE  CONDITIONS
I0521 18:33:29.690] May 21 17:28:00.464: INFO: 
I0521 18:33:29.690] May 21 17:28:00.468: INFO: 
I0521 18:33:29.690] Logging node info for node tmp-node-e2e-eceb20b6-ubuntu-gke-1804-d1703-0-v20181113
I0521 18:33:29.694] May 21 17:28:00.471: INFO: Node Info: &Node{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:tmp-node-e2e-eceb20b6-ubuntu-gke-1804-d1703-0-v20181113,GenerateName:,Namespace:,SelfLink:/api/v1/nodes/tmp-node-e2e-eceb20b6-ubuntu-gke-1804-d1703-0-v20181113,UID:d859c948-3bf1-4467-a6bf-9f5aeda3d1e3,ResourceVersion:1093,Generation:0,CreationTimestamp:2019-05-21 17:23:26 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{beta.kubernetes.io/arch: amd64,beta.kubernetes.io/os: linux,kubernetes.io/arch: amd64,kubernetes.io/hostname: tmp-node-e2e-eceb20b6-ubuntu-gke-1804-d1703-0-v20181113,kubernetes.io/os: linux,},Annotations:map[string]string{volumes.kubernetes.io/controller-managed-attach-detach: true,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:NodeSpec{PodCIDR:,DoNotUse_ExternalID:,ProviderID:,Unschedulable:false,Taints:[],ConfigSource:&NodeConfigSource{ConfigMap:&ConfigMapNodeConfigSource{Namespace:kube-system,Name:testcfg-9jd6d,UID:,ResourceVersion:,KubeletConfigKey:kubelet,},},},Status:NodeStatus{Capacity:ResourceList{cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{20629221376 0} {<nil>} 20145724Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3872571392 0} {<nil>}  BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{18566299208 0} {<nil>} 18566299208 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3610427392 0} {<nil>}  BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[{MemoryPressure False 2019-05-21 17:27:58 +0000 UTC 2019-05-21 17:23:22 +0000 UTC KubeletHasSufficientMemory kubelet has sufficient memory available} {DiskPressure False 2019-05-21 17:27:58 +0000 UTC 2019-05-21 17:23:22 +0000 UTC KubeletHasNoDiskPressure kubelet has no 
disk pressure} {PIDPressure False 2019-05-21 17:27:58 +0000 UTC 2019-05-21 17:23:22 +0000 UTC KubeletHasSufficientPID kubelet has sufficient PID available} {Ready True 2019-05-21 17:27:58 +0000 UTC 2019-05-21 17:23:22 +0000 UTC KubeletReady kubelet is posting ready status. AppArmor enabled}],Addresses:[{InternalIP 10.138.0.45} {Hostname tmp-node-e2e-eceb20b6-ubuntu-gke-1804-d1703-0-v20181113}],DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:932e3eca46a84fa7d0e2cc30a3a3a5ee,SystemUUID:932E3ECA-46A8-4FA7-D0E2-CC30A3A3A5EE,BootID:b91ee0bf-bae4-4f59-9a40-e748898b9a35,KernelVersion:4.15.0-1023-gcp,OSImage:Ubuntu 18.04.1 LTS,ContainerRuntimeVersion:docker://17.3.2,KubeletVersion:v1.16.0-alpha.0.288+13c11de135833a,KubeProxyVersion:v1.16.0-alpha.0.288+13c11de135833a,OperatingSystem:linux,Architecture:amd64,},Images:[{[gcr.io/kubernetes-e2e-test-images/node-perf/tf-wide-deep-amd64@sha256:80d4564d5ab49ecfea3b20f75cc676d8dfd8b2aca364ed4c1a8a55fbcaaed7f6 gcr.io/kubernetes-e2e-test-images/node-perf/tf-wide-deep-amd64:1.0] 634170972} {[gcr.io/kubernetes-e2e-test-images/volume/gluster@sha256:e2d3308b2d27499d59f120ff46dfc6c4cb307a3f207f02894ecab902583761c9 gcr.io/kubernetes-e2e-test-images/volume/gluster:1.0] 332011484} {[gcr.io/kubernetes-e2e-test-images/volume/nfs@sha256:c2ad734346f608a5f7d69cfded93c4e8094069320657bd372d12ba21dea3ea71 gcr.io/kubernetes-e2e-test-images/volume/nfs:1.0] 225358913} {[k8s.gcr.io/node-problem-detector@sha256:6e9b4a4eaa47f120be61f60573a545844de63401661812e2cfb7ae81a28efd19 k8s.gcr.io/node-problem-detector:v0.6.2] 98707739} {[gcr.io/kubernetes-e2e-test-images/node-perf/npb-is@sha256:9d08dd99565b25af37c990cd4474a4284b27e7ceb3f98328bb481edefedf8aa5 gcr.io/kubernetes-e2e-test-images/node-perf/npb-is:1.0] 96288249} {[gcr.io/kubernetes-e2e-test-images/node-perf/npb-ep@sha256:564314549347619cfcdbe6c7d042a29e133a00e922b37682890fff17ac1a7804 
gcr.io/kubernetes-e2e-test-images/node-perf/npb-ep:1.0] 96286449} {[google/cadvisor@sha256:815386ebbe9a3490f38785ab11bda34ec8dacf4634af77b8912832d4f85dca04 google/cadvisor:latest] 69583040} {[k8s.gcr.io/nvidia-gpu-device-plugin@sha256:08509a36233c5096bb273a492251a9a5ca28558ab36d74007ca2a9d3f0b61e1d] 18976858} {[nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 nginx:1.14-alpine] 16032814} {[gcr.io/kubernetes-e2e-test-images/audit-proxy@sha256:9cf10c6bb871a9a2a45eb1634ecd36cf0e45ec9bd8ae05bf10bef981ac07cc1b gcr.io/kubernetes-e2e-test-images/audit-proxy:1.0] 13222979} {[gcr.io/kubernetes-e2e-test-images/net@sha256:973f47a88f50ccd7800f6ec300e664461e7c011c2da3a33edf32a73dd9ff9c01 gcr.io/kubernetes-e2e-test-images/net:1.0] 11393460} {[gcr.io/kubernetes-e2e-test-images/ipc-utils@sha256:bb127be3a1ecac0516f672a5e223d94fe6021021534ecb7a02a607a63154c3d8 gcr.io/kubernetes-e2e-test-images/ipc-utils:1.0] 10039224} {[gcr.io/kubernetes-e2e-test-images/hostexec@sha256:90dfe59da029f9e536385037bc64e86cd3d6e55bae613ddbe69e554d79b0639d gcr.io/kubernetes-e2e-test-images/hostexec:1.1] 8490662} {[gcr.io/kubernetes-e2e-test-images/nonewprivs@sha256:10066e9039219449fe3c81f38fe01928f87914150768ab81b62a468e51fa7411 gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0] 6757579} {[gcr.io/kubernetes-e2e-test-images/netexec@sha256:203f0e11dde4baf4b08e27de094890eb3447d807c8b3e990b764b799d3a9e8b7 gcr.io/kubernetes-e2e-test-images/netexec:1.1] 6705349} {[gcr.io/kubernetes-e2e-test-images/serve-hostname@sha256:bab70473a6d8ef65a22625dc9a1b0f0452e811530fdbe77e4408523460177ff1 gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1] 5851985} {[gcr.io/kubernetes-e2e-test-images/liveness@sha256:71c3fc838e0637df570497febafa0ee73bf47176dfd43612de5c55a71230674e gcr.io/kubernetes-e2e-test-images/liveness:1.1] 5829944} {[k8s.gcr.io/stress@sha256:f00aa1ddc963a3164aef741aab0fc05074ea96de6cd7e0d10077cf98dd72d594 k8s.gcr.io/stress:v1] 5494760} 
{[gcr.io/kubernetes-e2e-test-images/test-webserver@sha256:7f93d6e32798ff28bc6289254d0c2867fe2c849c8e46edc50f8624734309812e gcr.io/kubernetes-e2e-test-images/test-webserver:1.0] 4732240} {[gcr.io/kubernetes-e2e-test-images/entrypoint-tester@sha256:ba4681b5299884a3adca70fbde40638373b437a881055ffcd0935b5f43eb15c9 gcr.io/kubernetes-e2e-test-images/entrypoint-tester:1.0] 2729534} {[gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 gcr.io/kubernetes-e2e-test-images/mounttest:1.0] 1563521} {[gcr.io/kubernetes-e2e-test-images/mounttest-user@sha256:17319ca525ee003681fccf7e8c6b1b910ff4f49b653d939ac7f9b6e7c463933d gcr.io/kubernetes-e2e-test-images/mounttest-user:1.0] 1450451} {[busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 busybox:1.29] 1154361} {[k8s.gcr.io/busybox@sha256:4bdd623e848417d96127e16037743f0cd8b528c026e9175e22a84f639eca58ff] 1113554} {[k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea k8s.gcr.io/pause:3.1] 742472} {[gke-nvidia-installer:fixed] 75}],VolumesInUse:[],VolumesAttached:[],Config:&NodeConfigStatus{Assigned:&NodeConfigSource{ConfigMap:&ConfigMapNodeConfigSource{Namespace:kube-system,Name:testcfg-9jd6d,UID:38d06fe3-d8fc-4bd8-bb41-c96a0105bd39,ResourceVersion:1082,KubeletConfigKey:kubelet,},},Active:&NodeConfigSource{ConfigMap:&ConfigMapNodeConfigSource{Namespace:kube-system,Name:testcfg-9jd6d,UID:38d06fe3-d8fc-4bd8-bb41-c96a0105bd39,ResourceVersion:1082,KubeletConfigKey:kubelet,},},LastKnownGood:nil,Error:,},},}
I0521 18:33:29.694] May 21 17:28:00.471: INFO: 
I0521 18:33:29.695] Logging kubelet events for node tmp-node-e2e-eceb20b6-ubuntu-gke-1804-d1703-0-v20181113
I0521 18:33:29.695] May 21 17:28:00.473: INFO: 
I0521 18:33:29.695] Logging pods the kubelet thinks are on node tmp-node-e2e-eceb20b6-ubuntu-gke-1804-d1703-0-v20181113
I0521 18:33:29.695] W0521 17:28:00.479884    2433 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
I0521 18:33:29.695] May 21 17:28:00.501: INFO: 
... skipping 8 lines ...
I0521 18:33:29.696] _output/local/go/src/k8s.io/kubernetes/test/e2e_node/framework.go:22
I0521 18:33:29.696]   Run node performance testing with pre-defined workloads
I0521 18:33:29.696]   _output/local/go/src/k8s.io/kubernetes/test/e2e_node/node_perf_test.go:111
I0521 18:33:29.696]     NAS parallel benchmark (NPB) suite - Embarrassingly Parallel (EP) workload [It]
I0521 18:33:29.696]     _output/local/go/src/k8s.io/kubernetes/test/e2e_node/node_perf_test.go:115
I0521 18:33:29.696] 
I0521 18:33:29.697]     Unexpected error:
I0521 18:33:29.697]         <*errors.errorString | 0xc000372f70>: {
I0521 18:33:29.697]             s: "pod ran to completion",
I0521 18:33:29.697]         }
I0521 18:33:29.697]         pod ran to completion
I0521 18:33:29.697]     occurred
I0521 18:33:29.697] 
I0521 18:33:29.697]     /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:112
I0521 18:33:29.697] ------------------------------
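The "pod ran to completion" failure above is a symptom, not the root cause: the OutOfcpu event later in this log shows the workload pod requested 15000 millicores of CPU on a node with far less capacity, so the kubelet rejected it at admission and the pod went terminal before it could ever run. The sketch below is a hypothetical illustration of that admission check (the function name and signature are assumptions, not the kubelet's actual code); the error message format matches the OutOfcpu events recorded in this log.

```go
package main

import "fmt"

// admitPod is a hypothetical sketch of the kubelet's CPU admission check
// seen in the OutOfcpu events in this log: a pod requesting more CPU (in
// millicores) than the node has available is rejected before it can run.
func admitPod(requestedMilliCPU, usedMilliCPU, capacityMilliCPU int64) error {
	if requestedMilliCPU > capacityMilliCPU-usedMilliCPU {
		return fmt.Errorf("Node didn't have enough resource: cpu, requested: %d, used: %d, capacity: %d",
			requestedMilliCPU, usedMilliCPU, capacityMilliCPU)
	}
	return nil
}

func main() {
	// Values taken from the NPB EP failure's event: requested 15000m CPU
	// on a node reporting only 800m available.
	if err := admitPod(15000, 0, 800); err != nil {
		fmt.Println("OutOfcpu:", err)
	}
}
```

A rejected pod transitions straight to a terminal phase, which is why the test framework's wait-for-running loop reports "pod ran to completion" instead of an admission error.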
I0521 18:33:29.697] SSSSS
I0521 18:33:29.697] ------------------------------
I0521 18:33:29.697] [sig-storage] ConfigMap 
I0521 18:33:29.698]   Should fail non-optional pod creation due to configMap object does not exist [Slow]
I0521 18:33:29.698]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:550
I0521 18:33:29.698] [BeforeEach] [sig-storage] ConfigMap
I0521 18:33:29.698]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149
I0521 18:33:29.698] STEP: Creating a kubernetes client
I0521 18:33:29.698] STEP: Building a namespace api object, basename configmap
I0521 18:33:29.698] May 21 17:28:06.563: INFO: Skipping waiting for service account
I0521 18:33:29.698] [It] Should fail non-optional pod creation due to configMap object does not exist [Slow]
I0521 18:33:29.699]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:550
I0521 18:33:29.699] May 21 17:28:06.565: INFO: Couldn't get node TTL annotation (using default value of 0): No TTL annotation found on the node
I0521 18:33:29.699] STEP: Creating the pod
I0521 18:33:29.699] [AfterEach] [sig-storage] ConfigMap
I0521 18:33:29.699]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
I0521 18:33:29.699] May 21 17:33:06.585: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0521 18:33:29.699] STEP: Destroying namespace "configmap-7945" for this suite.
I0521 18:33:29.699] May 21 17:33:28.593: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
I0521 18:33:29.699] May 21 17:33:28.635: INFO: namespace configmap-7945 deletion completed in 22.048061581s
I0521 18:33:29.700] 
I0521 18:33:29.700] • [SLOW TEST:322.077 seconds]
I0521 18:33:29.700] [sig-storage] ConfigMap
I0521 18:33:29.700] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
I0521 18:33:29.700]   Should fail non-optional pod creation due to configMap object does not exist [Slow]
I0521 18:33:29.700]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:550
I0521 18:33:29.700] ------------------------------
I0521 18:33:29.700] [sig-node] RuntimeClass 
I0521 18:33:29.700]   should reject a Pod requesting a non-existent RuntimeClass
I0521 18:33:29.700]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtimeclass.go:46
I0521 18:33:29.701] [BeforeEach] [sig-node] RuntimeClass
... skipping 16 lines ...
I0521 18:33:29.702]   should reject a Pod requesting a non-existent RuntimeClass
I0521 18:33:29.702]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtimeclass.go:46
I0521 18:33:29.702] ------------------------------
I0521 18:33:29.703] SSSSSS
I0521 18:33:29.703] ------------------------------
I0521 18:33:29.703] [sig-storage] Projected configMap 
I0521 18:33:29.703]   Should fail non-optional pod creation due to configMap object does not exist [Slow]
I0521 18:33:29.703]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:491
I0521 18:33:29.703] [BeforeEach] [sig-storage] Projected configMap
I0521 18:33:29.703]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149
I0521 18:33:29.703] STEP: Creating a kubernetes client
I0521 18:33:29.703] STEP: Building a namespace api object, basename projected
I0521 18:33:29.703] May 21 17:33:52.719: INFO: Skipping waiting for service account
I0521 18:33:29.704] [It] Should fail non-optional pod creation due to configMap object does not exist [Slow]
I0521 18:33:29.704]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:491
I0521 18:33:29.704] May 21 17:33:52.722: INFO: Couldn't get node TTL annotation (using default value of 0): No TTL annotation found on the node
I0521 18:33:29.704] STEP: Creating the pod
I0521 18:33:29.704] [AfterEach] [sig-storage] Projected configMap
I0521 18:33:29.704]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
I0521 18:33:29.704] May 21 17:38:52.745: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0521 18:33:29.704] STEP: Destroying namespace "projected-8223" for this suite.
I0521 18:33:29.704] May 21 17:39:14.753: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
I0521 18:33:29.705] May 21 17:39:14.801: INFO: namespace projected-8223 deletion completed in 22.05477138s
I0521 18:33:29.705] 
I0521 18:33:29.705] • [SLOW TEST:322.087 seconds]
I0521 18:33:29.705] [sig-storage] Projected configMap
I0521 18:33:29.705] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
I0521 18:33:29.705]   Should fail non-optional pod creation due to configMap object does not exist [Slow]
I0521 18:33:29.705]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:491
I0521 18:33:29.705] ------------------------------
I0521 18:33:29.706] SSSSSSSS
I0521 18:33:29.706] ------------------------------
I0521 18:33:29.706] [sig-node] Node Performance Testing [Serial] [Slow] Run node performance testing with pre-defined workloads 
I0521 18:33:29.706]   TensorFlow workload
... skipping 12 lines ...
I0521 18:33:29.707] I0521 17:39:19.352308    2433 util.go:44] Running readiness check for service "kubelet"
I0521 18:33:29.707] I0521 17:39:19.866949    2433 util.go:221] new configuration has taken effect
I0521 18:33:29.708] [It] TensorFlow workload
I0521 18:33:29.708]   _output/local/go/src/k8s.io/kubernetes/test/e2e_node/node_perf_test.go:124
I0521 18:33:29.708] STEP: running the workload and waiting for success
I0521 18:33:29.708] I0521 17:39:20.354239    2433 server.go:182] Initial health check passed for service "kubelet"
I0521 18:33:29.708] May 21 17:39:21.894: INFO: Unexpected error occurred: pod ran to completion
I0521 18:33:29.708] May 21 17:39:21.904: INFO: Waiting for pod tensorflow-wide-deep-pod to disappear
I0521 18:33:29.708] May 21 17:39:21.906: INFO: Pod tensorflow-wide-deep-pod no longer exists
I0521 18:33:29.708] STEP: running the post test exec from the workload
I0521 18:33:29.708] I0521 17:39:31.368231    2433 server.go:222] Restarting server "kubelet" with restart command
I0521 18:33:29.708] I0521 17:39:31.383444    2433 server.go:171] Running health check for service "kubelet"
I0521 18:33:29.709] I0521 17:39:31.383468    2433 util.go:44] Running readiness check for service "kubelet"
... skipping 4 lines ...
I0521 18:33:29.709] STEP: Found 1 events.
I0521 18:33:29.709] May 21 17:39:31.946: INFO: At 2019-05-21 17:39:20 +0000 UTC - event for tensorflow-wide-deep-pod: {kubelet tmp-node-e2e-eceb20b6-ubuntu-gke-1804-d1703-0-v20181113} OutOfcpu: Node didn't have enough resource: cpu, requested: 15000, used: 0, capacity: 800
I0521 18:33:29.709] May 21 17:39:31.948: INFO: POD  NODE  PHASE  GRACE  CONDITIONS
I0521 18:33:29.710] May 21 17:39:31.948: INFO: 
I0521 18:33:29.710] May 21 17:39:31.951: INFO: 
I0521 18:33:29.710] Logging node info for node tmp-node-e2e-eceb20b6-ubuntu-gke-1804-d1703-0-v20181113
I0521 18:33:29.714] May 21 17:39:31.952: INFO: Node Info: &Node{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:tmp-node-e2e-eceb20b6-ubuntu-gke-1804-d1703-0-v20181113,GenerateName:,Namespace:,SelfLink:/api/v1/nodes/tmp-node-e2e-eceb20b6-ubuntu-gke-1804-d1703-0-v20181113,UID:d859c948-3bf1-4467-a6bf-9f5aeda3d1e3,ResourceVersion:1331,Generation:0,CreationTimestamp:2019-05-21 17:23:26 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{beta.kubernetes.io/arch: amd64,beta.kubernetes.io/os: linux,kubernetes.io/arch: amd64,kubernetes.io/hostname: tmp-node-e2e-eceb20b6-ubuntu-gke-1804-d1703-0-v20181113,kubernetes.io/os: linux,},Annotations:map[string]string{volumes.kubernetes.io/controller-managed-attach-detach: true,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:NodeSpec{PodCIDR:,DoNotUse_ExternalID:,ProviderID:,Unschedulable:false,Taints:[],ConfigSource:&NodeConfigSource{ConfigMap:&ConfigMapNodeConfigSource{Namespace:kube-system,Name:testcfg-t42fq,UID:,ResourceVersion:,KubeletConfigKey:kubelet,},},},Status:NodeStatus{Capacity:ResourceList{cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{20629221376 0} {<nil>} 20145724Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3872571392 0} {<nil>}  BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{18566299208 0} {<nil>} 18566299208 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3610427392 0} {<nil>}  BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[{MemoryPressure False 2019-05-21 17:39:31 +0000 UTC 2019-05-21 17:23:22 +0000 UTC KubeletHasSufficientMemory kubelet has sufficient memory available} {DiskPressure False 2019-05-21 17:39:31 +0000 UTC 2019-05-21 17:23:22 +0000 UTC KubeletHasNoDiskPressure kubelet has no 
disk pressure} {PIDPressure False 2019-05-21 17:39:31 +0000 UTC 2019-05-21 17:23:22 +0000 UTC KubeletHasSufficientPID kubelet has sufficient PID available} {Ready True 2019-05-21 17:39:31 +0000 UTC 2019-05-21 17:23:22 +0000 UTC KubeletReady kubelet is posting ready status. AppArmor enabled}],Addresses:[{InternalIP 10.138.0.45} {Hostname tmp-node-e2e-eceb20b6-ubuntu-gke-1804-d1703-0-v20181113}],DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:932e3eca46a84fa7d0e2cc30a3a3a5ee,SystemUUID:932E3ECA-46A8-4FA7-D0E2-CC30A3A3A5EE,BootID:b91ee0bf-bae4-4f59-9a40-e748898b9a35,KernelVersion:4.15.0-1023-gcp,OSImage:Ubuntu 18.04.1 LTS,ContainerRuntimeVersion:docker://17.3.2,KubeletVersion:v1.16.0-alpha.0.288+13c11de135833a,KubeProxyVersion:v1.16.0-alpha.0.288+13c11de135833a,OperatingSystem:linux,Architecture:amd64,},Images:[{[gcr.io/kubernetes-e2e-test-images/node-perf/tf-wide-deep-amd64@sha256:80d4564d5ab49ecfea3b20f75cc676d8dfd8b2aca364ed4c1a8a55fbcaaed7f6 gcr.io/kubernetes-e2e-test-images/node-perf/tf-wide-deep-amd64:1.0] 634170972} {[gcr.io/kubernetes-e2e-test-images/volume/gluster@sha256:e2d3308b2d27499d59f120ff46dfc6c4cb307a3f207f02894ecab902583761c9 gcr.io/kubernetes-e2e-test-images/volume/gluster:1.0] 332011484} {[gcr.io/kubernetes-e2e-test-images/volume/nfs@sha256:c2ad734346f608a5f7d69cfded93c4e8094069320657bd372d12ba21dea3ea71 gcr.io/kubernetes-e2e-test-images/volume/nfs:1.0] 225358913} {[k8s.gcr.io/node-problem-detector@sha256:6e9b4a4eaa47f120be61f60573a545844de63401661812e2cfb7ae81a28efd19 k8s.gcr.io/node-problem-detector:v0.6.2] 98707739} {[gcr.io/kubernetes-e2e-test-images/node-perf/npb-is@sha256:9d08dd99565b25af37c990cd4474a4284b27e7ceb3f98328bb481edefedf8aa5 gcr.io/kubernetes-e2e-test-images/node-perf/npb-is:1.0] 96288249} {[gcr.io/kubernetes-e2e-test-images/node-perf/npb-ep@sha256:564314549347619cfcdbe6c7d042a29e133a00e922b37682890fff17ac1a7804 
gcr.io/kubernetes-e2e-test-images/node-perf/npb-ep:1.0] 96286449} {[google/cadvisor@sha256:815386ebbe9a3490f38785ab11bda34ec8dacf4634af77b8912832d4f85dca04 google/cadvisor:latest] 69583040} {[k8s.gcr.io/nvidia-gpu-device-plugin@sha256:08509a36233c5096bb273a492251a9a5ca28558ab36d74007ca2a9d3f0b61e1d] 18976858} {[nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 nginx:1.14-alpine] 16032814} {[gcr.io/kubernetes-e2e-test-images/audit-proxy@sha256:9cf10c6bb871a9a2a45eb1634ecd36cf0e45ec9bd8ae05bf10bef981ac07cc1b gcr.io/kubernetes-e2e-test-images/audit-proxy:1.0] 13222979} {[gcr.io/kubernetes-e2e-test-images/net@sha256:973f47a88f50ccd7800f6ec300e664461e7c011c2da3a33edf32a73dd9ff9c01 gcr.io/kubernetes-e2e-test-images/net:1.0] 11393460} {[gcr.io/kubernetes-e2e-test-images/ipc-utils@sha256:bb127be3a1ecac0516f672a5e223d94fe6021021534ecb7a02a607a63154c3d8 gcr.io/kubernetes-e2e-test-images/ipc-utils:1.0] 10039224} {[gcr.io/kubernetes-e2e-test-images/hostexec@sha256:90dfe59da029f9e536385037bc64e86cd3d6e55bae613ddbe69e554d79b0639d gcr.io/kubernetes-e2e-test-images/hostexec:1.1] 8490662} {[gcr.io/kubernetes-e2e-test-images/nonewprivs@sha256:10066e9039219449fe3c81f38fe01928f87914150768ab81b62a468e51fa7411 gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0] 6757579} {[gcr.io/kubernetes-e2e-test-images/netexec@sha256:203f0e11dde4baf4b08e27de094890eb3447d807c8b3e990b764b799d3a9e8b7 gcr.io/kubernetes-e2e-test-images/netexec:1.1] 6705349} {[gcr.io/kubernetes-e2e-test-images/serve-hostname@sha256:bab70473a6d8ef65a22625dc9a1b0f0452e811530fdbe77e4408523460177ff1 gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1] 5851985} {[gcr.io/kubernetes-e2e-test-images/liveness@sha256:71c3fc838e0637df570497febafa0ee73bf47176dfd43612de5c55a71230674e gcr.io/kubernetes-e2e-test-images/liveness:1.1] 5829944} {[k8s.gcr.io/stress@sha256:f00aa1ddc963a3164aef741aab0fc05074ea96de6cd7e0d10077cf98dd72d594 k8s.gcr.io/stress:v1] 5494760} 
{[gcr.io/kubernetes-e2e-test-images/test-webserver@sha256:7f93d6e32798ff28bc6289254d0c2867fe2c849c8e46edc50f8624734309812e gcr.io/kubernetes-e2e-test-images/test-webserver:1.0] 4732240} {[gcr.io/kubernetes-e2e-test-images/entrypoint-tester@sha256:ba4681b5299884a3adca70fbde40638373b437a881055ffcd0935b5f43eb15c9 gcr.io/kubernetes-e2e-test-images/entrypoint-tester:1.0] 2729534} {[gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 gcr.io/kubernetes-e2e-test-images/mounttest:1.0] 1563521} {[gcr.io/kubernetes-e2e-test-images/mounttest-user@sha256:17319ca525ee003681fccf7e8c6b1b910ff4f49b653d939ac7f9b6e7c463933d gcr.io/kubernetes-e2e-test-images/mounttest-user:1.0] 1450451} {[busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 busybox:1.29] 1154361} {[k8s.gcr.io/busybox@sha256:4bdd623e848417d96127e16037743f0cd8b528c026e9175e22a84f639eca58ff] 1113554} {[k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea k8s.gcr.io/pause:3.1] 742472} {[gke-nvidia-installer:fixed] 75}],VolumesInUse:[],VolumesAttached:[],Config:&NodeConfigStatus{Assigned:&NodeConfigSource{ConfigMap:&ConfigMapNodeConfigSource{Namespace:kube-system,Name:testcfg-t42fq,UID:cf630d1c-5827-4e65-88a4-2b52ad33e8f4,ResourceVersion:1319,KubeletConfigKey:kubelet,},},Active:&NodeConfigSource{ConfigMap:&ConfigMapNodeConfigSource{Namespace:kube-system,Name:testcfg-t42fq,UID:cf630d1c-5827-4e65-88a4-2b52ad33e8f4,ResourceVersion:1319,KubeletConfigKey:kubelet,},},LastKnownGood:&NodeConfigSource{ConfigMap:&ConfigMapNodeConfigSource{Namespace:kube-system,Name:testcfg-9jd6d,UID:38d06fe3-d8fc-4bd8-bb41-c96a0105bd39,ResourceVersion:1082,KubeletConfigKey:kubelet,},},Error:,},},}
I0521 18:33:29.714] May 21 17:39:31.952: INFO: 
I0521 18:33:29.714] Logging kubelet events for node tmp-node-e2e-eceb20b6-ubuntu-gke-1804-d1703-0-v20181113
I0521 18:33:29.714] May 21 17:39:31.954: INFO: 
I0521 18:33:29.714] Logging pods the kubelet thinks is on node tmp-node-e2e-eceb20b6-ubuntu-gke-1804-d1703-0-v20181113
I0521 18:33:29.715] W0521 17:39:31.959066    2433 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
I0521 18:33:29.715] May 21 17:39:31.973: INFO: 
... skipping 9 lines ...
I0521 18:33:29.716] _output/local/go/src/k8s.io/kubernetes/test/e2e_node/framework.go:22
I0521 18:33:29.716]   Run node performance testing with pre-defined workloads
I0521 18:33:29.716]   _output/local/go/src/k8s.io/kubernetes/test/e2e_node/node_perf_test.go:120
I0521 18:33:29.716]     TensorFlow workload [It]
I0521 18:33:29.716]     _output/local/go/src/k8s.io/kubernetes/test/e2e_node/node_perf_test.go:124
I0521 18:33:29.716] 
I0521 18:33:29.716]     Unexpected error:
I0521 18:33:29.716]         <*errors.errorString | 0xc000372f70>: {
I0521 18:33:29.716]             s: "pod ran to completion",
I0521 18:33:29.716]         }
I0521 18:33:29.717]         pod ran to completion
I0521 18:33:29.717]     occurred
I0521 18:33:29.717] 
I0521 18:33:29.717]     /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:112
I0521 18:33:29.717] ------------------------------
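The stack location cited in these failures (test/e2e/framework/pods.go:112) points at the framework's wait-for-running condition: a pod that reaches a terminal phase (Succeeded or Failed) while the test is waiting for Running can never become Running, so the wait aborts with "pod ran to completion". The following is a simplified sketch of that condition, not the framework's actual source; the type and function names are assumptions for illustration.

```go
package main

import (
	"errors"
	"fmt"
)

// PodPhase mirrors the Kubernetes pod lifecycle phases relevant here.
type PodPhase string

const (
	PodPending   PodPhase = "Pending"
	PodRunning   PodPhase = "Running"
	PodSucceeded PodPhase = "Succeeded"
	PodFailed    PodPhase = "Failed"
)

// checkRunning sketches the wait condition: polling stops successfully when
// the pod is Running, and stops with an error when the pod is terminal,
// because a terminal pod can never transition back to Running.
func checkRunning(phase PodPhase) (done bool, err error) {
	switch phase {
	case PodRunning:
		return true, nil
	case PodSucceeded, PodFailed:
		return true, errors.New("pod ran to completion")
	default:
		return false, nil // keep polling
	}
}

func main() {
	// The TensorFlow pod was rejected at admission and went to Failed,
	// producing the error string seen throughout this log.
	_, err := checkRunning(PodFailed)
	fmt.Println(err)
}
```

This explains why every node-perf workload in this run fails with the same opaque error: the real problem is the CPU request versus node capacity, surfaced only in the OutOfcpu events.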
I0521 18:33:29.717] SSSSSSSSSSSSSSSSS
I0521 18:33:29.717] ------------------------------
I0521 18:33:29.717] [sig-storage] Projected secret 
I0521 18:33:29.717]   Should fail non-optional pod creation due to the key in the secret object does not exist [Slow]
I0521 18:33:29.718]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:419
I0521 18:33:29.718] [BeforeEach] [sig-storage] Projected secret
I0521 18:33:29.718]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149
I0521 18:33:29.718] STEP: Creating a kubernetes client
I0521 18:33:29.718] STEP: Building a namespace api object, basename projected
I0521 18:33:29.718] May 21 17:39:38.030: INFO: Skipping waiting for service account
I0521 18:33:29.718] [It] Should fail non-optional pod creation due to the key in the secret object does not exist [Slow]
I0521 18:33:29.718]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:419
I0521 18:33:29.718] May 21 17:39:38.037: INFO: Couldn't get node TTL annotation (using default value of 0): No TTL annotation found on the node
I0521 18:33:29.718] STEP: Creating secret with name s-test-opt-create-255febb2-f222-4f90-8af4-e9ad7722ef5b
I0521 18:33:29.719] STEP: Creating the pod
I0521 18:33:29.719] [AfterEach] [sig-storage] Projected secret
I0521 18:33:29.719]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
... skipping 2 lines ...
I0521 18:33:29.719] May 21 17:45:00.070: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
I0521 18:33:29.719] May 21 17:45:00.114: INFO: namespace projected-9361 deletion completed in 22.050947122s
I0521 18:33:29.719] 
I0521 18:33:29.719] • [SLOW TEST:322.088 seconds]
I0521 18:33:29.719] [sig-storage] Projected secret
I0521 18:33:29.720] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
I0521 18:33:29.720]   Should fail non-optional pod creation due to the key in the secret object does not exist [Slow]
I0521 18:33:29.720]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:419
I0521 18:33:29.720] ------------------------------
I0521 18:33:29.720] SS
I0521 18:33:29.720] ------------------------------
I0521 18:33:29.720] [sig-storage] Projected secret 
I0521 18:33:29.720]   Should fail non-optional pod creation due to secret object does not exist [Slow]
I0521 18:33:29.720]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:409
I0521 18:33:29.720] [BeforeEach] [sig-storage] Projected secret
I0521 18:33:29.721]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149
I0521 18:33:29.721] STEP: Creating a kubernetes client
I0521 18:33:29.721] STEP: Building a namespace api object, basename projected
I0521 18:33:29.721] May 21 17:45:00.119: INFO: Skipping waiting for service account
I0521 18:33:29.721] [It] Should fail non-optional pod creation due to secret object does not exist [Slow]
I0521 18:33:29.721]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:409
I0521 18:33:29.721] May 21 17:45:00.122: INFO: Couldn't get node TTL annotation (using default value of 0): No TTL annotation found on the node
I0521 18:33:29.721] STEP: Creating the pod
I0521 18:33:29.721] [AfterEach] [sig-storage] Projected secret
I0521 18:33:29.722]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
I0521 18:33:29.722] May 21 17:50:00.150: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0521 18:33:29.722] STEP: Destroying namespace "projected-5728" for this suite.
I0521 18:33:29.722] May 21 17:50:22.158: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
I0521 18:33:29.722] May 21 17:50:22.213: INFO: namespace projected-5728 deletion completed in 22.061013877s
I0521 18:33:29.722] 
I0521 18:33:29.722] • [SLOW TEST:322.099 seconds]
I0521 18:33:29.722] [sig-storage] Projected secret
I0521 18:33:29.722] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
I0521 18:33:29.722]   Should fail non-optional pod creation due to secret object does not exist [Slow]
I0521 18:33:29.723]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:409
I0521 18:33:29.723] ------------------------------
I0521 18:33:29.723] SSSS
I0521 18:33:29.723] ------------------------------
I0521 18:33:29.723] [sig-storage] GCP Volumes GlusterFS 
I0521 18:33:29.723]   should be mountable
... skipping 129 lines ...
I0521 18:33:29.737]   when querying /resource/metrics
I0521 18:33:29.737]   _output/local/go/src/k8s.io/kubernetes/test/e2e_node/resource_metrics_test.go:45
I0521 18:33:29.737]     should report resource usage through the v1alpha1 resource metrics api
I0521 18:33:29.737]     _output/local/go/src/k8s.io/kubernetes/test/e2e_node/resource_metrics_test.go:66
I0521 18:33:29.737] ------------------------------
I0521 18:33:29.737] [sig-storage] Projected configMap 
I0521 18:33:29.737]   Should fail non-optional pod creation due to the key in the configMap object does not exist [Slow]
I0521 18:33:29.737]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:501
I0521 18:33:29.738] [BeforeEach] [sig-storage] Projected configMap
I0521 18:33:29.738]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149
I0521 18:33:29.738] STEP: Creating a kubernetes client
I0521 18:33:29.738] STEP: Building a namespace api object, basename projected
I0521 18:33:29.738] May 21 17:53:00.471: INFO: Skipping waiting for service account
I0521 18:33:29.738] [It] Should fail non-optional pod creation due to the key in the configMap object does not exist [Slow]
I0521 18:33:29.738]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:501
I0521 18:33:29.738] May 21 17:53:00.473: INFO: Couldn't get node TTL annotation (using default value of 0): No TTL annotation found on the node
I0521 18:33:29.739] STEP: Creating configMap with name cm-test-opt-create-521ed74e-d9e8-41e6-bf81-58567f337b6e
I0521 18:33:29.739] STEP: Creating the pod
I0521 18:33:29.739] [AfterEach] [sig-storage] Projected configMap
I0521 18:33:29.739]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
... skipping 2 lines ...
I0521 18:33:29.739] May 21 17:58:22.508: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
I0521 18:33:29.739] May 21 17:58:22.548: INFO: namespace projected-2203 deletion completed in 22.047326591s
I0521 18:33:29.739] 
I0521 18:33:29.739] • [SLOW TEST:322.081 seconds]
I0521 18:33:29.740] [sig-storage] Projected configMap
I0521 18:33:29.740] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
I0521 18:33:29.740]   Should fail non-optional pod creation due to the key in the configMap object does not exist [Slow]
I0521 18:33:29.740]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:501
I0521 18:33:29.740] ------------------------------
I0521 18:33:29.740] SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
I0521 18:33:29.740] ------------------------------
I0521 18:33:29.740] [sig-storage] GCP Volumes NFSv4 
I0521 18:33:29.741]   should be mountable for NFSv4
... skipping 71 lines ...
I0521 18:33:29.748] [JustBeforeEach] [sig-node] Node Performance Testing [Serial] [Slow]
I0521 18:33:29.748]   _output/local/go/src/k8s.io/kubernetes/test/e2e_node/node_perf_test.go:65
I0521 18:33:29.748] I0521 17:58:34.687186    2433 util.go:221] new configuration has taken effect
I0521 18:33:29.748] [It] NAS parallel benchmark (NPB) suite - Integer Sort (IS) workload
I0521 18:33:29.748]   _output/local/go/src/k8s.io/kubernetes/test/e2e_node/node_perf_test.go:106
I0521 18:33:29.749] STEP: running the workload and waiting for success
I0521 18:33:29.749] May 21 17:58:36.703: INFO: Unexpected error occurred: pod ran to completion
I0521 18:33:29.749] May 21 17:58:36.712: INFO: Waiting for pod npb-is-pod to disappear
I0521 18:33:29.749] May 21 17:58:36.714: INFO: Pod npb-is-pod no longer exists
I0521 18:33:29.749] STEP: running the post test exec from the workload
I0521 18:33:29.749] I0521 17:58:36.733297    2433 util.go:221] new configuration has taken effect
I0521 18:33:29.749] [AfterEach] [sig-node] Node Performance Testing [Serial] [Slow]
I0521 18:33:29.749]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
I0521 18:33:29.749] STEP: Collecting events from namespace "node-performance-testing-4645".
I0521 18:33:29.749] STEP: Found 1 events.
I0521 18:33:29.750] May 21 17:58:36.736: INFO: At 2019-05-21 17:58:34 +0000 UTC - event for npb-is-pod: {kubelet tmp-node-e2e-eceb20b6-ubuntu-gke-1804-d1703-0-v20181113} OutOfcpu: Node didn't have enough resource: cpu, requested: 16000, used: 0, capacity: 1000
I0521 18:33:29.750] May 21 17:58:36.737: INFO: POD  NODE  PHASE  GRACE  CONDITIONS
I0521 18:33:29.750] May 21 17:58:36.737: INFO: 
I0521 18:33:29.750] May 21 17:58:36.740: INFO: 
I0521 18:33:29.750] Logging node info for node tmp-node-e2e-eceb20b6-ubuntu-gke-1804-d1703-0-v20181113
I0521 18:33:29.754] May 21 17:58:36.742: INFO: Node Info: &Node{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:tmp-node-e2e-eceb20b6-ubuntu-gke-1804-d1703-0-v20181113,GenerateName:,Namespace:,SelfLink:/api/v1/nodes/tmp-node-e2e-eceb20b6-ubuntu-gke-1804-d1703-0-v20181113,UID:d859c948-3bf1-4467-a6bf-9f5aeda3d1e3,ResourceVersion:1715,Generation:0,CreationTimestamp:2019-05-21 17:23:26 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{beta.kubernetes.io/arch: amd64,beta.kubernetes.io/os: linux,kubernetes.io/arch: amd64,kubernetes.io/hostname: tmp-node-e2e-eceb20b6-ubuntu-gke-1804-d1703-0-v20181113,kubernetes.io/os: linux,},Annotations:map[string]string{volumes.kubernetes.io/controller-managed-attach-detach: true,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:NodeSpec{PodCIDR:,DoNotUse_ExternalID:,ProviderID:,Unschedulable:false,Taints:[],ConfigSource:&NodeConfigSource{ConfigMap:&ConfigMapNodeConfigSource{Namespace:kube-system,Name:testcfg-4lmbv,UID:,ResourceVersion:,KubeletConfigKey:kubelet,},},},Status:NodeStatus{Capacity:ResourceList{cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{20629221376 0} {<nil>} 20145724Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3872571392 0} {<nil>}  BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{18566299208 0} {<nil>} 18566299208 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3610427392 0} {<nil>}  BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[{MemoryPressure False 2019-05-21 17:58:33 +0000 UTC 2019-05-21 17:23:22 +0000 UTC KubeletHasSufficientMemory kubelet has sufficient memory available} {DiskPressure False 2019-05-21 17:58:33 +0000 UTC 2019-05-21 17:23:22 +0000 UTC KubeletHasNoDiskPressure kubelet has no 
disk pressure} {PIDPressure False 2019-05-21 17:58:33 +0000 UTC 2019-05-21 17:23:22 +0000 UTC KubeletHasSufficientPID kubelet has sufficient PID available} {Ready True 2019-05-21 17:58:33 +0000 UTC 2019-05-21 17:23:22 +0000 UTC KubeletReady kubelet is posting ready status. AppArmor enabled}],Addresses:[{InternalIP 10.138.0.45} {Hostname tmp-node-e2e-eceb20b6-ubuntu-gke-1804-d1703-0-v20181113}],DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:932e3eca46a84fa7d0e2cc30a3a3a5ee,SystemUUID:932E3ECA-46A8-4FA7-D0E2-CC30A3A3A5EE,BootID:b91ee0bf-bae4-4f59-9a40-e748898b9a35,KernelVersion:4.15.0-1023-gcp,OSImage:Ubuntu 18.04.1 LTS,ContainerRuntimeVersion:docker://17.3.2,KubeletVersion:v1.16.0-alpha.0.288+13c11de135833a,KubeProxyVersion:v1.16.0-alpha.0.288+13c11de135833a,OperatingSystem:linux,Architecture:amd64,},Images:[{[gcr.io/kubernetes-e2e-test-images/node-perf/tf-wide-deep-amd64@sha256:80d4564d5ab49ecfea3b20f75cc676d8dfd8b2aca364ed4c1a8a55fbcaaed7f6 gcr.io/kubernetes-e2e-test-images/node-perf/tf-wide-deep-amd64:1.0] 634170972} {[gcr.io/kubernetes-e2e-test-images/volume/gluster@sha256:e2d3308b2d27499d59f120ff46dfc6c4cb307a3f207f02894ecab902583761c9 gcr.io/kubernetes-e2e-test-images/volume/gluster:1.0] 332011484} {[gcr.io/kubernetes-e2e-test-images/volume/nfs@sha256:c2ad734346f608a5f7d69cfded93c4e8094069320657bd372d12ba21dea3ea71 gcr.io/kubernetes-e2e-test-images/volume/nfs:1.0] 225358913} {[k8s.gcr.io/node-problem-detector@sha256:6e9b4a4eaa47f120be61f60573a545844de63401661812e2cfb7ae81a28efd19 k8s.gcr.io/node-problem-detector:v0.6.2] 98707739} {[gcr.io/kubernetes-e2e-test-images/node-perf/npb-is@sha256:9d08dd99565b25af37c990cd4474a4284b27e7ceb3f98328bb481edefedf8aa5 gcr.io/kubernetes-e2e-test-images/node-perf/npb-is:1.0] 96288249} {[gcr.io/kubernetes-e2e-test-images/node-perf/npb-ep@sha256:564314549347619cfcdbe6c7d042a29e133a00e922b37682890fff17ac1a7804 
gcr.io/kubernetes-e2e-test-images/node-perf/npb-ep:1.0] 96286449} {[google/cadvisor@sha256:815386ebbe9a3490f38785ab11bda34ec8dacf4634af77b8912832d4f85dca04 google/cadvisor:latest] 69583040} {[k8s.gcr.io/nvidia-gpu-device-plugin@sha256:08509a36233c5096bb273a492251a9a5ca28558ab36d74007ca2a9d3f0b61e1d] 18976858} {[nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 nginx:1.14-alpine] 16032814} {[gcr.io/kubernetes-e2e-test-images/audit-proxy@sha256:9cf10c6bb871a9a2a45eb1634ecd36cf0e45ec9bd8ae05bf10bef981ac07cc1b gcr.io/kubernetes-e2e-test-images/audit-proxy:1.0] 13222979} {[gcr.io/kubernetes-e2e-test-images/net@sha256:973f47a88f50ccd7800f6ec300e664461e7c011c2da3a33edf32a73dd9ff9c01 gcr.io/kubernetes-e2e-test-images/net:1.0] 11393460} {[gcr.io/kubernetes-e2e-test-images/ipc-utils@sha256:bb127be3a1ecac0516f672a5e223d94fe6021021534ecb7a02a607a63154c3d8 gcr.io/kubernetes-e2e-test-images/ipc-utils:1.0] 10039224} {[gcr.io/kubernetes-e2e-test-images/hostexec@sha256:90dfe59da029f9e536385037bc64e86cd3d6e55bae613ddbe69e554d79b0639d gcr.io/kubernetes-e2e-test-images/hostexec:1.1] 8490662} {[gcr.io/kubernetes-e2e-test-images/nonewprivs@sha256:10066e9039219449fe3c81f38fe01928f87914150768ab81b62a468e51fa7411 gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0] 6757579} {[gcr.io/kubernetes-e2e-test-images/netexec@sha256:203f0e11dde4baf4b08e27de094890eb3447d807c8b3e990b764b799d3a9e8b7 gcr.io/kubernetes-e2e-test-images/netexec:1.1] 6705349} {[gcr.io/kubernetes-e2e-test-images/serve-hostname@sha256:bab70473a6d8ef65a22625dc9a1b0f0452e811530fdbe77e4408523460177ff1 gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1] 5851985} {[gcr.io/kubernetes-e2e-test-images/liveness@sha256:71c3fc838e0637df570497febafa0ee73bf47176dfd43612de5c55a71230674e gcr.io/kubernetes-e2e-test-images/liveness:1.1] 5829944} {[k8s.gcr.io/stress@sha256:f00aa1ddc963a3164aef741aab0fc05074ea96de6cd7e0d10077cf98dd72d594 k8s.gcr.io/stress:v1] 5494760} 
{[gcr.io/kubernetes-e2e-test-images/test-webserver@sha256:7f93d6e32798ff28bc6289254d0c2867fe2c849c8e46edc50f8624734309812e gcr.io/kubernetes-e2e-test-images/test-webserver:1.0] 4732240} {[gcr.io/kubernetes-e2e-test-images/entrypoint-tester@sha256:ba4681b5299884a3adca70fbde40638373b437a881055ffcd0935b5f43eb15c9 gcr.io/kubernetes-e2e-test-images/entrypoint-tester:1.0] 2729534} {[gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 gcr.io/kubernetes-e2e-test-images/mounttest:1.0] 1563521} {[gcr.io/kubernetes-e2e-test-images/mounttest-user@sha256:17319ca525ee003681fccf7e8c6b1b910ff4f49b653d939ac7f9b6e7c463933d gcr.io/kubernetes-e2e-test-images/mounttest-user:1.0] 1450451} {[busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 busybox:1.29] 1154361} {[k8s.gcr.io/busybox@sha256:4bdd623e848417d96127e16037743f0cd8b528c026e9175e22a84f639eca58ff] 1113554} {[k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea k8s.gcr.io/pause:3.1] 742472} {[gke-nvidia-installer:fixed] 75}],VolumesInUse:[],VolumesAttached:[],Config:&NodeConfigStatus{Assigned:&NodeConfigSource{ConfigMap:&ConfigMapNodeConfigSource{Namespace:kube-system,Name:testcfg-t42fq,UID:cf630d1c-5827-4e65-88a4-2b52ad33e8f4,ResourceVersion:1319,KubeletConfigKey:kubelet,},},Active:&NodeConfigSource{ConfigMap:&ConfigMapNodeConfigSource{Namespace:kube-system,Name:testcfg-t42fq,UID:cf630d1c-5827-4e65-88a4-2b52ad33e8f4,ResourceVersion:1319,KubeletConfigKey:kubelet,},},LastKnownGood:&NodeConfigSource{ConfigMap:&ConfigMapNodeConfigSource{Namespace:kube-system,Name:testcfg-t42fq,UID:cf630d1c-5827-4e65-88a4-2b52ad33e8f4,ResourceVersion:1319,KubeletConfigKey:kubelet,},},Error:,},},}
I0521 18:33:29.754] May 21 17:58:36.742: INFO: 
I0521 18:33:29.754] Logging kubelet events for node tmp-node-e2e-eceb20b6-ubuntu-gke-1804-d1703-0-v20181113
I0521 18:33:29.754] May 21 17:58:36.743: INFO: 
I0521 18:33:29.755] Logging pods the kubelet thinks are on node tmp-node-e2e-eceb20b6-ubuntu-gke-1804-d1703-0-v20181113
I0521 18:33:29.755] W0521 17:58:36.746581    2433 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
I0521 18:33:29.755] May 21 17:58:36.780: INFO: 
... skipping 8 lines ...
I0521 18:33:29.756] _output/local/go/src/k8s.io/kubernetes/test/e2e_node/framework.go:22
I0521 18:33:29.756]   Run node performance testing with pre-defined workloads
I0521 18:33:29.756]   _output/local/go/src/k8s.io/kubernetes/test/e2e_node/node_perf_test.go:102
I0521 18:33:29.756]     NAS parallel benchmark (NPB) suite - Integer Sort (IS) workload [It]
I0521 18:33:29.756]     _output/local/go/src/k8s.io/kubernetes/test/e2e_node/node_perf_test.go:106
I0521 18:33:29.756] 
I0521 18:33:29.756]     Unexpected error:
I0521 18:33:29.756]         <*errors.errorString | 0xc000372f70>: {
I0521 18:33:29.756]             s: "pod ran to completion",
I0521 18:33:29.757]         }
I0521 18:33:29.757]         pod ran to completion
I0521 18:33:29.757]     occurred
I0521 18:33:29.757] 
... skipping 28 lines ...
I0521 18:33:29.760]   should reject a Pod requesting a RuntimeClass with an unconfigured handler
I0521 18:33:29.760]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtimeclass.go:52
I0521 18:33:29.760] ------------------------------
I0521 18:33:29.760] SSSSS
I0521 18:33:29.760] ------------------------------
I0521 18:33:29.760] [sig-storage] ConfigMap 
I0521 18:33:29.760]   Should fail non-optional pod creation due to the key in the configMap object does not exist [Slow]
I0521 18:33:29.760]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:560
I0521 18:33:29.761] [BeforeEach] [sig-storage] ConfigMap
I0521 18:33:29.761]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149
I0521 18:33:29.761] STEP: Creating a kubernetes client
I0521 18:33:29.761] STEP: Building a namespace api object, basename configmap
I0521 18:33:29.761] May 21 17:59:06.902: INFO: Skipping waiting for service account
I0521 18:33:29.761] [It] Should fail non-optional pod creation due to the key in the configMap object does not exist [Slow]
I0521 18:33:29.761]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:560
I0521 18:33:29.761] May 21 17:59:06.904: INFO: Couldn't get node TTL annotation (using default value of 0): No TTL annotation found on the node
I0521 18:33:29.761] STEP: Creating configMap with name cm-test-opt-create-c08a45ed-0cb3-4ea2-bf4e-e23316cda4ba
I0521 18:33:29.762] STEP: Creating the pod
I0521 18:33:29.762] [AfterEach] [sig-storage] ConfigMap
I0521 18:33:29.762]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
... skipping 2 lines ...
I0521 18:33:29.762] May 21 18:04:28.936: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
I0521 18:33:29.762] May 21 18:04:28.982: INFO: namespace configmap-8586 deletion completed in 22.052528755s
I0521 18:33:29.762] 
I0521 18:33:29.762] • [SLOW TEST:322.085 seconds]
I0521 18:33:29.762] [sig-storage] ConfigMap
I0521 18:33:29.762] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
I0521 18:33:29.763]   Should fail non-optional pod creation due to the key in the configMap object does not exist [Slow]
I0521 18:33:29.763]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:560
I0521 18:33:29.763] ------------------------------
I0521 18:33:29.763] SSSSS
I0521 18:33:29.763] ------------------------------
I0521 18:33:29.763] [k8s.io] NodeLease when the NodeLease feature is enabled 
I0521 18:33:29.763]   the kubelet should report node status infrequently
... skipping 43 lines ...
I0521 18:33:29.768]     the kubelet should report node status infrequently
I0521 18:33:29.768]     /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/node_lease.go:87
I0521 18:33:29.768] ------------------------------
I0521 18:33:29.769] SSSSSS
I0521 18:33:29.769] ------------------------------
I0521 18:33:29.769] [sig-node] ConfigMap 
I0521 18:33:29.769]   should fail to create ConfigMap with empty key [Conformance]
I0521 18:33:29.769]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:696
I0521 18:33:29.769] [BeforeEach] [sig-node] ConfigMap
I0521 18:33:29.769]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149
I0521 18:33:29.769] STEP: Creating a kubernetes client
I0521 18:33:29.769] STEP: Building a namespace api object, basename configmap
I0521 18:33:29.770] May 21 18:04:51.054: INFO: Skipping waiting for service account
I0521 18:33:29.770] [It] should fail to create ConfigMap with empty key [Conformance]
I0521 18:33:29.770]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:696
I0521 18:33:29.770] STEP: Creating configMap that has name configmap-test-emptyKey-89f4fe19-5e29-434a-b693-598071a613b2
I0521 18:33:29.770] [AfterEach] [sig-node] ConfigMap
I0521 18:33:29.770]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
I0521 18:33:29.770] May 21 18:04:51.106: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0521 18:33:29.770] STEP: Destroying namespace "configmap-196" for this suite.
I0521 18:33:29.770] May 21 18:04:57.116: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
I0521 18:33:29.771] May 21 18:04:57.165: INFO: namespace configmap-196 deletion completed in 6.0566636s
I0521 18:33:29.771] 
I0521 18:33:29.771] • [SLOW TEST:6.117 seconds]
I0521 18:33:29.771] [sig-node] ConfigMap
I0521 18:33:29.771] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:32
I0521 18:33:29.771]   should fail to create ConfigMap with empty key [Conformance]
I0521 18:33:29.771]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:696
I0521 18:33:29.771] ------------------------------
I0521 18:33:29.771] SSSSSSSSSSSSS
I0521 18:33:29.771] ------------------------------
I0521 18:33:29.772] [k8s.io] Density [Serial] [Slow] create a batch of pods 
I0521 18:33:29.772]   latency/resource should be within limit when create 10 pods with 0s interval
... skipping 92 lines ...
I0521 18:33:29.783]     latency/resource should be within limit when create 10 pods with 0s interval
I0521 18:33:29.783]     _output/local/go/src/k8s.io/kubernetes/test/e2e_node/density_test.go:100
I0521 18:33:29.783] ------------------------------
I0521 18:33:29.783] SSSSSSSSS
I0521 18:33:29.783] ------------------------------
I0521 18:33:29.783] [sig-storage] Secrets 
I0521 18:33:29.783]   Should fail non-optional pod creation due to the key in the secret object does not exist [Slow]
I0521 18:33:29.783]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:384
I0521 18:33:29.783] [BeforeEach] [sig-storage] Secrets
I0521 18:33:29.784]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149
I0521 18:33:29.784] STEP: Creating a kubernetes client
I0521 18:33:29.784] STEP: Building a namespace api object, basename secrets
I0521 18:33:29.784] May 21 18:06:46.484: INFO: Skipping waiting for service account
I0521 18:33:29.784] [It] Should fail non-optional pod creation due to the key in the secret object does not exist [Slow]
I0521 18:33:29.784]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:384
I0521 18:33:29.784] May 21 18:06:46.486: INFO: Couldn't get node TTL annotation (using default value of 0): No TTL annotation found on the node
I0521 18:33:29.784] STEP: Creating secret with name s-test-opt-create-f4f19abe-5635-4c55-9da5-85496fbf2ddb
I0521 18:33:29.784] STEP: Creating the pod
I0521 18:33:29.784] [AfterEach] [sig-storage] Secrets
I0521 18:33:29.785]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
... skipping 2 lines ...
I0521 18:33:29.785] May 21 18:12:08.522: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
I0521 18:33:29.785] May 21 18:12:08.567: INFO: namespace secrets-9250 deletion completed in 22.053940521s
I0521 18:33:29.785] 
I0521 18:33:29.785] • [SLOW TEST:322.087 seconds]
I0521 18:33:29.785] [sig-storage] Secrets
I0521 18:33:29.785] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
I0521 18:33:29.786]   Should fail non-optional pod creation due to the key in the secret object does not exist [Slow]
I0521 18:33:29.786]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:384
I0521 18:33:29.786] ------------------------------
I0521 18:33:29.786] SSSSSSSSSSSS
I0521 18:33:29.786] ------------------------------
I0521 18:33:29.786] [sig-storage] Secrets 
I0521 18:33:29.786]   Should fail non-optional pod creation due to secret object does not exist [Slow]
I0521 18:33:29.786]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:374
I0521 18:33:29.786] [BeforeEach] [sig-storage] Secrets
I0521 18:33:29.787]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149
I0521 18:33:29.787] STEP: Creating a kubernetes client
I0521 18:33:29.787] STEP: Building a namespace api object, basename secrets
I0521 18:33:29.787] May 21 18:12:08.571: INFO: Skipping waiting for service account
I0521 18:33:29.787] [It] Should fail non-optional pod creation due to secret object does not exist [Slow]
I0521 18:33:29.787]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:374
I0521 18:33:29.787] May 21 18:12:08.573: INFO: Couldn't get node TTL annotation (using default value of 0): No TTL annotation found on the node
I0521 18:33:29.787] STEP: Creating the pod
I0521 18:33:29.787] [AfterEach] [sig-storage] Secrets
I0521 18:33:29.788]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
I0521 18:33:29.788] May 21 18:17:08.597: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0521 18:33:29.788] STEP: Destroying namespace "secrets-7978" for this suite.
I0521 18:33:29.788] May 21 18:17:30.608: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
I0521 18:33:29.788] May 21 18:17:30.651: INFO: namespace secrets-7978 deletion completed in 22.051352046s
I0521 18:33:29.788] 
I0521 18:33:29.788] • [SLOW TEST:322.084 seconds]
I0521 18:33:29.788] [sig-storage] Secrets
I0521 18:33:29.788] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
I0521 18:33:29.789]   Should fail non-optional pod creation due to secret object does not exist [Slow]
I0521 18:33:29.789]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:374
I0521 18:33:29.789] ------------------------------
I0521 18:33:29.789] [sig-node] Resource-usage [Serial] [Slow] regular resource usage tracking 
I0521 18:33:29.789]   resource tracking for 10 pods per node
I0521 18:33:29.789]   _output/local/go/src/k8s.io/kubernetes/test/e2e_node/resource_usage_test.go:84
I0521 18:33:29.789] [BeforeEach] [sig-node] Resource-usage [Serial] [Slow]
... skipping 82 lines ...
I0521 18:33:29.798] STEP: Destroying namespace "resource-usage-7534" for this suite.
I0521 18:33:29.798] May 21 18:28:28.053: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
I0521 18:33:29.798] May 21 18:28:28.094: INFO: namespace resource-usage-7534 deletion completed in 6.046361181s
I0521 18:33:29.798] [AfterEach] [sig-node] Resource-usage [Serial] [Slow]
I0521 18:33:29.798]   _output/local/go/src/k8s.io/kubernetes/test/e2e_node/resource_usage_test.go:58
I0521 18:33:29.799] W0521 18:28:28.096286    2433 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
I0521 18:33:29.799] May 21 18:28:28.118: INFO: runtime operation error metrics:
I0521 18:33:29.799] node "tmp-node-e2e-eceb20b6-ubuntu-gke-1804-d1703-0-v20181113" runtime operation error rate:
I0521 18:33:29.799] operation "inspect_image": total - 77; error rate - 0.142857; timeout rate - 0.000000
I0521 18:33:29.799] operation "start_container": total - 22; error rate - 0.000000; timeout rate - 0.000000
I0521 18:33:29.799] operation "info": total - 0; error rate - NaN; timeout rate - NaN
I0521 18:33:29.799] operation "inspect_container": total - 205; error rate - 0.004878; timeout rate - 0.000000
I0521 18:33:29.799] operation "list_containers": total - 2505; error rate - 0.000000; timeout rate - 0.000000
I0521 18:33:29.799] operation "stop_container": total - 47; error rate - 0.000000; timeout rate - 0.000000
I0521 18:33:29.800] operation "create_container": total - 22; error rate - 0.000000; timeout rate - 0.000000
I0521 18:33:29.800] operation "list_images": total - 91; error rate - 0.000000; timeout rate - 0.000000
I0521 18:33:29.800] operation "remove_container": total - 11; error rate - 0.000000; timeout rate - 0.000000
I0521 18:33:29.800] operation "version": total - 198; error rate - 0.000000; timeout rate - 0.000000
I0521 18:33:29.800] 
I0521 18:33:29.800] 
I0521 18:33:29.800] 
I0521 18:33:29.800] • [SLOW TEST:657.468 seconds]
I0521 18:33:29.800] [sig-node] Resource-usage [Serial] [Slow]
I0521 18:33:29.800] _output/local/go/src/k8s.io/kubernetes/test/e2e_node/framework.go:22
... skipping 2 lines ...
I0521 18:33:29.801]     resource tracking for 10 pods per node
I0521 18:33:29.801]     _output/local/go/src/k8s.io/kubernetes/test/e2e_node/resource_usage_test.go:84
I0521 18:33:29.801] ------------------------------
I0521 18:33:29.801] SSSSSSSSSSSSSSSSSSSSSS
I0521 18:33:29.801] ------------------------------
I0521 18:33:29.801] [sig-api-machinery] Secrets 
I0521 18:33:29.801]   should fail to create secret due to empty secret key [Conformance]
I0521 18:33:29.801]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:696
I0521 18:33:29.801] [BeforeEach] [sig-api-machinery] Secrets
I0521 18:33:29.802]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149
I0521 18:33:29.802] STEP: Creating a kubernetes client
I0521 18:33:29.802] STEP: Building a namespace api object, basename secrets
I0521 18:33:29.802] May 21 18:28:28.124: INFO: Skipping waiting for service account
I0521 18:33:29.802] [It] should fail to create secret due to empty secret key [Conformance]
I0521 18:33:29.802]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:696
I0521 18:33:29.802] STEP: Creating projection with secret that has name secret-emptykey-test-261c0af3-9f5a-4398-bca7-27664c029ae7
I0521 18:33:29.802] [AfterEach] [sig-api-machinery] Secrets
I0521 18:33:29.802]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
I0521 18:33:29.802] May 21 18:28:28.125: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0521 18:33:29.803] STEP: Destroying namespace "secrets-3960" for this suite.
I0521 18:33:29.803] May 21 18:28:34.131: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
I0521 18:33:29.803] May 21 18:28:34.196: INFO: namespace secrets-3960 deletion completed in 6.068801237s
I0521 18:33:29.803] 
I0521 18:33:29.803] • [SLOW TEST:6.076 seconds]
I0521 18:33:29.803] [sig-api-machinery] Secrets
I0521 18:33:29.803] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:32
I0521 18:33:29.803]   should fail to create secret due to empty secret key [Conformance]
I0521 18:33:29.803]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:696
I0521 18:33:29.803] ------------------------------
I0521 18:33:29.804] SSSS
I0521 18:33:29.804] ------------------------------
I0521 18:33:29.804] [k8s.io] Probing container 
I0521 18:33:29.804]   should be restarted with a local redirect http liveness probe
... skipping 103 lines ...
I0521 18:33:29.815] I0521 18:33:18.918903    2433 services.go:157] Get log file "kubelet.log" with journalctl command [-u kubelet-20190521T172155.service].
I0521 18:33:29.815] I0521 18:33:19.977354    2433 e2e_node_suite_test.go:191] Tests Finished
I0521 18:33:29.815] 
I0521 18:33:29.815] 
I0521 18:33:29.815] Summarizing 4 Failures:
I0521 18:33:29.815] 
I0521 18:33:29.815] [Fail] [sig-storage] EmptyDir volumes [It] pod should support shared volumes between containers [Conformance] 
I0521 18:33:29.815] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2218
I0521 18:33:29.815] 
I0521 18:33:29.816] [Fail] [sig-node] Node Performance Testing [Serial] [Slow] Run node performance testing with pre-defined workloads [It] NAS parallel benchmark (NPB) suite - Embarrassingly Parallel (EP) workload 
I0521 18:33:29.816] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:112
I0521 18:33:29.816] 
I0521 18:33:29.816] [Fail] [sig-node] Node Performance Testing [Serial] [Slow] Run node performance testing with pre-defined workloads [It] TensorFlow workload 
I0521 18:33:29.816] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:112
I0521 18:33:29.816] 
I0521 18:33:29.816] [Fail] [sig-node] Node Performance Testing [Serial] [Slow] Run node performance testing with pre-defined workloads [It] NAS parallel benchmark (NPB) suite - Integer Sort (IS) workload 
I0521 18:33:29.817] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:112
I0521 18:33:29.817] 
I0521 18:33:29.817] Ran 25 of 303 Specs in 4268.813 seconds
I0521 18:33:29.817] FAIL! -- 21 Passed | 4 Failed | 0 Pending | 278 Skipped
I0521 18:33:29.817] --- FAIL: TestE2eNode (4268.84s)
I0521 18:33:29.817] FAIL
I0521 18:33:29.817] 
I0521 18:33:29.817] Ginkgo ran 1 suite in 1h11m9.109936247s
I0521 18:33:29.817] Test Suite Failed
I0521 18:33:29.817] 
I0521 18:33:29.817] Failure Finished Test Suite on Host tmp-node-e2e-eceb20b6-ubuntu-gke-1804-d1703-0-v20181113
I0521 18:33:29.818] command [ssh -o UserKnownHostsFile=/dev/null -o IdentitiesOnly=yes -o CheckHostIP=no -o StrictHostKeyChecking=no -o ServerAliveInterval=30 -o LogLevel=ERROR -i /workspace/.ssh/google_compute_engine prow@35.247.11.21 -- sudo sh -c 'cd /tmp/node-e2e-20190521T172155 && timeout -k 30s 18000.000000s ./ginkgo --nodes=1 --skip="\[Flaky\]|\[NodeConformance\]|\[NodeFeature:.+\]|\[NodeSpecialFeature:.+\]|\[NodeAlphaFeature:.+\]|\[Legacy:.+\]|\[Benchmark\]" ./e2e_node.test -- --system-spec-name= --system-spec-file= --extra-envs= --logtostderr --v 4 --node-name=tmp-node-e2e-eceb20b6-ubuntu-gke-1804-d1703-0-v20181113 --report-dir=/tmp/node-e2e-20190521T172155/results --report-prefix=ubuntu --image-description="ubuntu-gke-1804-d1703-0-v20181113" --kubelet-flags=--experimental-kernel-memcg-notification=true --kubelet-flags="--cgroups-per-qos=true --cgroup-root=/"'] failed with error: exit status 1
I0521 18:33:29.818] <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<
I0521 18:33:29.818] <                              FINISH TEST                               <
I0521 18:33:29.818] <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<
I0521 18:33:29.818] 
W0521 18:33:30.503] I0521 18:33:30.502714    4243 remote.go:122] Copying test artifacts from "tmp-node-e2e-eceb20b6-cos-stable-63-10032-71-0"
W0521 18:33:36.502] I0521 18:33:36.501903    4243 run_remote.go:718] Deleting instance "tmp-node-e2e-eceb20b6-cos-stable-63-10032-71-0"
... skipping 82 lines ...
I0521 18:33:36.978]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:696
I0521 18:33:36.978] STEP: Creating Pod
I0521 18:33:36.978] STEP: Waiting for the pod running
I0521 18:33:36.978] STEP: Getting the pod
I0521 18:33:36.978] STEP: Reading file content from the nginx-container
I0521 18:33:36.979] May 21 17:23:28.373: INFO: Running ' --server=http://127.0.0.1:8080 exec pod-sharedvolume-7329aba9-e689-42f8-b869-2bae0556da4e -c busybox-main-container --namespace=emptydir-5070 -- cat /usr/share/volumeshare/shareddata.txt'
I0521 18:33:36.979] May 21 17:23:28.373: INFO: Unexpected error occurred: error starting &{ [ --server=http://127.0.0.1:8080 exec pod-sharedvolume-7329aba9-e689-42f8-b869-2bae0556da4e -c busybox-main-container --namespace=emptydir-5070 -- cat /usr/share/volumeshare/shareddata.txt] []  <nil>   [] <nil> <nil> <nil> <nil> <nil> false [0xc00080f790 0xc00080f7a8 0xc00080f7c0] [0xc00080f790 0xc00080f7a8 0xc00080f7c0] [0xc00080f7a0 0xc00080f7b8] [0xef22d0 0xef22d0] <nil> <nil>}:
I0521 18:33:36.979] Command stdout:
I0521 18:33:36.979] 
I0521 18:33:36.980] stderr:
I0521 18:33:36.980] 
I0521 18:33:36.980] error:
I0521 18:33:36.980] fork/exec : no such file or directory
I0521 18:33:36.980] [AfterEach] [sig-storage] EmptyDir volumes
I0521 18:33:36.980]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
I0521 18:33:36.981] STEP: Collecting events from namespace "emptydir-5070".
I0521 18:33:36.981] STEP: Found 3 events.
I0521 18:33:36.981] May 21 17:23:28.381: INFO: At 2019-05-21 17:23:25 +0000 UTC - event for pod-sharedvolume-7329aba9-e689-42f8-b869-2bae0556da4e: {kubelet tmp-node-e2e-eceb20b6-cos-stable-63-10032-71-0} Pulled: Container image "docker.io/library/busybox:1.29" already present on machine
I0521 18:33:36.981] May 21 17:23:28.381: INFO: At 2019-05-21 17:23:25 +0000 UTC - event for pod-sharedvolume-7329aba9-e689-42f8-b869-2bae0556da4e: {kubelet tmp-node-e2e-eceb20b6-cos-stable-63-10032-71-0} Created: Created container busybox-main-container
I0521 18:33:36.982] May 21 17:23:28.381: INFO: At 2019-05-21 17:23:25 +0000 UTC - event for pod-sharedvolume-7329aba9-e689-42f8-b869-2bae0556da4e: {kubelet tmp-node-e2e-eceb20b6-cos-stable-63-10032-71-0} Started: Started container busybox-main-container
I0521 18:33:36.982] May 21 17:23:28.389: INFO: POD                                                    NODE                                            PHASE    GRACE  CONDITIONS
I0521 18:33:36.983] May 21 17:23:28.389: INFO: pod-sharedvolume-7329aba9-e689-42f8-b869-2bae0556da4e  tmp-node-e2e-eceb20b6-cos-stable-63-10032-71-0  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-05-21 17:23:24 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-05-21 17:23:24 +0000 UTC ContainersNotReady containers with unready status: [busybox-sub-container]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-05-21 17:23:24 +0000 UTC ContainersNotReady containers with unready status: [busybox-sub-container]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-05-21 17:23:24 +0000 UTC  }]
I0521 18:33:36.983] May 21 17:23:28.389: INFO: 
I0521 18:33:36.983] May 21 17:23:28.394: INFO: 
I0521 18:33:36.983] Logging node info for node tmp-node-e2e-eceb20b6-cos-stable-63-10032-71-0
I0521 18:33:36.988] May 21 17:23:28.397: INFO: Node Info: &Node{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:tmp-node-e2e-eceb20b6-cos-stable-63-10032-71-0,GenerateName:,Namespace:,SelfLink:/api/v1/nodes/tmp-node-e2e-eceb20b6-cos-stable-63-10032-71-0,UID:7c8f4774-1cb8-4987-b9fc-bab73af8c4ea,ResourceVersion:72,Generation:0,CreationTimestamp:2019-05-21 17:23:23 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{beta.kubernetes.io/arch: amd64,beta.kubernetes.io/os: linux,kubernetes.io/arch: amd64,kubernetes.io/hostname: tmp-node-e2e-eceb20b6-cos-stable-63-10032-71-0,kubernetes.io/os: linux,},Annotations:map[string]string{volumes.kubernetes.io/controller-managed-attach-detach: true,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:NodeSpec{PodCIDR:,DoNotUse_ExternalID:,ProviderID:,Unschedulable:false,Taints:[],ConfigSource:nil,},Status:NodeStatus{Capacity:ResourceList{cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{16684785664 0} {<nil>}  BinarySI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3885465600 0} {<nil>}  BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{15016307073 0} {<nil>} 15016307073 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3623321600 0} {<nil>}  BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[{MemoryPressure False 2019-05-21 17:23:23 +0000 UTC 2019-05-21 17:23:19 +0000 UTC KubeletHasSufficientMemory kubelet has sufficient memory available} {DiskPressure False 2019-05-21 17:23:23 +0000 UTC 2019-05-21 17:23:19 +0000 UTC KubeletHasNoDiskPressure kubelet has no disk pressure} {PIDPressure False 2019-05-21 17:23:23 +0000 UTC 2019-05-21 17:23:19 +0000 UTC KubeletHasSufficientPID kubelet has sufficient PID available} {Ready True 2019-05-21 17:23:23 +0000 UTC 2019-05-21 17:23:19 +0000 UTC KubeletReady kubelet is posting ready 
status. AppArmor enabled}],Addresses:[{InternalIP 10.138.0.46} {Hostname tmp-node-e2e-eceb20b6-cos-stable-63-10032-71-0}],DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:09422cd5341503b105dfdfd50323167b,SystemUUID:09422CD5-3415-03B1-05DF-DFD50323167B,BootID:c9a0319a-c53d-4040-a5b4-8e5a35e5c1bc,KernelVersion:4.4.86+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:docker://17.3.2,KubeletVersion:v1.16.0-alpha.0.288+13c11de135833a,KubeProxyVersion:v1.16.0-alpha.0.288+13c11de135833a,OperatingSystem:linux,Architecture:amd64,},Images:[{[gcr.io/kubernetes-e2e-test-images/node-perf/tf-wide-deep-amd64@sha256:80d4564d5ab49ecfea3b20f75cc676d8dfd8b2aca364ed4c1a8a55fbcaaed7f6 gcr.io/kubernetes-e2e-test-images/node-perf/tf-wide-deep-amd64:1.0] 634170972} {[gcr.io/kubernetes-e2e-test-images/volume/gluster@sha256:e2d3308b2d27499d59f120ff46dfc6c4cb307a3f207f02894ecab902583761c9 gcr.io/kubernetes-e2e-test-images/volume/gluster:1.0] 332011484} {[gcr.io/kubernetes-e2e-test-images/volume/nfs@sha256:c2ad734346f608a5f7d69cfded93c4e8094069320657bd372d12ba21dea3ea71 gcr.io/kubernetes-e2e-test-images/volume/nfs:1.0] 225358913} {[k8s.gcr.io/node-problem-detector@sha256:6e9b4a4eaa47f120be61f60573a545844de63401661812e2cfb7ae81a28efd19 k8s.gcr.io/node-problem-detector:v0.6.2] 98707739} {[gcr.io/kubernetes-e2e-test-images/node-perf/npb-is@sha256:9d08dd99565b25af37c990cd4474a4284b27e7ceb3f98328bb481edefedf8aa5 gcr.io/kubernetes-e2e-test-images/node-perf/npb-is:1.0] 96288249} {[gcr.io/kubernetes-e2e-test-images/node-perf/npb-ep@sha256:564314549347619cfcdbe6c7d042a29e133a00e922b37682890fff17ac1a7804 gcr.io/kubernetes-e2e-test-images/node-perf/npb-ep:1.0] 96286449} {[google/cadvisor@sha256:815386ebbe9a3490f38785ab11bda34ec8dacf4634af77b8912832d4f85dca04 google/cadvisor:latest] 69583040} {[k8s.gcr.io/nvidia-gpu-device-plugin@sha256:08509a36233c5096bb273a492251a9a5ca28558ab36d74007ca2a9d3f0b61e1d] 18976858} 
{[nginx@sha256:a3a0c4126587884f8d3090efca87f5af075d7e7ac8308cffc09a5a082d5f4760 nginx:1.14-alpine] 16032814} {[gcr.io/kubernetes-e2e-test-images/audit-proxy@sha256:9cf10c6bb871a9a2a45eb1634ecd36cf0e45ec9bd8ae05bf10bef981ac07cc1b gcr.io/kubernetes-e2e-test-images/audit-proxy:1.0] 13222979} {[gcr.io/kubernetes-e2e-test-images/net@sha256:973f47a88f50ccd7800f6ec300e664461e7c011c2da3a33edf32a73dd9ff9c01 gcr.io/kubernetes-e2e-test-images/net:1.0] 11393460} {[gcr.io/kubernetes-e2e-test-images/ipc-utils@sha256:bb127be3a1ecac0516f672a5e223d94fe6021021534ecb7a02a607a63154c3d8 gcr.io/kubernetes-e2e-test-images/ipc-utils:1.0] 10039224} {[gcr.io/kubernetes-e2e-test-images/hostexec@sha256:90dfe59da029f9e536385037bc64e86cd3d6e55bae613ddbe69e554d79b0639d gcr.io/kubernetes-e2e-test-images/hostexec:1.1] 8490662} {[gcr.io/kubernetes-e2e-test-images/nonewprivs@sha256:10066e9039219449fe3c81f38fe01928f87914150768ab81b62a468e51fa7411 gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0] 6757579} {[gcr.io/kubernetes-e2e-test-images/netexec@sha256:203f0e11dde4baf4b08e27de094890eb3447d807c8b3e990b764b799d3a9e8b7 gcr.io/kubernetes-e2e-test-images/netexec:1.1] 6705349} {[gcr.io/kubernetes-e2e-test-images/serve-hostname@sha256:bab70473a6d8ef65a22625dc9a1b0f0452e811530fdbe77e4408523460177ff1 gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1] 5851985} {[gcr.io/kubernetes-e2e-test-images/liveness@sha256:71c3fc838e0637df570497febafa0ee73bf47176dfd43612de5c55a71230674e gcr.io/kubernetes-e2e-test-images/liveness:1.1] 5829944} {[k8s.gcr.io/stress@sha256:f00aa1ddc963a3164aef741aab0fc05074ea96de6cd7e0d10077cf98dd72d594 k8s.gcr.io/stress:v1] 5494760} {[gcr.io/kubernetes-e2e-test-images/test-webserver@sha256:7f93d6e32798ff28bc6289254d0c2867fe2c849c8e46edc50f8624734309812e gcr.io/kubernetes-e2e-test-images/test-webserver:1.0] 4732240} {[gcr.io/kubernetes-e2e-test-images/entrypoint-tester@sha256:ba4681b5299884a3adca70fbde40638373b437a881055ffcd0935b5f43eb15c9 
gcr.io/kubernetes-e2e-test-images/entrypoint-tester:1.0] 2729534} {[gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 gcr.io/kubernetes-e2e-test-images/mounttest:1.0] 1563521} {[gcr.io/kubernetes-e2e-test-images/mounttest-user@sha256:17319ca525ee003681fccf7e8c6b1b910ff4f49b653d939ac7f9b6e7c463933d gcr.io/kubernetes-e2e-test-images/mounttest-user:1.0] 1450451} {[busybox@sha256:e004c2cc521c95383aebb1fb5893719aa7a8eae2e7a71f316a4410784edb00a9 busybox:1.29] 1154361} {[k8s.gcr.io/busybox@sha256:4bdd623e848417d96127e16037743f0cd8b528c026e9175e22a84f639eca58ff] 1113554} {[k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea k8s.gcr.io/pause:3.1] 742472}],VolumesInUse:[],VolumesAttached:[],Config:&NodeConfigStatus{Assigned:nil,Active:nil,LastKnownGood:nil,Error:,},},}
I0521 18:33:36.989] May 21 17:23:28.397: INFO: 
I0521 18:33:36.989] Logging kubelet events for node tmp-node-e2e-eceb20b6-cos-stable-63-10032-71-0
I0521 18:33:36.989] May 21 17:23:28.399: INFO: 
I0521 18:33:36.990] Logging pods the kubelet thinks is on node tmp-node-e2e-eceb20b6-cos-stable-63-10032-71-0
I0521 18:33:36.990] May 21 17:23:28.402: INFO: pod-sharedvolume-7329aba9-e689-42f8-b869-2bae0556da4e started at 2019-05-21 17:23:24 +0000 UTC (0+2 container statuses recorded)
I0521 18:33:36.990] May 21 17:23:28.402: INFO: 	Container busybox-main-container ready: true, restart count 0
... skipping 9 lines ...
I0521 18:33:36.994] • Failure [10.234 seconds]
I0521 18:33:36.994] [sig-storage] EmptyDir volumes
I0521 18:33:36.994] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
I0521 18:33:36.994]   pod should support shared volumes between containers [Conformance] [It]
I0521 18:33:36.994]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:696
I0521 18:33:36.994] 
I0521 18:33:36.994]   Unexpected error:
I0521 18:33:36.995]       <*errors.errorString | 0xc0001944b0>: {
I0521 18:33:36.995]           s: "error starting &{ [ --server=http://127.0.0.1:8080 exec pod-sharedvolume-7329aba9-e689-42f8-b869-2bae0556da4e -c busybox-main-container --namespace=emptydir-5070 -- cat /usr/share/volumeshare/shareddata.txt] []  <nil>   [] <nil> <nil> <nil> <nil> <nil> false [0xc00080f790 0xc00080f7a8 0xc00080f7c0] [0xc00080f790 0xc00080f7a8 0xc00080f7c0] [0xc00080f7a0 0xc00080f7b8] [0xef22d0 0xef22d0] <nil> <nil>}:\nCommand stdout:\n\nstderr:\n\nerror:\nfork/exec : no such file or directory",
I0521 18:33:36.995]       }
I0521 18:33:36.996]       error starting &{ [ --server=http://127.0.0.1:8080 exec pod-sharedvolume-7329aba9-e689-42f8-b869-2bae0556da4e -c busybox-main-container --namespace=emptydir-5070 -- cat /usr/share/volumeshare/shareddata.txt] []  <nil>   [] <nil> <nil> <nil> <nil> <nil> false [0xc00080f790 0xc00080f7a8 0xc00080f7c0] [0xc00080f790 0xc00080f7a8 0xc00080f7c0] [0xc00080f7a0 0xc00080f7b8] [0xef22d0 0xef22d0] <nil> <nil>}:
I0521 18:33:36.996]       Command stdout:
I0521 18:33:36.996]       
I0521 18:33:36.996]       stderr:
I0521 18:33:36.996]       
I0521 18:33:36.996]       error:
I0521 18:33:36.996]       fork/exec : no such file or directory
I0521 18:33:36.997]   occurred
I0521 18:33:36.997] 
I0521 18:33:36.997]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2218
I0521 18:33:36.997] ------------------------------
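Editor's note on the failure above: the dumped `exec.Cmd` struct begins `&{ [ --server=...` , i.e. its `Path` field is empty — the harness never resolved a kubectl binary, so the pod and volume were fine and the test died before `kubectl exec` ever ran. A minimal sketch of that failure mode (the `Args` values are illustrative, copied from the log, not the framework's actual construction code):

```go
package main

import (
	"fmt"
	"os/exec"
)

// runEmptyPath reproduces the error class above: an exec.Cmd whose Path was
// never filled in (here, a kubectl binary the harness failed to locate) fails
// before the child command can run. On the Go release this job was built
// with, the error surfaced as "fork/exec : no such file or directory";
// newer Go releases reject the empty Path earlier with "exec: no command".
func runEmptyPath() error {
	cmd := &exec.Cmd{
		Path: "", // the missing binary path
		Args: []string{"", "--server=http://127.0.0.1:8080"},
	}
	return cmd.Run()
}

func main() {
	fmt.Println(runEmptyPath())
}
```

Either way the returned error is non-nil and the command never executes, which matches the empty stdout/stderr in the failure text.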
I0521 18:33:36.997] SSSSSSSSSS
... skipping 1399 lines ...
I0521 18:33:37.247] I0521 17:27:42.284637    1332 util.go:44] Running readiness check for service "kubelet"
I0521 18:33:37.247] I0521 17:27:42.696171    1332 util.go:221] new configuration has taken effect
I0521 18:33:37.247] [It] NAS parallel benchmark (NPB) suite - Embarrassingly Parallel (EP) workload
I0521 18:33:37.247]   _output/local/go/src/k8s.io/kubernetes/test/e2e_node/node_perf_test.go:115
I0521 18:33:37.247] STEP: running the workload and waiting for success
I0521 18:33:37.248] I0521 17:27:43.286299    1332 server.go:182] Initial health check passed for service "kubelet"
I0521 18:33:37.248] May 21 17:27:44.719: INFO: Unexpected error occurred: pod ran to completion
I0521 18:33:37.248] May 21 17:27:44.728: INFO: Waiting for pod npb-ep-pod to disappear
I0521 18:33:37.248] May 21 17:27:44.731: INFO: Pod npb-ep-pod no longer exists
I0521 18:33:37.248] STEP: running the post test exec from the workload
I0521 18:33:37.248] I0521 17:27:54.301680    1332 server.go:222] Restarting server "kubelet" with restart command
I0521 18:33:37.248] I0521 17:27:54.315963    1332 server.go:171] Running health check for service "kubelet"
I0521 18:33:37.248] I0521 17:27:54.315991    1332 util.go:44] Running readiness check for service "kubelet"
... skipping 4 lines ...
I0521 18:33:37.249] STEP: Found 1 events.
I0521 18:33:37.249] May 21 17:27:54.928: INFO: At 2019-05-21 17:27:43 +0000 UTC - event for npb-ep-pod: {kubelet tmp-node-e2e-eceb20b6-cos-stable-63-10032-71-0} OutOfcpu: Node didn't have enough resource: cpu, requested: 15000, used: 0, capacity: 800
I0521 18:33:37.249] May 21 17:27:54.936: INFO: POD  NODE  PHASE  GRACE  CONDITIONS
I0521 18:33:37.249] May 21 17:27:54.936: INFO: 
I0521 18:33:37.249] May 21 17:27:54.950: INFO: 
I0521 18:33:37.250] Logging node info for node tmp-node-e2e-eceb20b6-cos-stable-63-10032-71-0
I0521 18:33:37.254] May 21 17:27:54.952: INFO: Node Info: &Node{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:tmp-node-e2e-eceb20b6-cos-stable-63-10032-71-0,GenerateName:,Namespace:,SelfLink:/api/v1/nodes/tmp-node-e2e-eceb20b6-cos-stable-63-10032-71-0,UID:7c8f4774-1cb8-4987-b9fc-bab73af8c4ea,ResourceVersion:1093,Generation:0,CreationTimestamp:2019-05-21 17:23:23 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{beta.kubernetes.io/arch: amd64,beta.kubernetes.io/os: linux,kubernetes.io/arch: amd64,kubernetes.io/hostname: tmp-node-e2e-eceb20b6-cos-stable-63-10032-71-0,kubernetes.io/os: linux,},Annotations:map[string]string{volumes.kubernetes.io/controller-managed-attach-detach: true,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:NodeSpec{PodCIDR:,DoNotUse_ExternalID:,ProviderID:,Unschedulable:false,Taints:[],ConfigSource:&NodeConfigSource{ConfigMap:&ConfigMapNodeConfigSource{Namespace:kube-system,Name:testcfg-6m5nz,UID:,ResourceVersion:,KubeletConfigKey:kubelet,},},},Status:NodeStatus{Capacity:ResourceList{cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{16684785664 0} {<nil>}  BinarySI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3885465600 0} {<nil>}  BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{800 -3} {<nil>} 800m DecimalSI},ephemeral-storage: {{15016307073 0} {<nil>} 15016307073 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3623321600 0} {<nil>}  BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[{MemoryPressure False 2019-05-21 17:27:43 +0000 UTC 2019-05-21 17:23:19 +0000 UTC KubeletHasSufficientMemory kubelet has sufficient memory available} {DiskPressure False 2019-05-21 17:27:43 +0000 UTC 2019-05-21 17:23:19 +0000 UTC KubeletHasNoDiskPressure kubelet has no disk pressure} {PIDPressure False 2019-05-21 17:27:43 +0000 UTC 2019-05-21 17:23:19 +0000 UTC KubeletHasSufficientPID 
kubelet has sufficient PID available} {Ready True 2019-05-21 17:27:43 +0000 UTC 2019-05-21 17:23:19 +0000 UTC KubeletReady kubelet is posting ready status. AppArmor enabled}],Addresses:[{InternalIP 10.138.0.46} {Hostname tmp-node-e2e-eceb20b6-cos-stable-63-10032-71-0}],DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:09422cd5341503b105dfdfd50323167b,SystemUUID:09422CD5-3415-03B1-05DF-DFD50323167B,BootID:c9a0319a-c53d-4040-a5b4-8e5a35e5c1bc,KernelVersion:4.4.86+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:docker://17.3.2,KubeletVersion:v1.16.0-alpha.0.288+13c11de135833a,KubeProxyVersion:v1.16.0-alpha.0.288+13c11de135833a,OperatingSystem:linux,Architecture:amd64,},Images:[{[gcr.io/kubernetes-e2e-test-images/node-perf/tf-wide-deep-amd64@sha256:80d4564d5ab49ecfea3b20f75cc676d8dfd8b2aca364ed4c1a8a55fbcaaed7f6 gcr.io/kubernetes-e2e-test-images/node-perf/tf-wide-deep-amd64:1.0] 634170972} {[gcr.io/kubernetes-e2e-test-images/volume/gluster@sha256:e2d3308b2d27499d59f120ff46dfc6c4cb307a3f207f02894ecab902583761c9 gcr.io/kubernetes-e2e-test-images/volume/gluster:1.0] 332011484} {[gcr.io/kubernetes-e2e-test-images/volume/nfs@sha256:c2ad734346f608a5f7d69cfded93c4e8094069320657bd372d12ba21dea3ea71 gcr.io/kubernetes-e2e-test-images/volume/nfs:1.0] 225358913} {[k8s.gcr.io/node-problem-detector@sha256:6e9b4a4eaa47f120be61f60573a545844de63401661812e2cfb7ae81a28efd19 k8s.gcr.io/node-problem-detector:v0.6.2] 98707739} {[gcr.io/kubernetes-e2e-test-images/node-perf/npb-is@sha256:9d08dd99565b25af37c990cd4474a4284b27e7ceb3f98328bb481edefedf8aa5 gcr.io/kubernetes-e2e-test-images/node-perf/npb-is:1.0] 96288249} {[gcr.io/kubernetes-e2e-test-images/node-perf/npb-ep@sha256:564314549347619cfcdbe6c7d042a29e133a00e922b37682890fff17ac1a7804 gcr.io/kubernetes-e2e-test-images/node-perf/npb-ep:1.0] 96286449} {[google/cadvisor@sha256:815386ebbe9a3490f38785ab11bda34ec8dacf4634af77b8912832d4f85dca04 
google/cadvisor:latest] 69583040} {[k8s.gcr.io/nvidia-gpu-device-plugin@sha256:08509a36233c5096bb273a492251a9a5ca28558ab36d74007ca2a9d3f0b61e1d] 18976858} {[nginx@sha256:a3a0c4126587884f8d3090efca87f5af075d7e7ac8308cffc09a5a082d5f4760 nginx:1.14-alpine] 16032814} {[gcr.io/kubernetes-e2e-test-images/audit-proxy@sha256:9cf10c6bb871a9a2a45eb1634ecd36cf0e45ec9bd8ae05bf10bef981ac07cc1b gcr.io/kubernetes-e2e-test-images/audit-proxy:1.0] 13222979} {[gcr.io/kubernetes-e2e-test-images/net@sha256:973f47a88f50ccd7800f6ec300e664461e7c011c2da3a33edf32a73dd9ff9c01 gcr.io/kubernetes-e2e-test-images/net:1.0] 11393460} {[gcr.io/kubernetes-e2e-test-images/ipc-utils@sha256:bb127be3a1ecac0516f672a5e223d94fe6021021534ecb7a02a607a63154c3d8 gcr.io/kubernetes-e2e-test-images/ipc-utils:1.0] 10039224} {[gcr.io/kubernetes-e2e-test-images/hostexec@sha256:90dfe59da029f9e536385037bc64e86cd3d6e55bae613ddbe69e554d79b0639d gcr.io/kubernetes-e2e-test-images/hostexec:1.1] 8490662} {[gcr.io/kubernetes-e2e-test-images/nonewprivs@sha256:10066e9039219449fe3c81f38fe01928f87914150768ab81b62a468e51fa7411 gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0] 6757579} {[gcr.io/kubernetes-e2e-test-images/netexec@sha256:203f0e11dde4baf4b08e27de094890eb3447d807c8b3e990b764b799d3a9e8b7 gcr.io/kubernetes-e2e-test-images/netexec:1.1] 6705349} {[gcr.io/kubernetes-e2e-test-images/serve-hostname@sha256:bab70473a6d8ef65a22625dc9a1b0f0452e811530fdbe77e4408523460177ff1 gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1] 5851985} {[gcr.io/kubernetes-e2e-test-images/liveness@sha256:71c3fc838e0637df570497febafa0ee73bf47176dfd43612de5c55a71230674e gcr.io/kubernetes-e2e-test-images/liveness:1.1] 5829944} {[k8s.gcr.io/stress@sha256:f00aa1ddc963a3164aef741aab0fc05074ea96de6cd7e0d10077cf98dd72d594 k8s.gcr.io/stress:v1] 5494760} {[gcr.io/kubernetes-e2e-test-images/test-webserver@sha256:7f93d6e32798ff28bc6289254d0c2867fe2c849c8e46edc50f8624734309812e gcr.io/kubernetes-e2e-test-images/test-webserver:1.0] 4732240} 
{[gcr.io/kubernetes-e2e-test-images/entrypoint-tester@sha256:ba4681b5299884a3adca70fbde40638373b437a881055ffcd0935b5f43eb15c9 gcr.io/kubernetes-e2e-test-images/entrypoint-tester:1.0] 2729534} {[gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 gcr.io/kubernetes-e2e-test-images/mounttest:1.0] 1563521} {[gcr.io/kubernetes-e2e-test-images/mounttest-user@sha256:17319ca525ee003681fccf7e8c6b1b910ff4f49b653d939ac7f9b6e7c463933d gcr.io/kubernetes-e2e-test-images/mounttest-user:1.0] 1450451} {[busybox@sha256:e004c2cc521c95383aebb1fb5893719aa7a8eae2e7a71f316a4410784edb00a9 busybox:1.29] 1154361} {[k8s.gcr.io/busybox@sha256:4bdd623e848417d96127e16037743f0cd8b528c026e9175e22a84f639eca58ff] 1113554} {[k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea k8s.gcr.io/pause:3.1] 742472}],VolumesInUse:[],VolumesAttached:[],Config:&NodeConfigStatus{Assigned:&NodeConfigSource{ConfigMap:&ConfigMapNodeConfigSource{Namespace:kube-system,Name:testcfg-6m5nz,UID:b45ddf73-866d-47dd-a0f6-434a5fbb0d94,ResourceVersion:1089,KubeletConfigKey:kubelet,},},Active:&NodeConfigSource{ConfigMap:&ConfigMapNodeConfigSource{Namespace:kube-system,Name:testcfg-6m5nz,UID:b45ddf73-866d-47dd-a0f6-434a5fbb0d94,ResourceVersion:1089,KubeletConfigKey:kubelet,},},LastKnownGood:nil,Error:,},},}
I0521 18:33:37.254] May 21 17:27:54.953: INFO: 
I0521 18:33:37.254] Logging kubelet events for node tmp-node-e2e-eceb20b6-cos-stable-63-10032-71-0
I0521 18:33:37.254] May 21 17:27:54.956: INFO: 
I0521 18:33:37.254] Logging pods the kubelet thinks is on node tmp-node-e2e-eceb20b6-cos-stable-63-10032-71-0
I0521 18:33:37.254] W0521 17:27:55.022435    1332 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
I0521 18:33:37.254] May 21 17:27:55.174: INFO: 
... skipping 9 lines ...
I0521 18:33:37.256] _output/local/go/src/k8s.io/kubernetes/test/e2e_node/framework.go:22
I0521 18:33:37.256]   Run node performance testing with pre-defined workloads
I0521 18:33:37.256]   _output/local/go/src/k8s.io/kubernetes/test/e2e_node/node_perf_test.go:111
I0521 18:33:37.256]     NAS parallel benchmark (NPB) suite - Embarrassingly Parallel (EP) workload [It]
I0521 18:33:37.256]     _output/local/go/src/k8s.io/kubernetes/test/e2e_node/node_perf_test.go:115
I0521 18:33:37.257] 
I0521 18:33:37.257]     Unexpected error:
I0521 18:33:37.257]         <*errors.errorString | 0xc000377070>: {
I0521 18:33:37.257]             s: "pod ran to completion",
I0521 18:33:37.257]         }
I0521 18:33:37.257]         pod ran to completion
I0521 18:33:37.257]     occurred
I0521 18:33:37.257] 
I0521 18:33:37.257]     /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:112
I0521 18:33:37.258] ------------------------------
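Editor's note on the NPB-EP failure above: the single recorded event explains it — the workload requested 15 CPUs (15000 millicores) but this one-core node only has 800m allocatable after system reservations, so the kubelet rejected the pod at admission with `OutOfcpu`. A loose sketch of that arithmetic (hypothetical helper, not the kubelet's actual admission code):

```go
package main

import "fmt"

// fitsCPU loosely mimics the kubelet admission check behind the OutOfcpu
// event: a pod fits only if its CPU request plus what running pods already
// use stays within the node's allocatable CPU.
// All quantities are millicores (800m == 0.8 CPU).
func fitsCPU(requestedMilli, usedMilli, allocatableMilli int64) bool {
	return requestedMilli+usedMilli <= allocatableMilli
}

func main() {
	// Values from the event: requested: 15000, used: 0, capacity: 800.
	fmt.Println(fitsCPU(15000, 0, 800)) // false → pod rejected, OutOfcpu
}
```

The same mismatch recurs for the TensorFlow workload later in this log, so the benchmark pods simply cannot be scheduled on this machine shape.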
I0521 18:33:37.258] SSSSS
I0521 18:33:37.258] ------------------------------
I0521 18:33:37.258] [sig-storage] ConfigMap 
I0521 18:33:37.258]   Should fail non-optional pod creation due to configMap object does not exist [Slow]
I0521 18:33:37.258]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:550
I0521 18:33:37.258] [BeforeEach] [sig-storage] ConfigMap
I0521 18:33:37.259]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149
I0521 18:33:37.259] STEP: Creating a kubernetes client
I0521 18:33:37.259] STEP: Building a namespace api object, basename configmap
I0521 18:33:37.259] May 21 17:28:01.264: INFO: Skipping waiting for service account
I0521 18:33:37.259] [It] Should fail non-optional pod creation due to configMap object does not exist [Slow]
I0521 18:33:37.259]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:550
I0521 18:33:37.260] May 21 17:28:01.266: INFO: Couldn't get node TTL annotation (using default value of 0): No TTL annotation found on the node
I0521 18:33:37.260] STEP: Creating the pod
I0521 18:33:37.260] [AfterEach] [sig-storage] ConfigMap
I0521 18:33:37.260]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
I0521 18:33:37.260] May 21 17:33:01.292: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0521 18:33:37.260] STEP: Destroying namespace "configmap-355" for this suite.
I0521 18:33:37.260] May 21 17:33:23.301: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
I0521 18:33:37.260] May 21 17:33:23.353: INFO: namespace configmap-355 deletion completed in 22.058225417s
I0521 18:33:37.260] 
I0521 18:33:37.260] • [SLOW TEST:322.096 seconds]
I0521 18:33:37.261] [sig-storage] ConfigMap
I0521 18:33:37.261] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
I0521 18:33:37.261]   Should fail non-optional pod creation due to configMap object does not exist [Slow]
I0521 18:33:37.261]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:550
I0521 18:33:37.261] ------------------------------
I0521 18:33:37.261] [sig-node] RuntimeClass 
I0521 18:33:37.261]   should reject a Pod requesting a non-existent RuntimeClass
I0521 18:33:37.261]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtimeclass.go:46
I0521 18:33:37.261] [BeforeEach] [sig-node] RuntimeClass
... skipping 16 lines ...
I0521 18:33:37.263]   should reject a Pod requesting a non-existent RuntimeClass
I0521 18:33:37.263]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtimeclass.go:46
I0521 18:33:37.263] ------------------------------
I0521 18:33:37.264] SSSSSS
I0521 18:33:37.264] ------------------------------
I0521 18:33:37.264] [sig-storage] Projected configMap 
I0521 18:33:37.264]   Should fail non-optional pod creation due to configMap object does not exist [Slow]
I0521 18:33:37.264]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:491
I0521 18:33:37.264] [BeforeEach] [sig-storage] Projected configMap
I0521 18:33:37.264]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149
I0521 18:33:37.264] STEP: Creating a kubernetes client
I0521 18:33:37.264] STEP: Building a namespace api object, basename projected
I0521 18:33:37.264] May 21 17:33:47.479: INFO: Skipping waiting for service account
I0521 18:33:37.265] [It] Should fail non-optional pod creation due to configMap object does not exist [Slow]
I0521 18:33:37.265]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:491
I0521 18:33:37.265] May 21 17:33:47.480: INFO: Couldn't get node TTL annotation (using default value of 0): No TTL annotation found on the node
I0521 18:33:37.265] STEP: Creating the pod
I0521 18:33:37.265] [AfterEach] [sig-storage] Projected configMap
I0521 18:33:37.265]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
I0521 18:33:37.265] May 21 17:38:47.493: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0521 18:33:37.265] STEP: Destroying namespace "projected-6030" for this suite.
I0521 18:33:37.265] May 21 17:39:09.501: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
I0521 18:33:37.266] May 21 17:39:09.553: INFO: namespace projected-6030 deletion completed in 22.05810018s
I0521 18:33:37.266] 
I0521 18:33:37.266] • [SLOW TEST:322.078 seconds]
I0521 18:33:37.266] [sig-storage] Projected configMap
I0521 18:33:37.266] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
I0521 18:33:37.266]   Should fail non-optional pod creation due to configMap object does not exist [Slow]
I0521 18:33:37.266]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:491
I0521 18:33:37.266] ------------------------------
I0521 18:33:37.266] SSSSSSSS
I0521 18:33:37.266] ------------------------------
I0521 18:33:37.267] [sig-node] Node Performance Testing [Serial] [Slow] Run node performance testing with pre-defined workloads 
I0521 18:33:37.267]   TensorFlow workload
... skipping 12 lines ...
I0521 18:33:37.268] I0521 17:39:13.362639    1332 util.go:44] Running readiness check for service "kubelet"
I0521 18:33:37.268] I0521 17:39:14.364701    1332 server.go:182] Initial health check passed for service "kubelet"
I0521 18:33:37.269] I0521 17:39:14.589552    1332 util.go:221] new configuration has taken effect
I0521 18:33:37.269] [It] TensorFlow workload
I0521 18:33:37.269]   _output/local/go/src/k8s.io/kubernetes/test/e2e_node/node_perf_test.go:124
I0521 18:33:37.269] STEP: running the workload and waiting for success
I0521 18:33:37.269] May 21 17:39:16.604: INFO: Unexpected error occurred: pod ran to completion
I0521 18:33:37.269] May 21 17:39:16.612: INFO: Waiting for pod tensorflow-wide-deep-pod to disappear
I0521 18:33:37.269] May 21 17:39:16.616: INFO: Pod tensorflow-wide-deep-pod no longer exists
I0521 18:33:37.269] STEP: running the post test exec from the workload
I0521 18:33:37.270] I0521 17:39:25.379917    1332 server.go:222] Restarting server "kubelet" with restart command
I0521 18:33:37.270] I0521 17:39:25.393950    1332 server.go:171] Running health check for service "kubelet"
I0521 18:33:37.270] I0521 17:39:25.393988    1332 util.go:44] Running readiness check for service "kubelet"
... skipping 5 lines ...
I0521 18:33:37.271] STEP: Found 1 events.
I0521 18:33:37.271] May 21 17:39:26.653: INFO: At 2019-05-21 17:39:14 +0000 UTC - event for tensorflow-wide-deep-pod: {kubelet tmp-node-e2e-eceb20b6-cos-stable-63-10032-71-0} OutOfcpu: Node didn't have enough resource: cpu, requested: 15000, used: 0, capacity: 800
I0521 18:33:37.271] May 21 17:39:26.655: INFO: POD  NODE  PHASE  GRACE  CONDITIONS
I0521 18:33:37.271] May 21 17:39:26.655: INFO: 
I0521 18:33:37.271] May 21 17:39:26.658: INFO: 
I0521 18:33:37.271] Logging node info for node tmp-node-e2e-eceb20b6-cos-stable-63-10032-71-0
I0521 18:33:37.275] May 21 17:39:26.659: INFO: Node Info: &Node{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:tmp-node-e2e-eceb20b6-cos-stable-63-10032-71-0,GenerateName:,Namespace:,SelfLink:/api/v1/nodes/tmp-node-e2e-eceb20b6-cos-stable-63-10032-71-0,UID:7c8f4774-1cb8-4987-b9fc-bab73af8c4ea,ResourceVersion:1338,Generation:0,CreationTimestamp:2019-05-21 17:23:23 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{beta.kubernetes.io/arch: amd64,beta.kubernetes.io/os: linux,kubernetes.io/arch: amd64,kubernetes.io/hostname: tmp-node-e2e-eceb20b6-cos-stable-63-10032-71-0,kubernetes.io/os: linux,},Annotations:map[string]string{volumes.kubernetes.io/controller-managed-attach-detach: true,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:NodeSpec{PodCIDR:,DoNotUse_ExternalID:,ProviderID:,Unschedulable:false,Taints:[],ConfigSource:&NodeConfigSource{ConfigMap:&ConfigMapNodeConfigSource{Namespace:kube-system,Name:testcfg-gfzx2,UID:,ResourceVersion:,KubeletConfigKey:kubelet,},},},Status:NodeStatus{Capacity:ResourceList{cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{16684785664 0} {<nil>}  BinarySI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3885465600 0} {<nil>}  BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{15016307073 0} {<nil>} 15016307073 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3623321600 0} {<nil>}  BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[{MemoryPressure False 2019-05-21 17:39:26 +0000 UTC 2019-05-21 17:23:19 +0000 UTC KubeletHasSufficientMemory kubelet has sufficient memory available} {DiskPressure False 2019-05-21 17:39:26 +0000 UTC 2019-05-21 17:23:19 +0000 UTC KubeletHasNoDiskPressure kubelet has no disk pressure} {PIDPressure False 2019-05-21 17:39:26 +0000 UTC 2019-05-21 17:23:19 +0000 UTC KubeletHasSufficientPID kubelet 
has sufficient PID available} {Ready True 2019-05-21 17:39:26 +0000 UTC 2019-05-21 17:23:19 +0000 UTC KubeletReady kubelet is posting ready status. AppArmor enabled}],Addresses:[{InternalIP 10.138.0.46} {Hostname tmp-node-e2e-eceb20b6-cos-stable-63-10032-71-0}],DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:09422cd5341503b105dfdfd50323167b,SystemUUID:09422CD5-3415-03B1-05DF-DFD50323167B,BootID:c9a0319a-c53d-4040-a5b4-8e5a35e5c1bc,KernelVersion:4.4.86+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:docker://17.3.2,KubeletVersion:v1.16.0-alpha.0.288+13c11de135833a,KubeProxyVersion:v1.16.0-alpha.0.288+13c11de135833a,OperatingSystem:linux,Architecture:amd64,},Images:[{[gcr.io/kubernetes-e2e-test-images/node-perf/tf-wide-deep-amd64@sha256:80d4564d5ab49ecfea3b20f75cc676d8dfd8b2aca364ed4c1a8a55fbcaaed7f6 gcr.io/kubernetes-e2e-test-images/node-perf/tf-wide-deep-amd64:1.0] 634170972} {[gcr.io/kubernetes-e2e-test-images/volume/gluster@sha256:e2d3308b2d27499d59f120ff46dfc6c4cb307a3f207f02894ecab902583761c9 gcr.io/kubernetes-e2e-test-images/volume/gluster:1.0] 332011484} {[gcr.io/kubernetes-e2e-test-images/volume/nfs@sha256:c2ad734346f608a5f7d69cfded93c4e8094069320657bd372d12ba21dea3ea71 gcr.io/kubernetes-e2e-test-images/volume/nfs:1.0] 225358913} {[k8s.gcr.io/node-problem-detector@sha256:6e9b4a4eaa47f120be61f60573a545844de63401661812e2cfb7ae81a28efd19 k8s.gcr.io/node-problem-detector:v0.6.2] 98707739} {[gcr.io/kubernetes-e2e-test-images/node-perf/npb-is@sha256:9d08dd99565b25af37c990cd4474a4284b27e7ceb3f98328bb481edefedf8aa5 gcr.io/kubernetes-e2e-test-images/node-perf/npb-is:1.0] 96288249} {[gcr.io/kubernetes-e2e-test-images/node-perf/npb-ep@sha256:564314549347619cfcdbe6c7d042a29e133a00e922b37682890fff17ac1a7804 gcr.io/kubernetes-e2e-test-images/node-perf/npb-ep:1.0] 96286449} {[google/cadvisor@sha256:815386ebbe9a3490f38785ab11bda34ec8dacf4634af77b8912832d4f85dca04 google/cadvisor:latest] 
69583040} {[k8s.gcr.io/nvidia-gpu-device-plugin@sha256:08509a36233c5096bb273a492251a9a5ca28558ab36d74007ca2a9d3f0b61e1d] 18976858} {[nginx@sha256:a3a0c4126587884f8d3090efca87f5af075d7e7ac8308cffc09a5a082d5f4760 nginx:1.14-alpine] 16032814} {[gcr.io/kubernetes-e2e-test-images/audit-proxy@sha256:9cf10c6bb871a9a2a45eb1634ecd36cf0e45ec9bd8ae05bf10bef981ac07cc1b gcr.io/kubernetes-e2e-test-images/audit-proxy:1.0] 13222979} {[gcr.io/kubernetes-e2e-test-images/net@sha256:973f47a88f50ccd7800f6ec300e664461e7c011c2da3a33edf32a73dd9ff9c01 gcr.io/kubernetes-e2e-test-images/net:1.0] 11393460} {[gcr.io/kubernetes-e2e-test-images/ipc-utils@sha256:bb127be3a1ecac0516f672a5e223d94fe6021021534ecb7a02a607a63154c3d8 gcr.io/kubernetes-e2e-test-images/ipc-utils:1.0] 10039224} {[gcr.io/kubernetes-e2e-test-images/hostexec@sha256:90dfe59da029f9e536385037bc64e86cd3d6e55bae613ddbe69e554d79b0639d gcr.io/kubernetes-e2e-test-images/hostexec:1.1] 8490662} {[gcr.io/kubernetes-e2e-test-images/nonewprivs@sha256:10066e9039219449fe3c81f38fe01928f87914150768ab81b62a468e51fa7411 gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0] 6757579} {[gcr.io/kubernetes-e2e-test-images/netexec@sha256:203f0e11dde4baf4b08e27de094890eb3447d807c8b3e990b764b799d3a9e8b7 gcr.io/kubernetes-e2e-test-images/netexec:1.1] 6705349} {[gcr.io/kubernetes-e2e-test-images/serve-hostname@sha256:bab70473a6d8ef65a22625dc9a1b0f0452e811530fdbe77e4408523460177ff1 gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1] 5851985} {[gcr.io/kubernetes-e2e-test-images/liveness@sha256:71c3fc838e0637df570497febafa0ee73bf47176dfd43612de5c55a71230674e gcr.io/kubernetes-e2e-test-images/liveness:1.1] 5829944} {[k8s.gcr.io/stress@sha256:f00aa1ddc963a3164aef741aab0fc05074ea96de6cd7e0d10077cf98dd72d594 k8s.gcr.io/stress:v1] 5494760} {[gcr.io/kubernetes-e2e-test-images/test-webserver@sha256:7f93d6e32798ff28bc6289254d0c2867fe2c849c8e46edc50f8624734309812e gcr.io/kubernetes-e2e-test-images/test-webserver:1.0] 4732240} 
{[gcr.io/kubernetes-e2e-test-images/entrypoint-tester@sha256:ba4681b5299884a3adca70fbde40638373b437a881055ffcd0935b5f43eb15c9 gcr.io/kubernetes-e2e-test-images/entrypoint-tester:1.0] 2729534} {[gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 gcr.io/kubernetes-e2e-test-images/mounttest:1.0] 1563521} {[gcr.io/kubernetes-e2e-test-images/mounttest-user@sha256:17319ca525ee003681fccf7e8c6b1b910ff4f49b653d939ac7f9b6e7c463933d gcr.io/kubernetes-e2e-test-images/mounttest-user:1.0] 1450451} {[busybox@sha256:e004c2cc521c95383aebb1fb5893719aa7a8eae2e7a71f316a4410784edb00a9 busybox:1.29] 1154361} {[k8s.gcr.io/busybox@sha256:4bdd623e848417d96127e16037743f0cd8b528c026e9175e22a84f639eca58ff] 1113554} {[k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea k8s.gcr.io/pause:3.1] 742472}],VolumesInUse:[],VolumesAttached:[],Config:&NodeConfigStatus{Assigned:&NodeConfigSource{ConfigMap:&ConfigMapNodeConfigSource{Namespace:kube-system,Name:testcfg-gfzx2,UID:16361fe0-adc1-4c7e-8fdf-418b16dffa1c,ResourceVersion:1326,KubeletConfigKey:kubelet,},},Active:&NodeConfigSource{ConfigMap:&ConfigMapNodeConfigSource{Namespace:kube-system,Name:testcfg-gfzx2,UID:16361fe0-adc1-4c7e-8fdf-418b16dffa1c,ResourceVersion:1326,KubeletConfigKey:kubelet,},},LastKnownGood:&NodeConfigSource{ConfigMap:&ConfigMapNodeConfigSource{Namespace:kube-system,Name:testcfg-6m5nz,UID:b45ddf73-866d-47dd-a0f6-434a5fbb0d94,ResourceVersion:1089,KubeletConfigKey:kubelet,},},Error:,},},}
I0521 18:33:37.275] May 21 17:39:26.660: INFO: 
I0521 18:33:37.276] Logging kubelet events for node tmp-node-e2e-eceb20b6-cos-stable-63-10032-71-0
I0521 18:33:37.276] May 21 17:39:26.661: INFO: 
I0521 18:33:37.276] Logging pods the kubelet thinks is on node tmp-node-e2e-eceb20b6-cos-stable-63-10032-71-0
I0521 18:33:37.277] W0521 17:39:26.667666    1332 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
I0521 18:33:37.277] May 21 17:39:26.685: INFO: 
... skipping 8 lines ...
I0521 18:33:37.278] _output/local/go/src/k8s.io/kubernetes/test/e2e_node/framework.go:22
I0521 18:33:37.278]   Run node performance testing with pre-defined workloads
I0521 18:33:37.278]   _output/local/go/src/k8s.io/kubernetes/test/e2e_node/node_perf_test.go:120
I0521 18:33:37.278]     TensorFlow workload [It]
I0521 18:33:37.278]     _output/local/go/src/k8s.io/kubernetes/test/e2e_node/node_perf_test.go:124
I0521 18:33:37.278] 
I0521 18:33:37.278]     Unexpected error:
I0521 18:33:37.278]         <*errors.errorString | 0xc000377070>: {
I0521 18:33:37.278]             s: "pod ran to completion",
I0521 18:33:37.278]         }
I0521 18:33:37.279]         pod ran to completion
I0521 18:33:37.279]     occurred
I0521 18:33:37.279] 
I0521 18:33:37.279]     /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:112
I0521 18:33:37.279] ------------------------------
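Editor's note: the surfaced error "pod ran to completion" is how the e2e framework's wait-for-running condition (test/e2e/framework/pods.go) reports a pod that reached a terminal phase — here, Failed via the OutOfcpu rejection — while the test was still waiting for it to become Running. A loose sketch of that mapping (hypothetical types standing in for `v1.PodPhase`, not the framework's exact code):

```go
package main

import (
	"errors"
	"fmt"
)

// podPhase stands in for v1.PodPhase.
type podPhase string

const (
	podPending   podPhase = "Pending"
	podRunning   podPhase = "Running"
	podSucceeded podPhase = "Succeeded"
	podFailed    podPhase = "Failed"
)

// checkRunning loosely mirrors the framework's wait condition: a terminal
// phase observed while waiting for Running surfaces as the
// "pod ran to completion" error seen in this log.
func checkRunning(p podPhase) (bool, error) {
	switch p {
	case podRunning:
		return true, nil
	case podSucceeded, podFailed:
		return false, errors.New("pod ran to completion")
	default:
		return false, nil
	}
}

func main() {
	_, err := checkRunning(podFailed) // the OutOfcpu pod goes straight to Failed
	fmt.Println(err)                  // pod ran to completion
}
```

So the message is a symptom, not the root cause: the actionable signal is the OutOfcpu event in the step above it.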
I0521 18:33:37.279] SSSSSSSSSSSSSSSSS
I0521 18:33:37.279] ------------------------------
I0521 18:33:37.279] [sig-storage] Projected secret 
I0521 18:33:37.279]   Should fail non-optional pod creation due to the key in the secret object does not exist [Slow]
I0521 18:33:37.280]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:419
I0521 18:33:37.280] [BeforeEach] [sig-storage] Projected secret
I0521 18:33:37.280]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149
I0521 18:33:37.280] STEP: Creating a kubernetes client
I0521 18:33:37.280] STEP: Building a namespace api object, basename projected
I0521 18:33:37.280] May 21 17:39:32.743: INFO: Skipping waiting for service account
I0521 18:33:37.281] [It] Should fail non-optional pod creation due to the key in the secret object does not exist [Slow]
I0521 18:33:37.281]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:419
I0521 18:33:37.281] May 21 17:39:32.745: INFO: Couldn't get node TTL annotation (using default value of 0): No TTL annotation found on the node
I0521 18:33:37.281] STEP: Creating secret with name s-test-opt-create-0f116a6f-5041-4ea5-a30b-1ab7d4c2e92e
I0521 18:33:37.282] STEP: Creating the pod
I0521 18:33:37.283] [AfterEach] [sig-storage] Projected secret
I0521 18:33:37.283]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
... skipping 2 lines ...
I0521 18:33:37.283] May 21 17:44:54.777: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
I0521 18:33:37.283] May 21 17:44:54.823: INFO: namespace projected-5023 deletion completed in 22.051420879s
I0521 18:33:37.283] 
I0521 18:33:37.284] • [SLOW TEST:322.082 seconds]
I0521 18:33:37.284] [sig-storage] Projected secret
I0521 18:33:37.284] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
I0521 18:33:37.284]   Should fail non-optional pod creation due to the key in the secret object does not exist [Slow]
I0521 18:33:37.284]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:419
I0521 18:33:37.284] ------------------------------
I0521 18:33:37.284] SS
I0521 18:33:37.284] ------------------------------
I0521 18:33:37.285] [sig-storage] Projected secret 
I0521 18:33:37.285]   Should fail non-optional pod creation due to secret object does not exist [Slow]
I0521 18:33:37.285]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:409
I0521 18:33:37.285] [BeforeEach] [sig-storage] Projected secret
I0521 18:33:37.285]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149
I0521 18:33:37.285] STEP: Creating a kubernetes client
I0521 18:33:37.285] STEP: Building a namespace api object, basename projected
I0521 18:33:37.285] May 21 17:44:54.825: INFO: Skipping waiting for service account
I0521 18:33:37.285] [It] Should fail non-optional pod creation due to secret object does not exist [Slow]
I0521 18:33:37.286]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:409
I0521 18:33:37.286] May 21 17:44:54.827: INFO: Couldn't get node TTL annotation (using default value of 0): No TTL annotation found on the node
I0521 18:33:37.286] STEP: Creating the pod
I0521 18:33:37.286] [AfterEach] [sig-storage] Projected secret
I0521 18:33:37.286]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
I0521 18:33:37.286] May 21 17:49:54.848: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0521 18:33:37.286] STEP: Destroying namespace "projected-9778" for this suite.
I0521 18:33:37.286] May 21 17:50:16.855: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
I0521 18:33:37.287] May 21 17:50:16.909: INFO: namespace projected-9778 deletion completed in 22.059353472s
I0521 18:33:37.287] 
I0521 18:33:37.287] • [SLOW TEST:322.086 seconds]
I0521 18:33:37.287] [sig-storage] Projected secret
I0521 18:33:37.287] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
I0521 18:33:37.287]   Should fail non-optional pod creation due to secret object does not exist [Slow]
I0521 18:33:37.287]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:409
I0521 18:33:37.287] ------------------------------
I0521 18:33:37.287] SSSS
I0521 18:33:37.288] ------------------------------
I0521 18:33:37.288] [sig-storage] GCP Volumes GlusterFS 
I0521 18:33:37.288]   should be mountable
... skipping 147 lines ...
I0521 18:33:37.304]   when querying /resource/metrics
I0521 18:33:37.304]   _output/local/go/src/k8s.io/kubernetes/test/e2e_node/resource_metrics_test.go:45
I0521 18:33:37.304]     should report resource usage through the v1alpha1 resouce metrics api
I0521 18:33:37.305]     _output/local/go/src/k8s.io/kubernetes/test/e2e_node/resource_metrics_test.go:66
I0521 18:33:37.305] ------------------------------
I0521 18:33:37.305] [sig-storage] Projected configMap 
I0521 18:33:37.305]   Should fail non-optional pod creation due to the key in the configMap object does not exist [Slow]
I0521 18:33:37.305]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:501
I0521 18:33:37.305] [BeforeEach] [sig-storage] Projected configMap
I0521 18:33:37.305]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149
I0521 18:33:37.305] STEP: Creating a kubernetes client
I0521 18:33:37.305] STEP: Building a namespace api object, basename projected
I0521 18:33:37.306] May 21 17:53:13.173: INFO: Skipping waiting for service account
I0521 18:33:37.306] [It] Should fail non-optional pod creation due to the key in the configMap object does not exist [Slow]
I0521 18:33:37.306]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:501
I0521 18:33:37.306] May 21 17:53:13.175: INFO: Couldn't get node TTL annotation (using default value of 0): No TTL annotation found on the node
I0521 18:33:37.306] STEP: Creating configMap with name cm-test-opt-create-d0cf0161-1856-42f3-8252-076d4a24d60c
I0521 18:33:37.306] STEP: Creating the pod
I0521 18:33:37.306] [AfterEach] [sig-storage] Projected configMap
I0521 18:33:37.306]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
... skipping 2 lines ...
I0521 18:33:37.307] May 21 17:58:31.207: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
I0521 18:33:37.307] May 21 17:58:31.257: INFO: namespace projected-5853 deletion completed in 18.055962756s
I0521 18:33:37.307] 
I0521 18:33:37.307] • [SLOW TEST:318.086 seconds]
I0521 18:33:37.307] [sig-storage] Projected configMap
I0521 18:33:37.307] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
I0521 18:33:37.307]   Should fail non-optional pod creation due to the key in the configMap object does not exist [Slow]
I0521 18:33:37.308]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:501
I0521 18:33:37.308] ------------------------------
I0521 18:33:37.308] SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
I0521 18:33:37.308] ------------------------------
I0521 18:33:37.308] [sig-storage] GCP Volumes NFSv4 
I0521 18:33:37.309]   should be mountable for NFSv4
... skipping 71 lines ...
I0521 18:33:37.316] [JustBeforeEach] [sig-node] Node Performance Testing [Serial] [Slow]
I0521 18:33:37.316]   _output/local/go/src/k8s.io/kubernetes/test/e2e_node/node_perf_test.go:65
I0521 18:33:37.316] I0521 17:58:43.412412    1332 util.go:221] new configuration has taken effect
I0521 18:33:37.317] [It] NAS parallel benchmark (NPB) suite - Integer Sort (IS) workload
I0521 18:33:37.317]   _output/local/go/src/k8s.io/kubernetes/test/e2e_node/node_perf_test.go:106
I0521 18:33:37.317] STEP: running the workload and waiting for success
I0521 18:33:37.317] May 21 17:58:45.432: INFO: Unexpected error occurred: pod ran to completion
I0521 18:33:37.317] May 21 17:58:45.441: INFO: Waiting for pod npb-is-pod to disappear
I0521 18:33:37.317] May 21 17:58:45.445: INFO: Pod npb-is-pod no longer exists
I0521 18:33:37.317] STEP: running the post test exec from the workload
I0521 18:33:37.317] I0521 17:58:45.459719    1332 util.go:221] new configuration has taken effect
I0521 18:33:37.317] [AfterEach] [sig-node] Node Performance Testing [Serial] [Slow]
I0521 18:33:37.318]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
I0521 18:33:37.318] STEP: Collecting events from namespace "node-performance-testing-4210".
I0521 18:33:37.318] STEP: Found 1 events.
I0521 18:33:37.318] May 21 17:58:45.463: INFO: At 2019-05-21 17:58:43 +0000 UTC - event for npb-is-pod: {kubelet tmp-node-e2e-eceb20b6-cos-stable-63-10032-71-0} OutOfcpu: Node didn't have enough resource: cpu, requested: 16000, used: 0, capacity: 1000
I0521 18:33:37.318] May 21 17:58:45.464: INFO: POD  NODE  PHASE  GRACE  CONDITIONS
I0521 18:33:37.318] May 21 17:58:45.464: INFO: 
I0521 18:33:37.318] May 21 17:58:45.468: INFO: 
I0521 18:33:37.318] Logging node info for node tmp-node-e2e-eceb20b6-cos-stable-63-10032-71-0
I0521 18:33:37.322] May 21 17:58:45.469: INFO: Node Info: &Node{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:tmp-node-e2e-eceb20b6-cos-stable-63-10032-71-0,GenerateName:,Namespace:,SelfLink:/api/v1/nodes/tmp-node-e2e-eceb20b6-cos-stable-63-10032-71-0,UID:7c8f4774-1cb8-4987-b9fc-bab73af8c4ea,ResourceVersion:1725,Generation:0,CreationTimestamp:2019-05-21 17:23:23 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{beta.kubernetes.io/arch: amd64,beta.kubernetes.io/os: linux,kubernetes.io/arch: amd64,kubernetes.io/hostname: tmp-node-e2e-eceb20b6-cos-stable-63-10032-71-0,kubernetes.io/os: linux,},Annotations:map[string]string{volumes.kubernetes.io/controller-managed-attach-detach: true,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:NodeSpec{PodCIDR:,DoNotUse_ExternalID:,ProviderID:,Unschedulable:false,Taints:[],ConfigSource:&NodeConfigSource{ConfigMap:&ConfigMapNodeConfigSource{Namespace:kube-system,Name:testcfg-72dq2,UID:,ResourceVersion:,KubeletConfigKey:kubelet,},},},Status:NodeStatus{Capacity:ResourceList{cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{16684785664 0} {<nil>}  BinarySI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3885465600 0} {<nil>}  BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{15016307073 0} {<nil>} 15016307073 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3623321600 0} {<nil>}  BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[{MemoryPressure False 2019-05-21 17:58:29 +0000 UTC 2019-05-21 17:23:19 +0000 UTC KubeletHasSufficientMemory kubelet has sufficient memory available} {DiskPressure False 2019-05-21 17:58:29 +0000 UTC 2019-05-21 17:23:19 +0000 UTC KubeletHasNoDiskPressure kubelet has no disk pressure} {PIDPressure False 2019-05-21 17:58:29 +0000 UTC 2019-05-21 17:23:19 +0000 UTC KubeletHasSufficientPID kubelet 
has sufficient PID available} {Ready True 2019-05-21 17:58:29 +0000 UTC 2019-05-21 17:23:19 +0000 UTC KubeletReady kubelet is posting ready status. AppArmor enabled}],Addresses:[{InternalIP 10.138.0.46} {Hostname tmp-node-e2e-eceb20b6-cos-stable-63-10032-71-0}],DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:09422cd5341503b105dfdfd50323167b,SystemUUID:09422CD5-3415-03B1-05DF-DFD50323167B,BootID:c9a0319a-c53d-4040-a5b4-8e5a35e5c1bc,KernelVersion:4.4.86+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:docker://17.3.2,KubeletVersion:v1.16.0-alpha.0.288+13c11de135833a,KubeProxyVersion:v1.16.0-alpha.0.288+13c11de135833a,OperatingSystem:linux,Architecture:amd64,},Images:[{[gcr.io/kubernetes-e2e-test-images/node-perf/tf-wide-deep-amd64@sha256:80d4564d5ab49ecfea3b20f75cc676d8dfd8b2aca364ed4c1a8a55fbcaaed7f6 gcr.io/kubernetes-e2e-test-images/node-perf/tf-wide-deep-amd64:1.0] 634170972} {[gcr.io/kubernetes-e2e-test-images/volume/gluster@sha256:e2d3308b2d27499d59f120ff46dfc6c4cb307a3f207f02894ecab902583761c9 gcr.io/kubernetes-e2e-test-images/volume/gluster:1.0] 332011484} {[gcr.io/kubernetes-e2e-test-images/volume/nfs@sha256:c2ad734346f608a5f7d69cfded93c4e8094069320657bd372d12ba21dea3ea71 gcr.io/kubernetes-e2e-test-images/volume/nfs:1.0] 225358913} {[k8s.gcr.io/node-problem-detector@sha256:6e9b4a4eaa47f120be61f60573a545844de63401661812e2cfb7ae81a28efd19 k8s.gcr.io/node-problem-detector:v0.6.2] 98707739} {[gcr.io/kubernetes-e2e-test-images/node-perf/npb-is@sha256:9d08dd99565b25af37c990cd4474a4284b27e7ceb3f98328bb481edefedf8aa5 gcr.io/kubernetes-e2e-test-images/node-perf/npb-is:1.0] 96288249} {[gcr.io/kubernetes-e2e-test-images/node-perf/npb-ep@sha256:564314549347619cfcdbe6c7d042a29e133a00e922b37682890fff17ac1a7804 gcr.io/kubernetes-e2e-test-images/node-perf/npb-ep:1.0] 96286449} {[google/cadvisor@sha256:815386ebbe9a3490f38785ab11bda34ec8dacf4634af77b8912832d4f85dca04 google/cadvisor:latest] 
69583040} {[k8s.gcr.io/nvidia-gpu-device-plugin@sha256:08509a36233c5096bb273a492251a9a5ca28558ab36d74007ca2a9d3f0b61e1d] 18976858} {[nginx@sha256:a3a0c4126587884f8d3090efca87f5af075d7e7ac8308cffc09a5a082d5f4760 nginx:1.14-alpine] 16032814} {[gcr.io/kubernetes-e2e-test-images/audit-proxy@sha256:9cf10c6bb871a9a2a45eb1634ecd36cf0e45ec9bd8ae05bf10bef981ac07cc1b gcr.io/kubernetes-e2e-test-images/audit-proxy:1.0] 13222979} {[gcr.io/kubernetes-e2e-test-images/net@sha256:973f47a88f50ccd7800f6ec300e664461e7c011c2da3a33edf32a73dd9ff9c01 gcr.io/kubernetes-e2e-test-images/net:1.0] 11393460} {[gcr.io/kubernetes-e2e-test-images/ipc-utils@sha256:bb127be3a1ecac0516f672a5e223d94fe6021021534ecb7a02a607a63154c3d8 gcr.io/kubernetes-e2e-test-images/ipc-utils:1.0] 10039224} {[gcr.io/kubernetes-e2e-test-images/hostexec@sha256:90dfe59da029f9e536385037bc64e86cd3d6e55bae613ddbe69e554d79b0639d gcr.io/kubernetes-e2e-test-images/hostexec:1.1] 8490662} {[gcr.io/kubernetes-e2e-test-images/nonewprivs@sha256:10066e9039219449fe3c81f38fe01928f87914150768ab81b62a468e51fa7411 gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0] 6757579} {[gcr.io/kubernetes-e2e-test-images/netexec@sha256:203f0e11dde4baf4b08e27de094890eb3447d807c8b3e990b764b799d3a9e8b7 gcr.io/kubernetes-e2e-test-images/netexec:1.1] 6705349} {[gcr.io/kubernetes-e2e-test-images/serve-hostname@sha256:bab70473a6d8ef65a22625dc9a1b0f0452e811530fdbe77e4408523460177ff1 gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1] 5851985} {[gcr.io/kubernetes-e2e-test-images/liveness@sha256:71c3fc838e0637df570497febafa0ee73bf47176dfd43612de5c55a71230674e gcr.io/kubernetes-e2e-test-images/liveness:1.1] 5829944} {[k8s.gcr.io/stress@sha256:f00aa1ddc963a3164aef741aab0fc05074ea96de6cd7e0d10077cf98dd72d594 k8s.gcr.io/stress:v1] 5494760} {[gcr.io/kubernetes-e2e-test-images/test-webserver@sha256:7f93d6e32798ff28bc6289254d0c2867fe2c849c8e46edc50f8624734309812e gcr.io/kubernetes-e2e-test-images/test-webserver:1.0] 4732240} 
{[gcr.io/kubernetes-e2e-test-images/entrypoint-tester@sha256:ba4681b5299884a3adca70fbde40638373b437a881055ffcd0935b5f43eb15c9 gcr.io/kubernetes-e2e-test-images/entrypoint-tester:1.0] 2729534} {[gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 gcr.io/kubernetes-e2e-test-images/mounttest:1.0] 1563521} {[gcr.io/kubernetes-e2e-test-images/mounttest-user@sha256:17319ca525ee003681fccf7e8c6b1b910ff4f49b653d939ac7f9b6e7c463933d gcr.io/kubernetes-e2e-test-images/mounttest-user:1.0] 1450451} {[busybox@sha256:e004c2cc521c95383aebb1fb5893719aa7a8eae2e7a71f316a4410784edb00a9 busybox:1.29] 1154361} {[k8s.gcr.io/busybox@sha256:4bdd623e848417d96127e16037743f0cd8b528c026e9175e22a84f639eca58ff] 1113554} {[k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea k8s.gcr.io/pause:3.1] 742472}],VolumesInUse:[],VolumesAttached:[],Config:&NodeConfigStatus{Assigned:&NodeConfigSource{ConfigMap:&ConfigMapNodeConfigSource{Namespace:kube-system,Name:testcfg-gfzx2,UID:16361fe0-adc1-4c7e-8fdf-418b16dffa1c,ResourceVersion:1326,KubeletConfigKey:kubelet,},},Active:&NodeConfigSource{ConfigMap:&ConfigMapNodeConfigSource{Namespace:kube-system,Name:testcfg-gfzx2,UID:16361fe0-adc1-4c7e-8fdf-418b16dffa1c,ResourceVersion:1326,KubeletConfigKey:kubelet,},},LastKnownGood:&NodeConfigSource{ConfigMap:&ConfigMapNodeConfigSource{Namespace:kube-system,Name:testcfg-gfzx2,UID:16361fe0-adc1-4c7e-8fdf-418b16dffa1c,ResourceVersion:1326,KubeletConfigKey:kubelet,},},Error:,},},}
I0521 18:33:37.323] May 21 17:58:45.470: INFO: 
I0521 18:33:37.323] Logging kubelet events for node tmp-node-e2e-eceb20b6-cos-stable-63-10032-71-0
I0521 18:33:37.323] May 21 17:58:45.471: INFO: 
I0521 18:33:37.323] Logging pods the kubelet thinks is on node tmp-node-e2e-eceb20b6-cos-stable-63-10032-71-0
I0521 18:33:37.323] W0521 17:58:45.475476    1332 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
I0521 18:33:37.323] May 21 17:58:45.523: INFO: 
... skipping 8 lines ...
I0521 18:33:37.324] _output/local/go/src/k8s.io/kubernetes/test/e2e_node/framework.go:22
I0521 18:33:37.324]   Run node performance testing with pre-defined workloads
I0521 18:33:37.324]   _output/local/go/src/k8s.io/kubernetes/test/e2e_node/node_perf_test.go:102
I0521 18:33:37.325]     NAS parallel benchmark (NPB) suite - Integer Sort (IS) workload [It]
I0521 18:33:37.325]     _output/local/go/src/k8s.io/kubernetes/test/e2e_node/node_perf_test.go:106
I0521 18:33:37.325] 
I0521 18:33:37.325]     Unexpected error:
I0521 18:33:37.325]         <*errors.errorString | 0xc000377070>: {
I0521 18:33:37.325]             s: "pod ran to completion",
I0521 18:33:37.325]         }
I0521 18:33:37.325]         pod ran to completion
I0521 18:33:37.325]     occurred
I0521 18:33:37.325] 
... skipping 28 lines ...
I0521 18:33:37.328]   should reject a Pod requesting a RuntimeClass with an unconfigured handler
I0521 18:33:37.328]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtimeclass.go:52
I0521 18:33:37.329] ------------------------------
I0521 18:33:37.329] SSSSS
I0521 18:33:37.329] ------------------------------
I0521 18:33:37.329] [sig-storage] ConfigMap 
I0521 18:33:37.329]   Should fail non-optional pod creation due to the key in the configMap object does not exist [Slow]
I0521 18:33:37.329]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:560
I0521 18:33:37.329] [BeforeEach] [sig-storage] ConfigMap
I0521 18:33:37.329]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149
I0521 18:33:37.329] STEP: Creating a kubernetes client
I0521 18:33:37.329] STEP: Building a namespace api object, basename configmap
I0521 18:33:37.330] May 21 17:59:15.672: INFO: Skipping waiting for service account
I0521 18:33:37.330] [It] Should fail non-optional pod creation due to the key in the configMap object does not exist [Slow]
I0521 18:33:37.330]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:560
I0521 18:33:37.330] May 21 17:59:15.674: INFO: Couldn't get node TTL annotation (using default value of 0): No TTL annotation found on the node
I0521 18:33:37.330] STEP: Creating configMap with name cm-test-opt-create-5a76ed66-011a-44f4-8ada-4cd89738996d
I0521 18:33:37.330] STEP: Creating the pod
I0521 18:33:37.330] [AfterEach] [sig-storage] ConfigMap
I0521 18:33:37.330]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
... skipping 2 lines ...
I0521 18:33:37.331] May 21 18:04:37.695: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
I0521 18:33:37.331] May 21 18:04:37.742: INFO: namespace configmap-1840 deletion completed in 22.052398632s
I0521 18:33:37.331] 
I0521 18:33:37.331] • [SLOW TEST:322.073 seconds]
I0521 18:33:37.331] [sig-storage] ConfigMap
I0521 18:33:37.331] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
I0521 18:33:37.331]   Should fail non-optional pod creation due to the key in the configMap object does not exist [Slow]
I0521 18:33:37.331]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:560
I0521 18:33:37.331] ------------------------------
I0521 18:33:37.332] SSSSS
I0521 18:33:37.332] ------------------------------
I0521 18:33:37.332] [k8s.io] NodeLease when the NodeLease feature is enabled 
I0521 18:33:37.332]   the kubelet should report node status infrequently
... skipping 44 lines ...
I0521 18:33:37.337]     the kubelet should report node status infrequently
I0521 18:33:37.337]     /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/node_lease.go:87
I0521 18:33:37.337] ------------------------------
I0521 18:33:37.337] SSSSSS
I0521 18:33:37.337] ------------------------------
I0521 18:33:37.337] [sig-node] ConfigMap 
I0521 18:33:37.337]   should fail to create ConfigMap with empty key [Conformance]
I0521 18:33:37.337]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:696
I0521 18:33:37.338] [BeforeEach] [sig-node] ConfigMap
I0521 18:33:37.338]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149
I0521 18:33:37.338] STEP: Creating a kubernetes client
I0521 18:33:37.338] STEP: Building a namespace api object, basename configmap
I0521 18:33:37.338] May 21 18:05:00.812: INFO: Skipping waiting for service account
I0521 18:33:37.338] [It] should fail to create ConfigMap with empty key [Conformance]
I0521 18:33:37.338]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:696
I0521 18:33:37.338] STEP: Creating configMap that has name configmap-test-emptyKey-b3df5d36-ffe5-4ec5-b39d-e0888a7634a2
I0521 18:33:37.338] [AfterEach] [sig-node] ConfigMap
I0521 18:33:37.338]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
I0521 18:33:37.339] May 21 18:05:00.864: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0521 18:33:37.339] STEP: Destroying namespace "configmap-5634" for this suite.
I0521 18:33:37.339] May 21 18:05:06.871: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
I0521 18:33:37.339] May 21 18:05:06.921: INFO: namespace configmap-5634 deletion completed in 6.05506539s
I0521 18:33:37.339] 
I0521 18:33:37.339] • [SLOW TEST:6.111 seconds]
I0521 18:33:37.339] [sig-node] ConfigMap
I0521 18:33:37.339] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:32
I0521 18:33:37.339]   should fail to create ConfigMap with empty key [Conformance]
I0521 18:33:37.339]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:696
I0521 18:33:37.340] ------------------------------
I0521 18:33:37.340] SSSSSSSSSSSSS
I0521 18:33:37.340] ------------------------------
I0521 18:33:37.340] [k8s.io] Density [Serial] [Slow] create a batch of pods 
I0521 18:33:37.340]   latency/resource should be within limit when create 10 pods with 0s interval
... skipping 92 lines ...
I0521 18:33:37.350]     latency/resource should be within limit when create 10 pods with 0s interval
I0521 18:33:37.350]     _output/local/go/src/k8s.io/kubernetes/test/e2e_node/density_test.go:100
I0521 18:33:37.351] ------------------------------
I0521 18:33:37.351] SSSSSSSSS
I0521 18:33:37.351] ------------------------------
I0521 18:33:37.351] [sig-storage] Secrets 
I0521 18:33:37.351]   Should fail non-optional pod creation due to the key in the secret object does not exist [Slow]
I0521 18:33:37.351]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:384
I0521 18:33:37.351] [BeforeEach] [sig-storage] Secrets
I0521 18:33:37.351]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149
I0521 18:33:37.351] STEP: Creating a kubernetes client
I0521 18:33:37.351] STEP: Building a namespace api object, basename secrets
I0521 18:33:37.352] May 21 18:06:56.287: INFO: Skipping waiting for service account
I0521 18:33:37.352] [It] Should fail non-optional pod creation due to the key in the secret object does not exist [Slow]
I0521 18:33:37.352]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:384
I0521 18:33:37.352] May 21 18:06:56.288: INFO: Couldn't get node TTL annotation (using default value of 0): No TTL annotation found on the node
I0521 18:33:37.352] STEP: Creating secret with name s-test-opt-create-213890a4-ca7e-4a50-8a10-f8918d8d0972
I0521 18:33:37.352] STEP: Creating the pod
I0521 18:33:37.352] [AfterEach] [sig-storage] Secrets
I0521 18:33:37.352]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
... skipping 2 lines ...
I0521 18:33:37.353] May 21 18:12:18.318: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
I0521 18:33:37.353] May 21 18:12:18.369: INFO: namespace secrets-6470 deletion completed in 22.055893886s
I0521 18:33:37.353] 
I0521 18:33:37.353] • [SLOW TEST:322.085 seconds]
I0521 18:33:37.353] [sig-storage] Secrets
I0521 18:33:37.353] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
I0521 18:33:37.353]   Should fail non-optional pod creation due to the key in the secret object does not exist [Slow]
I0521 18:33:37.353]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:384
I0521 18:33:37.353] ------------------------------
I0521 18:33:37.354] SSSSSSSSSSSS
I0521 18:33:37.354] ------------------------------
I0521 18:33:37.354] [sig-storage] Secrets 
I0521 18:33:37.354]   Should fail non-optional pod creation due to secret object does not exist [Slow]
I0521 18:33:37.354]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:374
I0521 18:33:37.354] [BeforeEach] [sig-storage] Secrets
I0521 18:33:37.354]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149
I0521 18:33:37.354] STEP: Creating a kubernetes client
I0521 18:33:37.354] STEP: Building a namespace api object, basename secrets
I0521 18:33:37.355] May 21 18:12:18.372: INFO: Skipping waiting for service account
I0521 18:33:37.355] [It] Should fail non-optional pod creation due to secret object does not exist [Slow]
I0521 18:33:37.355]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:374
I0521 18:33:37.355] May 21 18:12:18.373: INFO: Couldn't get node TTL annotation (using default value of 0): No TTL annotation found on the node
I0521 18:33:37.355] STEP: Creating the pod
I0521 18:33:37.355] [AfterEach] [sig-storage] Secrets
I0521 18:33:37.355]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
I0521 18:33:37.355] May 21 18:17:18.397: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0521 18:33:37.355] STEP: Destroying namespace "secrets-1489" for this suite.
I0521 18:33:37.356] May 21 18:17:40.403: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
I0521 18:33:37.356] May 21 18:17:40.452: INFO: namespace secrets-1489 deletion completed in 22.053974192s
I0521 18:33:37.356] 
I0521 18:33:37.356] • [SLOW TEST:322.083 seconds]
I0521 18:33:37.356] [sig-storage] Secrets
I0521 18:33:37.356] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
I0521 18:33:37.356]   Should fail non-optional pod creation due to secret object does not exist [Slow]
I0521 18:33:37.356]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:374
I0521 18:33:37.356] ------------------------------
I0521 18:33:37.356] [sig-node] Resource-usage [Serial] [Slow] regular resource usage tracking 
I0521 18:33:37.357]   resource tracking for 10 pods per node
I0521 18:33:37.357]   _output/local/go/src/k8s.io/kubernetes/test/e2e_node/resource_usage_test.go:84
I0521 18:33:37.357] [BeforeEach] [sig-node] Resource-usage [Serial] [Slow]
... skipping 82 lines ...
I0521 18:33:37.366] STEP: Destroying namespace "resource-usage-7075" for this suite.
I0521 18:33:37.366] May 21 18:28:37.811: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
I0521 18:33:37.366] May 21 18:28:37.864: INFO: namespace resource-usage-7075 deletion completed in 6.057886023s
I0521 18:33:37.366] [AfterEach] [sig-node] Resource-usage [Serial] [Slow]
I0521 18:33:37.366]   _output/local/go/src/k8s.io/kubernetes/test/e2e_node/resource_usage_test.go:58
I0521 18:33:37.366] W0521 18:28:37.866005    1332 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
I0521 18:33:37.366] May 21 18:28:37.902: INFO: runtime operation error metrics:
I0521 18:33:37.367] node "tmp-node-e2e-eceb20b6-cos-stable-63-10032-71-0" runtime operation error rate:
I0521 18:33:37.367] operation "list_images": total - 91; error rate - 0.000000; timeout rate - 0.000000
I0521 18:33:37.367] operation "version": total - 198; error rate - 0.000000; timeout rate - 0.000000
I0521 18:33:37.367] operation "inspect_container": total - 195; error rate - 0.005128; timeout rate - 0.000000
I0521 18:33:37.367] operation "start_container": total - 22; error rate - 0.000000; timeout rate - 0.000000
I0521 18:33:37.367] operation "info": total - 0; error rate - NaN; timeout rate - NaN
I0521 18:33:37.367] operation "inspect_image": total - 72; error rate - 0.152778; timeout rate - 0.000000
I0521 18:33:37.367] operation "stop_container": total - 42; error rate - 0.000000; timeout rate - 0.000000
I0521 18:33:37.367] operation "create_container": total - 22; error rate - 0.000000; timeout rate - 0.000000
I0521 18:33:37.368] operation "list_containers": total - 2497; error rate - 0.000000; timeout rate - 0.000000
I0521 18:33:37.368] operation "remove_container": total - 11; error rate - 0.000000; timeout rate - 0.000000
I0521 18:33:37.368] 
I0521 18:33:37.368] 
I0521 18:33:37.368] 
I0521 18:33:37.368] • [SLOW TEST:657.450 seconds]
I0521 18:33:37.368] [sig-node] Resource-usage [Serial] [Slow]
I0521 18:33:37.368] _output/local/go/src/k8s.io/kubernetes/test/e2e_node/framework.go:22
... skipping 2 lines ...
I0521 18:33:37.368]     resource tracking for 10 pods per node
I0521 18:33:37.368]     _output/local/go/src/k8s.io/kubernetes/test/e2e_node/resource_usage_test.go:84
I0521 18:33:37.369] ------------------------------
I0521 18:33:37.369] SSSSSSSSSSSSSSSSSSSSSS
I0521 18:33:37.369] ------------------------------
I0521 18:33:37.369] [sig-api-machinery] Secrets 
I0521 18:33:37.369]   should fail to create secret due to empty secret key [Conformance]
I0521 18:33:37.369]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:696
I0521 18:33:37.369] [BeforeEach] [sig-api-machinery] Secrets
I0521 18:33:37.369]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149
I0521 18:33:37.369] STEP: Creating a kubernetes client
I0521 18:33:37.370] STEP: Building a namespace api object, basename secrets
I0521 18:33:37.370] May 21 18:28:37.907: INFO: Skipping waiting for service account
I0521 18:33:37.370] [It] should fail to create secret due to empty secret key [Conformance]
I0521 18:33:37.370]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:696
I0521 18:33:37.370] STEP: Creating projection with secret that has name secret-emptykey-test-3b45933d-a791-4ec0-a697-a8e55ddc48ff
I0521 18:33:37.370] [AfterEach] [sig-api-machinery] Secrets
I0521 18:33:37.370]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
I0521 18:33:37.370] May 21 18:28:37.908: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0521 18:33:37.370] STEP: Destroying namespace "secrets-5332" for this suite.
I0521 18:33:37.371] May 21 18:28:43.915: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
I0521 18:33:37.371] May 21 18:28:43.964: INFO: namespace secrets-5332 deletion completed in 6.054695449s
I0521 18:33:37.371] 
I0521 18:33:37.371] • [SLOW TEST:6.061 seconds]
I0521 18:33:37.371] [sig-api-machinery] Secrets
I0521 18:33:37.371] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:32
I0521 18:33:37.371]   should fail to create secret due to empty secret key [Conformance]
I0521 18:33:37.371]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:696
I0521 18:33:37.371] ------------------------------
I0521 18:33:37.371] SSSS
I0521 18:33:37.371] ------------------------------
I0521 18:33:37.372] [k8s.io] Probing container 
I0521 18:33:37.372]   should be restarted with a local redirect http liveness probe
... skipping 92 lines ...
I0521 18:33:37.382]   should *not* be restarted with a non-local redirect http liveness probe
I0521 18:33:37.382]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:246
I0521 18:33:37.382] ------------------------------
I0521 18:33:37.382] SSSSI0521 18:33:24.646141    1332 e2e_node_suite_test.go:186] Stopping node services...
I0521 18:33:37.382] I0521 18:33:24.646167    1332 server.go:257] Kill server "services"
I0521 18:33:37.383] I0521 18:33:24.646177    1332 server.go:294] Killing process 1770 (services) with -TERM
I0521 18:33:37.383] E0521 18:33:24.739746    1332 services.go:89] Failed to stop services: error stopping "services": waitid: no child processes
I0521 18:33:37.383] I0521 18:33:24.739780    1332 server.go:257] Kill server "kubelet"
I0521 18:33:37.383] I0521 18:33:24.750727    1332 services.go:148] Fetching log files...
I0521 18:33:37.383] I0521 18:33:24.750818    1332 services.go:157] Get log file "kern.log" with journalctl command [-k].
I0521 18:33:37.383] I0521 18:33:24.848538    1332 services.go:157] Get log file "cloud-init.log" with journalctl command [-u cloud*].
I0521 18:33:37.383] I0521 18:33:25.485134    1332 services.go:157] Get log file "docker.log" with journalctl command [-u docker].
I0521 18:33:37.383] I0521 18:33:25.500967    1332 services.go:157] Get log file "kubelet.log" with journalctl command [-u kubelet-20190521T172155.service].
I0521 18:33:37.384] I0521 18:33:27.639126    1332 e2e_node_suite_test.go:191] Tests Finished
I0521 18:33:37.384] 
I0521 18:33:37.384] 
I0521 18:33:37.384] Summarizing 4 Failures:
I0521 18:33:37.384] 
I0521 18:33:37.384] [Fail] [sig-storage] EmptyDir volumes [It] pod should support shared volumes between containers [Conformance] 
I0521 18:33:37.384] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2218
I0521 18:33:37.384] 
I0521 18:33:37.384] [Fail] [sig-node] Node Performance Testing [Serial] [Slow] Run node performance testing with pre-defined workloads [It] NAS parallel benchmark (NPB) suite - Embarrassingly Parallel (EP) workload 
I0521 18:33:37.384] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:112
I0521 18:33:37.385] 
I0521 18:33:37.385] [Fail] [sig-node] Node Performance Testing [Serial] [Slow] Run node performance testing with pre-defined workloads [It] TensorFlow workload 
I0521 18:33:37.385] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:112
I0521 18:33:37.385] 
I0521 18:33:37.385] [Fail] [sig-node] Node Performance Testing [Serial] [Slow] Run node performance testing with pre-defined workloads [It] NAS parallel benchmark (NPB) suite - Integer Sort (IS) workload 
I0521 18:33:37.385] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:112
I0521 18:33:37.385] 
I0521 18:33:37.385] Ran 25 of 303 Specs in 4274.599 seconds
I0521 18:33:37.386] FAIL! -- 21 Passed | 4 Failed | 0 Pending | 278 Skipped
I0521 18:33:37.386] --- FAIL: TestE2eNode (4274.63s)
I0521 18:33:37.386] FAIL
I0521 18:33:37.386] 
I0521 18:33:37.386] Ginkgo ran 1 suite in 1h11m17.243681578s
I0521 18:33:37.386] Test Suite Failed
I0521 18:33:37.386] 
I0521 18:33:37.386] Failure Finished Test Suite on Host tmp-node-e2e-eceb20b6-cos-stable-63-10032-71-0
I0521 18:33:37.387] command [ssh -o UserKnownHostsFile=/dev/null -o IdentitiesOnly=yes -o CheckHostIP=no -o StrictHostKeyChecking=no -o ServerAliveInterval=30 -o LogLevel=ERROR -i /workspace/.ssh/google_compute_engine prow@35.233.190.13 -- sudo sh -c 'cd /tmp/node-e2e-20190521T172155 && timeout -k 30s 18000.000000s ./ginkgo --nodes=1 --skip="\[Flaky\]|\[NodeConformance\]|\[NodeFeature:.+\]|\[NodeSpecialFeature:.+\]|\[NodeAlphaFeature:.+\]|\[Legacy:.+\]|\[Benchmark\]" ./e2e_node.test -- --system-spec-name= --system-spec-file= --extra-envs= --logtostderr --v 4 --node-name=tmp-node-e2e-eceb20b6-cos-stable-63-10032-71-0 --report-dir=/tmp/node-e2e-20190521T172155/results --report-prefix=cos-stable1 --image-description="cos-stable-63-10032-71-0" --kubelet-flags=--experimental-mounter-path=/tmp/node-e2e-20190521T172155/mounter --kubelet-flags=--experimental-kernel-memcg-notification=true --kubelet-flags="--cgroups-per-qos=true --cgroup-root=/"'] failed with error: exit status 1
I0521 18:33:37.387] <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<
I0521 18:33:37.387] <                              FINISH TEST                               <
I0521 18:33:37.387] <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<
I0521 18:33:37.387] 
W0521 18:34:53.263] I0521 18:34:53.262693    4243 remote.go:197] Test failed unexpectedly. Attempting to retrieving system logs (only works for nodes with journald)
W0521 18:34:54.734] I0521 18:34:54.734032    4243 remote.go:202] Got the system logs from journald; copying it back...
W0521 18:34:56.119] I0521 18:34:56.119530    4243 remote.go:122] Copying test artifacts from "tmp-node-e2e-eceb20b6-coreos-beta-1883-1-0-v20180911"
W0521 18:35:01.652] I0521 18:35:01.652565    4243 run_remote.go:718] Deleting instance "tmp-node-e2e-eceb20b6-coreos-beta-1883-1-0-v20180911"
I0521 18:35:02.057] 
I0521 18:35:02.057] >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
I0521 18:35:02.058] >                              START TEST                                >
... skipping 98 lines ...
I0521 18:35:02.078] 
I0521 18:35:02.078]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:214
I0521 18:35:02.078] ------------------------------
I0521 18:35:02.078] SSSSSSSSSSSSS
I0521 18:35:02.078] ------------------------------
I0521 18:35:02.079] [sig-api-machinery] Secrets 
I0521 18:35:02.079]   should fail to create secret due to empty secret key [Conformance]
I0521 18:35:02.079]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:696
I0521 18:35:02.079] [BeforeEach] [sig-api-machinery] Secrets
I0521 18:35:02.079]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149
I0521 18:35:02.080] STEP: Creating a kubernetes client
I0521 18:35:02.080] STEP: Building a namespace api object, basename secrets
I0521 18:35:02.080] May 21 17:23:35.777: INFO: Skipping waiting for service account
I0521 18:35:02.080] [It] should fail to create secret due to empty secret key [Conformance]
I0521 18:35:02.080]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:696
I0521 18:35:02.081] STEP: Creating projection with secret that has name secret-emptykey-test-c4571b97-4d4a-4db4-b2f9-8cc3d6203249
I0521 18:35:02.081] [AfterEach] [sig-api-machinery] Secrets
I0521 18:35:02.081]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
I0521 18:35:02.081] May 21 17:23:35.778: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0521 18:35:02.081] STEP: Destroying namespace "secrets-8429" for this suite.
I0521 18:35:02.082] May 21 17:23:41.785: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
I0521 18:35:02.082] May 21 17:23:41.825: INFO: namespace secrets-8429 deletion completed in 6.045165195s
I0521 18:35:02.082] 
I0521 18:35:02.082] • [SLOW TEST:6.052 seconds]
I0521 18:35:02.082] [sig-api-machinery] Secrets
I0521 18:35:02.083] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:32
I0521 18:35:02.083]   should fail to create secret due to empty secret key [Conformance]
I0521 18:35:02.083]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:696
I0521 18:35:02.083] ------------------------------
I0521 18:35:02.083] SS
I0521 18:35:02.083] ------------------------------
I0521 18:35:02.084] [k8s.io] Probing container 
I0521 18:35:02.084]   should be restarted with a local redirect http liveness probe
... skipping 26 lines ...
I0521 18:35:02.090]   should be restarted with a local redirect http liveness probe
I0521 18:35:02.090]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:231
I0521 18:35:02.090] ------------------------------
I0521 18:35:02.090] SSSSSSSSSS
I0521 18:35:02.090] ------------------------------
I0521 18:35:02.091] [sig-storage] ConfigMap 
I0521 18:35:02.091]   Should fail non-optional pod creation due to the key in the configMap object does not exist [Slow]
I0521 18:35:02.091]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:560
I0521 18:35:02.091] [BeforeEach] [sig-storage] ConfigMap
I0521 18:35:02.091]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149
I0521 18:35:02.092] STEP: Creating a kubernetes client
I0521 18:35:02.092] STEP: Building a namespace api object, basename configmap
I0521 18:35:02.092] May 21 17:24:16.056: INFO: Skipping waiting for service account
I0521 18:35:02.092] [It] Should fail non-optional pod creation due to the key in the configMap object does not exist [Slow]
I0521 18:35:02.092]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:560
I0521 18:35:02.093] May 21 17:24:16.057: INFO: Couldn't get node TTL annotation (using default value of 0): No TTL annotation found on the node
I0521 18:35:02.093] STEP: Creating configMap with name cm-test-opt-create-4433dde5-76be-4a30-b276-e7f92f337f5d
I0521 18:35:02.093] STEP: Creating the pod
I0521 18:35:02.093] [AfterEach] [sig-storage] ConfigMap
I0521 18:35:02.093]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
... skipping 2 lines ...
I0521 18:35:02.094] May 21 17:29:38.084: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
I0521 18:35:02.094] May 21 17:29:38.128: INFO: namespace configmap-2287 deletion completed in 22.050198488s
I0521 18:35:02.094] 
I0521 18:35:02.095] • [SLOW TEST:322.075 seconds]
I0521 18:35:02.095] [sig-storage] ConfigMap
I0521 18:35:02.095] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
I0521 18:35:02.095]   Should fail non-optional pod creation due to the key in the configMap object does not exist [Slow]
I0521 18:35:02.095]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:560
I0521 18:35:02.096] ------------------------------
I0521 18:35:02.096] SS
I0521 18:35:02.096] ------------------------------
I0521 18:35:02.096] [sig-storage] Projected secret 
I0521 18:35:02.096]   Should fail non-optional pod creation due to the key in the secret object does not exist [Slow]
I0521 18:35:02.097]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:419
I0521 18:35:02.097] [BeforeEach] [sig-storage] Projected secret
I0521 18:35:02.097]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149
I0521 18:35:02.097] STEP: Creating a kubernetes client
I0521 18:35:02.097] STEP: Building a namespace api object, basename projected
I0521 18:35:02.097] May 21 17:29:38.131: INFO: Skipping waiting for service account
I0521 18:35:02.097] [It] Should fail non-optional pod creation due to the key in the secret object does not exist [Slow]
I0521 18:35:02.097]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:419
I0521 18:35:02.098] May 21 17:29:38.132: INFO: Couldn't get node TTL annotation (using default value of 0): No TTL annotation found on the node
I0521 18:35:02.098] STEP: Creating secret with name s-test-opt-create-ef064a2f-5d1c-4d1c-a158-fd5bf8d0a8dd
I0521 18:35:02.098] STEP: Creating the pod
I0521 18:35:02.098] [AfterEach] [sig-storage] Projected secret
I0521 18:35:02.098]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
... skipping 2 lines ...
I0521 18:35:02.098] May 21 17:35:00.152: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
I0521 18:35:02.099] May 21 17:35:00.195: INFO: namespace projected-3727 deletion completed in 22.047440846s
I0521 18:35:02.099] 
I0521 18:35:02.099] • [SLOW TEST:322.067 seconds]
I0521 18:35:02.099] [sig-storage] Projected secret
I0521 18:35:02.099] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
I0521 18:35:02.099]   Should fail non-optional pod creation due to the key in the secret object does not exist [Slow]
I0521 18:35:02.100]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:419
I0521 18:35:02.100] ------------------------------
I0521 18:35:02.100] SSSSSSS
I0521 18:35:02.100] ------------------------------
I0521 18:35:02.100] [sig-storage] Projected configMap 
I0521 18:35:02.100]   Should fail non-optional pod creation due to configMap object does not exist [Slow]
I0521 18:35:02.101]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:491
I0521 18:35:02.101] [BeforeEach] [sig-storage] Projected configMap
I0521 18:35:02.101]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149
I0521 18:35:02.101] STEP: Creating a kubernetes client
I0521 18:35:02.101] STEP: Building a namespace api object, basename projected
I0521 18:35:02.101] May 21 17:35:00.198: INFO: Skipping waiting for service account
I0521 18:35:02.102] [It] Should fail non-optional pod creation due to configMap object does not exist [Slow]
I0521 18:35:02.102]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:491
I0521 18:35:02.102] May 21 17:35:00.199: INFO: Couldn't get node TTL annotation (using default value of 0): No TTL annotation found on the node
I0521 18:35:02.102] STEP: Creating the pod
I0521 18:35:02.102] [AfterEach] [sig-storage] Projected configMap
I0521 18:35:02.102]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
I0521 18:35:02.103] May 21 17:40:00.216: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0521 18:35:02.103] STEP: Destroying namespace "projected-2496" for this suite.
I0521 18:35:02.103] May 21 17:40:22.225: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
I0521 18:35:02.103] May 21 17:40:22.271: INFO: namespace projected-2496 deletion completed in 22.053115355s
I0521 18:35:02.103] 
I0521 18:35:02.104] • [SLOW TEST:322.076 seconds]
I0521 18:35:02.104] [sig-storage] Projected configMap
I0521 18:35:02.104] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
I0521 18:35:02.104]   Should fail non-optional pod creation due to configMap object does not exist [Slow]
I0521 18:35:02.104]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:491
I0521 18:35:02.104] ------------------------------
I0521 18:35:02.105] [sig-node] Resource-usage [Serial] [Slow] regular resource usage tracking 
I0521 18:35:02.105]   resource tracking for 10 pods per node
I0521 18:35:02.105]   _output/local/go/src/k8s.io/kubernetes/test/e2e_node/resource_usage_test.go:84
I0521 18:35:02.105] [BeforeEach] [sig-node] Resource-usage [Serial] [Slow]
... skipping 82 lines ...
I0521 18:35:02.119] STEP: Destroying namespace "resource-usage-514" for this suite.
I0521 18:35:02.119] May 21 17:51:21.579: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
I0521 18:35:02.119] May 21 17:51:21.626: INFO: namespace resource-usage-514 deletion completed in 6.052397381s
I0521 18:35:02.120] [AfterEach] [sig-node] Resource-usage [Serial] [Slow]
I0521 18:35:02.120]   _output/local/go/src/k8s.io/kubernetes/test/e2e_node/resource_usage_test.go:58
I0521 18:35:02.120] W0521 17:51:21.628152    1331 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
I0521 18:35:02.120] May 21 17:51:21.646: INFO: runtime operation error metrics:
I0521 18:35:02.120] node "tmp-node-e2e-eceb20b6-coreos-beta-1883-1-0-v20180911" runtime operation error rate:
I0521 18:35:02.120] operation "inspect_container": total - 211; error rate - 0.000000; timeout rate - 0.000000
I0521 18:35:02.121] operation "list_containers": total - 2519; error rate - 0.000000; timeout rate - 0.000000
I0521 18:35:02.121] operation "info": total - 0; error rate - NaN; timeout rate - NaN
I0521 18:35:02.121] operation "remove_container": total - 11; error rate - 0.000000; timeout rate - 0.000000
I0521 18:35:02.121] operation "version": total - 198; error rate - 0.000000; timeout rate - 0.000000
I0521 18:35:02.121] operation "start_container": total - 22; error rate - 0.000000; timeout rate - 0.000000
I0521 18:35:02.121] operation "stop_container": total - 50; error rate - 0.000000; timeout rate - 0.000000
I0521 18:35:02.122] operation "inspect_image": total - 75; error rate - 0.146667; timeout rate - 0.000000
I0521 18:35:02.122] operation "list_images": total - 92; error rate - 0.000000; timeout rate - 0.000000
I0521 18:35:02.122] operation "create_container": total - 22; error rate - 0.000000; timeout rate - 0.000000
I0521 18:35:02.122] 
I0521 18:35:02.122] 
I0521 18:35:02.122] 
I0521 18:35:02.122] • [SLOW TEST:659.375 seconds]
I0521 18:35:02.122] [sig-node] Resource-usage [Serial] [Slow]
I0521 18:35:02.123] _output/local/go/src/k8s.io/kubernetes/test/e2e_node/framework.go:22
... skipping 27 lines ...
I0521 18:35:02.127]   should reject a Pod requesting a non-existent RuntimeClass
I0521 18:35:02.127]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtimeclass.go:46
I0521 18:35:02.127] ------------------------------
I0521 18:35:02.128] SSSSSSSSSSSSSSSSSSSS
I0521 18:35:02.128] ------------------------------
I0521 18:35:02.128] [sig-storage] Projected configMap 
I0521 18:35:02.128]   Should fail non-optional pod creation due to the key in the configMap object does not exist [Slow]
I0521 18:35:02.128]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:501
I0521 18:35:02.128] [BeforeEach] [sig-storage] Projected configMap
I0521 18:35:02.128]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149
I0521 18:35:02.129] STEP: Creating a kubernetes client
I0521 18:35:02.129] STEP: Building a namespace api object, basename projected
I0521 18:35:02.129] May 21 17:51:45.725: INFO: Skipping waiting for service account
I0521 18:35:02.129] [It] Should fail non-optional pod creation due to the key in the configMap object does not exist [Slow]
I0521 18:35:02.129]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:501
I0521 18:35:02.129] May 21 17:51:45.727: INFO: Couldn't get node TTL annotation (using default value of 0): No TTL annotation found on the node
I0521 18:35:02.130] STEP: Creating configMap with name cm-test-opt-create-79629d24-2ca8-4ea1-89ae-b3756bdd6cb0
I0521 18:35:02.130] STEP: Creating the pod
I0521 18:35:02.130] [AfterEach] [sig-storage] Projected configMap
I0521 18:35:02.130]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
... skipping 2 lines ...
I0521 18:35:02.131] May 21 17:57:07.747: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
I0521 18:35:02.131] May 21 17:57:07.795: INFO: namespace projected-1765 deletion completed in 22.05193581s
I0521 18:35:02.131] 
I0521 18:35:02.131] • [SLOW TEST:322.073 seconds]
I0521 18:35:02.131] [sig-storage] Projected configMap
I0521 18:35:02.131] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
I0521 18:35:02.131]   Should fail non-optional pod creation due to the key in the configMap object does not exist [Slow]
I0521 18:35:02.132]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:501
I0521 18:35:02.132] ------------------------------
I0521 18:35:02.132] [sig-storage] GCP Volumes NFSv4 
I0521 18:35:02.132]   should be mountable for NFSv4
I0521 18:35:02.132]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/volumes.go:76
I0521 18:35:02.132] [BeforeEach] [sig-storage] GCP Volumes
... skipping 52 lines ...
I0521 18:35:02.141] May 21 17:57:25.931: INFO: Condition Ready of node tmp-node-e2e-eceb20b6-coreos-beta-1883-1-0-v20180911 is false, but Node is tainted by NodeController with []. Failure
I0521 18:35:02.141] May 21 17:57:26.933: INFO: Condition Ready of node tmp-node-e2e-eceb20b6-coreos-beta-1883-1-0-v20180911 is false, but Node is tainted by NodeController with []. Failure
I0521 18:35:02.141] May 21 17:57:27.936: INFO: Condition Ready of node tmp-node-e2e-eceb20b6-coreos-beta-1883-1-0-v20180911 is false, but Node is tainted by NodeController with []. Failure
I0521 18:35:02.141] [It] TensorFlow workload
I0521 18:35:02.142]   _output/local/go/src/k8s.io/kubernetes/test/e2e_node/node_perf_test.go:124
I0521 18:35:02.142] STEP: running the workload and waiting for success
I0521 18:35:02.142] May 21 17:57:30.950: INFO: Unexpected error occurred: pod ran to completion
I0521 18:35:02.142] May 21 17:57:30.958: INFO: Waiting for pod tensorflow-wide-deep-pod to disappear
I0521 18:35:02.142] May 21 17:57:30.961: INFO: Pod tensorflow-wide-deep-pod no longer exists
I0521 18:35:02.142] STEP: running the post test exec from the workload
I0521 18:35:02.142] I0521 17:57:40.940693    1331 server.go:222] Restarting server "kubelet" with restart command
I0521 18:35:02.143] I0521 17:57:40.956222    1331 server.go:171] Running health check for service "kubelet"
I0521 18:35:02.143] I0521 17:57:40.956249    1331 util.go:44] Running readiness check for service "kubelet"
... skipping 6 lines ...
I0521 18:35:02.144] STEP: Found 1 events.
I0521 18:35:02.144] May 21 17:57:45.999: INFO: At 2019-05-21 17:57:28 +0000 UTC - event for tensorflow-wide-deep-pod: {kubelet tmp-node-e2e-eceb20b6-coreos-beta-1883-1-0-v20180911} OutOfcpu: Node didn't have enough resource: cpu, requested: 15000, used: 0, capacity: 800
I0521 18:35:02.145] May 21 17:57:46.000: INFO: POD  NODE  PHASE  GRACE  CONDITIONS
I0521 18:35:02.145] May 21 17:57:46.000: INFO: 
I0521 18:35:02.145] May 21 17:57:46.003: INFO: 
I0521 18:35:02.145] Logging node info for node tmp-node-e2e-eceb20b6-coreos-beta-1883-1-0-v20180911
I0521 18:35:02.149] May 21 17:57:46.004: INFO: Node Info: &Node{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:tmp-node-e2e-eceb20b6-coreos-beta-1883-1-0-v20180911,GenerateName:,Namespace:,SelfLink:/api/v1/nodes/tmp-node-e2e-eceb20b6-coreos-beta-1883-1-0-v20180911,UID:27225a0a-f8a1-423a-87fd-b49e5c8acde3,ResourceVersion:845,Generation:0,CreationTimestamp:2019-05-21 17:23:28 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{beta.kubernetes.io/arch: amd64,beta.kubernetes.io/os: linux,kubernetes.io/arch: amd64,kubernetes.io/hostname: tmp-node-e2e-eceb20b6-coreos-beta-1883-1-0-v20180911,kubernetes.io/os: linux,},Annotations:map[string]string{volumes.kubernetes.io/controller-managed-attach-detach: true,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:NodeSpec{PodCIDR:,DoNotUse_ExternalID:,ProviderID:,Unschedulable:false,Taints:[],ConfigSource:&NodeConfigSource{ConfigMap:&ConfigMapNodeConfigSource{Namespace:kube-system,Name:testcfg-8b4br,UID:,ResourceVersion:,KubeletConfigKey:kubelet,},},},Status:NodeStatus{Capacity:ResourceList{cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{18327040000 0} {<nil>} 17897500Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3875430400 0} {<nil>}  BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{16494335973 0} {<nil>} 16494335973 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3613286400 0} {<nil>}  BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[{MemoryPressure False 2019-05-21 17:57:41 +0000 UTC 2019-05-21 17:23:25 +0000 UTC KubeletHasSufficientMemory kubelet has sufficient memory available} {DiskPressure False 2019-05-21 17:57:41 +0000 UTC 2019-05-21 17:23:25 +0000 UTC KubeletHasNoDiskPressure kubelet has no disk 
pressure} {PIDPressure False 2019-05-21 17:57:41 +0000 UTC 2019-05-21 17:23:25 +0000 UTC KubeletHasSufficientPID kubelet has sufficient PID available} {Ready True 2019-05-21 17:57:41 +0000 UTC 2019-05-21 17:57:28 +0000 UTC KubeletReady kubelet is posting ready status}],Addresses:[{InternalIP 10.138.0.43} {Hostname tmp-node-e2e-eceb20b6-coreos-beta-1883-1-0-v20180911}],DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:710cf1835bf43efaa84b7d55bc700ca9,SystemUUID:710CF183-5BF4-3EFA-A84B-7D55BC700CA9,BootID:554bb999-e92f-41cb-8bd5-cec7d565e71c,KernelVersion:4.14.69-coreos,OSImage:Container Linux by CoreOS 1883.1.0 (Rhyolite),ContainerRuntimeVersion:docker://18.6.1,KubeletVersion:v1.16.0-alpha.0.288+13c11de135833a,KubeProxyVersion:v1.16.0-alpha.0.288+13c11de135833a,OperatingSystem:linux,Architecture:amd64,},Images:[{[gcr.io/kubernetes-e2e-test-images/node-perf/tf-wide-deep-amd64@sha256:80d4564d5ab49ecfea3b20f75cc676d8dfd8b2aca364ed4c1a8a55fbcaaed7f6 gcr.io/kubernetes-e2e-test-images/node-perf/tf-wide-deep-amd64:1.0] 634170972} {[gcr.io/kubernetes-e2e-test-images/volume/gluster@sha256:e2d3308b2d27499d59f120ff46dfc6c4cb307a3f207f02894ecab902583761c9 gcr.io/kubernetes-e2e-test-images/volume/gluster:1.0] 332011484} {[gcr.io/kubernetes-e2e-test-images/volume/nfs@sha256:c2ad734346f608a5f7d69cfded93c4e8094069320657bd372d12ba21dea3ea71 gcr.io/kubernetes-e2e-test-images/volume/nfs:1.0] 225358913} {[k8s.gcr.io/node-problem-detector@sha256:6e9b4a4eaa47f120be61f60573a545844de63401661812e2cfb7ae81a28efd19 k8s.gcr.io/node-problem-detector:v0.6.2] 98707739} {[gcr.io/kubernetes-e2e-test-images/node-perf/npb-is@sha256:9d08dd99565b25af37c990cd4474a4284b27e7ceb3f98328bb481edefedf8aa5 gcr.io/kubernetes-e2e-test-images/node-perf/npb-is:1.0] 96288249} {[gcr.io/kubernetes-e2e-test-images/node-perf/npb-ep@sha256:564314549347619cfcdbe6c7d042a29e133a00e922b37682890fff17ac1a7804 
gcr.io/kubernetes-e2e-test-images/node-perf/npb-ep:1.0] 96286449} {[google/cadvisor@sha256:815386ebbe9a3490f38785ab11bda34ec8dacf4634af77b8912832d4f85dca04 google/cadvisor:latest] 69583040} {[k8s.gcr.io/nvidia-gpu-device-plugin@sha256:08509a36233c5096bb273a492251a9a5ca28558ab36d74007ca2a9d3f0b61e1d] 18976858} {[nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 nginx:1.14-alpine] 16032814} {[gcr.io/kubernetes-e2e-test-images/audit-proxy@sha256:9cf10c6bb871a9a2a45eb1634ecd36cf0e45ec9bd8ae05bf10bef981ac07cc1b gcr.io/kubernetes-e2e-test-images/audit-proxy:1.0] 13222979} {[gcr.io/kubernetes-e2e-test-images/net@sha256:973f47a88f50ccd7800f6ec300e664461e7c011c2da3a33edf32a73dd9ff9c01 gcr.io/kubernetes-e2e-test-images/net:1.0] 11393460} {[gcr.io/kubernetes-e2e-test-images/ipc-utils@sha256:bb127be3a1ecac0516f672a5e223d94fe6021021534ecb7a02a607a63154c3d8 gcr.io/kubernetes-e2e-test-images/ipc-utils:1.0] 10039224} {[gcr.io/kubernetes-e2e-test-images/hostexec@sha256:90dfe59da029f9e536385037bc64e86cd3d6e55bae613ddbe69e554d79b0639d gcr.io/kubernetes-e2e-test-images/hostexec:1.1] 8490662} {[gcr.io/kubernetes-e2e-test-images/nonewprivs@sha256:10066e9039219449fe3c81f38fe01928f87914150768ab81b62a468e51fa7411 gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0] 6757579} {[gcr.io/kubernetes-e2e-test-images/netexec@sha256:203f0e11dde4baf4b08e27de094890eb3447d807c8b3e990b764b799d3a9e8b7 gcr.io/kubernetes-e2e-test-images/netexec:1.1] 6705349} {[gcr.io/kubernetes-e2e-test-images/serve-hostname@sha256:bab70473a6d8ef65a22625dc9a1b0f0452e811530fdbe77e4408523460177ff1 gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1] 5851985} {[gcr.io/kubernetes-e2e-test-images/liveness@sha256:71c3fc838e0637df570497febafa0ee73bf47176dfd43612de5c55a71230674e gcr.io/kubernetes-e2e-test-images/liveness:1.1] 5829944} {[k8s.gcr.io/stress@sha256:f00aa1ddc963a3164aef741aab0fc05074ea96de6cd7e0d10077cf98dd72d594 k8s.gcr.io/stress:v1] 5494760} 
{[gcr.io/kubernetes-e2e-test-images/test-webserver@sha256:7f93d6e32798ff28bc6289254d0c2867fe2c849c8e46edc50f8624734309812e gcr.io/kubernetes-e2e-test-images/test-webserver:1.0] 4732240} {[gcr.io/kubernetes-e2e-test-images/entrypoint-tester@sha256:ba4681b5299884a3adca70fbde40638373b437a881055ffcd0935b5f43eb15c9 gcr.io/kubernetes-e2e-test-images/entrypoint-tester:1.0] 2729534} {[gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 gcr.io/kubernetes-e2e-test-images/mounttest:1.0] 1563521} {[gcr.io/kubernetes-e2e-test-images/mounttest-user@sha256:17319ca525ee003681fccf7e8c6b1b910ff4f49b653d939ac7f9b6e7c463933d gcr.io/kubernetes-e2e-test-images/mounttest-user:1.0] 1450451} {[busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 busybox:1.29] 1154361} {[k8s.gcr.io/busybox@sha256:4bdd623e848417d96127e16037743f0cd8b528c026e9175e22a84f639eca58ff] 1113554} {[k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea k8s.gcr.io/pause:3.1] 742472}],VolumesInUse:[],VolumesAttached:[],Config:&NodeConfigStatus{Assigned:&NodeConfigSource{ConfigMap:&ConfigMapNodeConfigSource{Namespace:kube-system,Name:testcfg-8b4br,UID:6fbd442c-2e71-4142-b1a4-1ad9708d4477,ResourceVersion:833,KubeletConfigKey:kubelet,},},Active:&NodeConfigSource{ConfigMap:&ConfigMapNodeConfigSource{Namespace:kube-system,Name:testcfg-8b4br,UID:6fbd442c-2e71-4142-b1a4-1ad9708d4477,ResourceVersion:833,KubeletConfigKey:kubelet,},},LastKnownGood:nil,Error:,},},}
I0521 18:35:02.149] May 21 17:57:46.004: INFO: 
I0521 18:35:02.150] Logging kubelet events for node tmp-node-e2e-eceb20b6-coreos-beta-1883-1-0-v20180911
I0521 18:35:02.150] May 21 17:57:46.006: INFO: 
I0521 18:35:02.150] Logging pods the kubelet thinks is on node tmp-node-e2e-eceb20b6-coreos-beta-1883-1-0-v20180911
I0521 18:35:02.150] W0521 17:57:46.011300    1331 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
I0521 18:35:02.150] May 21 17:57:46.022: INFO: 
... skipping 8 lines ...
I0521 18:35:02.151] _output/local/go/src/k8s.io/kubernetes/test/e2e_node/framework.go:22
I0521 18:35:02.152]   Run node performance testing with pre-defined workloads
I0521 18:35:02.152]   _output/local/go/src/k8s.io/kubernetes/test/e2e_node/node_perf_test.go:120
I0521 18:35:02.152]     TensorFlow workload [It]
I0521 18:35:02.152]     _output/local/go/src/k8s.io/kubernetes/test/e2e_node/node_perf_test.go:124
I0521 18:35:02.152] 
I0521 18:35:02.152]     Unexpected error:
I0521 18:35:02.152]         <*errors.errorString | 0xc000548e80>: {
I0521 18:35:02.152]             s: "pod ran to completion",
I0521 18:35:02.153]         }
I0521 18:35:02.153]         pod ran to completion
I0521 18:35:02.153]     occurred
I0521 18:35:02.153] 
I0521 18:35:02.153]     /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:112
I0521 18:35:02.153] ------------------------------
I0521 18:35:02.153] SSSSSSSSSSSSSSSSSSSSSSSS
I0521 18:35:02.154] ------------------------------
I0521 18:35:02.154] [sig-storage] Secrets 
I0521 18:35:02.154]   Should fail non-optional pod creation due to the key in the secret object does not exist [Slow]
I0521 18:35:02.154]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:384
I0521 18:35:02.154] [BeforeEach] [sig-storage] Secrets
I0521 18:35:02.154]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149
I0521 18:35:02.154] STEP: Creating a kubernetes client
I0521 18:35:02.155] STEP: Building a namespace api object, basename secrets
I0521 18:35:02.155] May 21 17:57:52.080: INFO: Skipping waiting for service account
I0521 18:35:02.155] [It] Should fail non-optional pod creation because the key in the secret object does not exist [Slow]
I0521 18:35:02.155]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:384
I0521 18:35:02.155] May 21 17:57:52.081: INFO: Couldn't get node TTL annotation (using default value of 0): No TTL annotation found on the node
I0521 18:35:02.155] STEP: Creating secret with name s-test-opt-create-5ad20474-30a7-463c-bf23-1b36a98bff7f
I0521 18:35:02.155] STEP: Creating the pod
I0521 18:35:02.156] [AfterEach] [sig-storage] Secrets
I0521 18:35:02.156]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
... skipping 2 lines ...
I0521 18:35:02.156] May 21 18:03:14.111: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
I0521 18:35:02.156] May 21 18:03:14.153: INFO: namespace secrets-8407 deletion completed in 22.047288341s
I0521 18:35:02.157] 
I0521 18:35:02.157] • [SLOW TEST:322.076 seconds]
I0521 18:35:02.157] [sig-storage] Secrets
I0521 18:35:02.157] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
I0521 18:35:02.157]   Should fail non-optional pod creation because the key in the secret object does not exist [Slow]
I0521 18:35:02.157]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:384
I0521 18:35:02.157] ------------------------------
I0521 18:35:02.158] [sig-storage] GCP Volumes NFSv3 
I0521 18:35:02.158]   should be mountable for NFSv3
I0521 18:35:02.158]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/volumes.go:100
I0521 18:35:02.158] [BeforeEach] [sig-storage] GCP Volumes
... skipping 23 lines ...
I0521 18:35:02.162] 
I0521 18:35:02.162]     /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/volumes.go:66
I0521 18:35:02.163] ------------------------------
I0521 18:35:02.163] SSSSSSSSSSS
I0521 18:35:02.163] ------------------------------
I0521 18:35:02.163] [sig-storage] Projected secret 
I0521 18:35:02.163]   Should fail non-optional pod creation because the secret object does not exist [Slow]
I0521 18:35:02.163]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:409
I0521 18:35:02.164] [BeforeEach] [sig-storage] Projected secret
I0521 18:35:02.164]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149
I0521 18:35:02.164] STEP: Creating a kubernetes client
I0521 18:35:02.164] STEP: Building a namespace api object, basename projected
I0521 18:35:02.164] May 21 18:03:20.211: INFO: Skipping waiting for service account
I0521 18:35:02.165] [It] Should fail non-optional pod creation because the secret object does not exist [Slow]
I0521 18:35:02.165]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:409
I0521 18:35:02.165] May 21 18:03:20.213: INFO: Couldn't get node TTL annotation (using default value of 0): No TTL annotation found on the node
I0521 18:35:02.165] STEP: Creating the pod
I0521 18:35:02.165] [AfterEach] [sig-storage] Projected secret
I0521 18:35:02.165]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
I0521 18:35:02.166] May 21 18:08:20.227: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0521 18:35:02.166] STEP: Destroying namespace "projected-8217" for this suite.
I0521 18:35:02.166] May 21 18:08:36.234: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
I0521 18:35:02.166] May 21 18:08:36.280: INFO: namespace projected-8217 deletion completed in 16.051241422s
I0521 18:35:02.166] 
I0521 18:35:02.166] • [SLOW TEST:316.072 seconds]
I0521 18:35:02.167] [sig-storage] Projected secret
I0521 18:35:02.167] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
I0521 18:35:02.167]   Should fail non-optional pod creation because the secret object does not exist [Slow]
I0521 18:35:02.167]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:409
I0521 18:35:02.167] ------------------------------
I0521 18:35:02.168] SSSSSSSSSSSSSSSSSSSSSSSSS
I0521 18:35:02.168] ------------------------------
I0521 18:35:02.168] [k8s.io] ResourceMetricsAPI when querying /resource/metrics 
I0521 18:35:02.168]   should report resource usage through the v1alpha1 resource metrics api
... skipping 1589 lines ...
I0521 18:35:02.617]     latency/resource should be within limit when create 10 pods with 50 background pods
I0521 18:35:02.617]     _output/local/go/src/k8s.io/kubernetes/test/e2e_node/density_test.go:244
I0521 18:35:02.618] ------------------------------
I0521 18:35:02.618] SSSSSSSS
I0521 18:35:02.618] ------------------------------
I0521 18:35:02.618] [sig-storage] Secrets 
I0521 18:35:02.618]   Should fail non-optional pod creation because the secret object does not exist [Slow]
I0521 18:35:02.618]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:374
I0521 18:35:02.618] [BeforeEach] [sig-storage] Secrets
I0521 18:35:02.619]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149
I0521 18:35:02.619] STEP: Creating a kubernetes client
I0521 18:35:02.619] STEP: Building a namespace api object, basename secrets
I0521 18:35:02.619] May 21 18:15:20.114: INFO: Skipping waiting for service account
I0521 18:35:02.619] [It] Should fail non-optional pod creation because the secret object does not exist [Slow]
I0521 18:35:02.620]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:374
I0521 18:35:02.620] May 21 18:15:20.116: INFO: Couldn't get node TTL annotation (using default value of 0): No TTL annotation found on the node
I0521 18:35:02.620] STEP: Creating the pod
I0521 18:35:02.620] [AfterEach] [sig-storage] Secrets
I0521 18:35:02.620]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
I0521 18:35:02.620] May 21 18:20:20.133: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0521 18:35:02.621] STEP: Destroying namespace "secrets-559" for this suite.
I0521 18:35:02.621] May 21 18:20:42.139: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
I0521 18:35:02.621] May 21 18:20:42.179: INFO: namespace secrets-559 deletion completed in 22.044761801s
I0521 18:35:02.621] 
I0521 18:35:02.621] • [SLOW TEST:322.068 seconds]
I0521 18:35:02.621] [sig-storage] Secrets
I0521 18:35:02.621] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
I0521 18:35:02.622]   Should fail non-optional pod creation because the secret object does not exist [Slow]
I0521 18:35:02.622]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:374
I0521 18:35:02.622] ------------------------------
I0521 18:35:02.622] SSS
I0521 18:35:02.622] ------------------------------
I0521 18:35:02.623] [sig-node] Node Performance Testing [Serial] [Slow] Run node performance testing with pre-defined workloads 
I0521 18:35:02.623]   NAS parallel benchmark (NPB) suite - Integer Sort (IS) workload
... skipping 8 lines ...
I0521 18:35:02.624] [JustBeforeEach] [sig-node] Node Performance Testing [Serial] [Slow]
I0521 18:35:02.624]   _output/local/go/src/k8s.io/kubernetes/test/e2e_node/node_perf_test.go:65
I0521 18:35:02.625] I0521 18:20:42.197232    1331 util.go:221] new configuration has taken effect
I0521 18:35:02.625] [It] NAS parallel benchmark (NPB) suite - Integer Sort (IS) workload
I0521 18:35:02.625]   _output/local/go/src/k8s.io/kubernetes/test/e2e_node/node_perf_test.go:106
I0521 18:35:02.625] STEP: running the workload and waiting for success
I0521 18:35:02.625] May 21 18:20:44.211: INFO: Unexpected error occurred: pod ran to completion
I0521 18:35:02.625] May 21 18:20:44.222: INFO: Waiting for pod npb-is-pod to disappear
I0521 18:35:02.626] May 21 18:20:44.226: INFO: Pod npb-is-pod no longer exists
I0521 18:35:02.626] STEP: running the post test exec from the workload
I0521 18:35:02.626] I0521 18:20:44.245833    1331 util.go:221] new configuration has taken effect
I0521 18:35:02.626] [AfterEach] [sig-node] Node Performance Testing [Serial] [Slow]
I0521 18:35:02.626]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
I0521 18:35:02.626] STEP: Collecting events from namespace "node-performance-testing-1339".
I0521 18:35:02.627] STEP: Found 1 events.
I0521 18:35:02.627] May 21 18:20:44.249: INFO: At 2019-05-21 18:20:42 +0000 UTC - event for npb-is-pod: {kubelet tmp-node-e2e-eceb20b6-coreos-beta-1883-1-0-v20180911} OutOfcpu: Node didn't have enough resource: cpu, requested: 16000, used: 0, capacity: 1000
I0521 18:35:02.627] May 21 18:20:44.250: INFO: POD  NODE  PHASE  GRACE  CONDITIONS
I0521 18:35:02.627] May 21 18:20:44.250: INFO: 
I0521 18:35:02.627] May 21 18:20:44.253: INFO: 
I0521 18:35:02.627] Logging node info for node tmp-node-e2e-eceb20b6-coreos-beta-1883-1-0-v20180911
I0521 18:35:02.634] May 21 18:20:44.254: INFO: Node Info: &Node{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:tmp-node-e2e-eceb20b6-coreos-beta-1883-1-0-v20180911,GenerateName:,Namespace:,SelfLink:/api/v1/nodes/tmp-node-e2e-eceb20b6-coreos-beta-1883-1-0-v20180911,UID:27225a0a-f8a1-423a-87fd-b49e5c8acde3,ResourceVersion:2197,Generation:0,CreationTimestamp:2019-05-21 17:23:28 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{beta.kubernetes.io/arch: amd64,beta.kubernetes.io/os: linux,kubernetes.io/arch: amd64,kubernetes.io/hostname: tmp-node-e2e-eceb20b6-coreos-beta-1883-1-0-v20180911,kubernetes.io/os: linux,},Annotations:map[string]string{volumes.kubernetes.io/controller-managed-attach-detach: true,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:NodeSpec{PodCIDR:,DoNotUse_ExternalID:,ProviderID:,Unschedulable:false,Taints:[],ConfigSource:&NodeConfigSource{ConfigMap:&ConfigMapNodeConfigSource{Namespace:kube-system,Name:testcfg-gwdv5,UID:,ResourceVersion:,KubeletConfigKey:kubelet,},},},Status:NodeStatus{Capacity:ResourceList{cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{18327040000 0} {<nil>} 17897500Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3875430400 0} {<nil>}  BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{16494335973 0} {<nil>} 16494335973 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3613286400 0} {<nil>}  BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[{MemoryPressure False 2019-05-21 18:20:43 +0000 UTC 2019-05-21 17:23:25 +0000 UTC KubeletHasSufficientMemory kubelet has sufficient memory available} {DiskPressure False 2019-05-21 18:20:43 +0000 UTC 2019-05-21 17:23:25 +0000 UTC KubeletHasNoDiskPressure kubelet has no disk 
pressure} {PIDPressure False 2019-05-21 18:20:43 +0000 UTC 2019-05-21 17:23:25 +0000 UTC KubeletHasSufficientPID kubelet has sufficient PID available} {Ready True 2019-05-21 18:20:43 +0000 UTC 2019-05-21 17:57:28 +0000 UTC KubeletReady kubelet is posting ready status}],Addresses:[{InternalIP 10.138.0.43} {Hostname tmp-node-e2e-eceb20b6-coreos-beta-1883-1-0-v20180911}],DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:710cf1835bf43efaa84b7d55bc700ca9,SystemUUID:710CF183-5BF4-3EFA-A84B-7D55BC700CA9,BootID:554bb999-e92f-41cb-8bd5-cec7d565e71c,KernelVersion:4.14.69-coreos,OSImage:Container Linux by CoreOS 1883.1.0 (Rhyolite),ContainerRuntimeVersion:docker://18.6.1,KubeletVersion:v1.16.0-alpha.0.288+13c11de135833a,KubeProxyVersion:v1.16.0-alpha.0.288+13c11de135833a,OperatingSystem:linux,Architecture:amd64,},Images:[{[gcr.io/kubernetes-e2e-test-images/node-perf/tf-wide-deep-amd64@sha256:80d4564d5ab49ecfea3b20f75cc676d8dfd8b2aca364ed4c1a8a55fbcaaed7f6 gcr.io/kubernetes-e2e-test-images/node-perf/tf-wide-deep-amd64:1.0] 634170972} {[gcr.io/kubernetes-e2e-test-images/volume/gluster@sha256:e2d3308b2d27499d59f120ff46dfc6c4cb307a3f207f02894ecab902583761c9 gcr.io/kubernetes-e2e-test-images/volume/gluster:1.0] 332011484} {[gcr.io/kubernetes-e2e-test-images/volume/nfs@sha256:c2ad734346f608a5f7d69cfded93c4e8094069320657bd372d12ba21dea3ea71 gcr.io/kubernetes-e2e-test-images/volume/nfs:1.0] 225358913} {[k8s.gcr.io/node-problem-detector@sha256:6e9b4a4eaa47f120be61f60573a545844de63401661812e2cfb7ae81a28efd19 k8s.gcr.io/node-problem-detector:v0.6.2] 98707739} {[gcr.io/kubernetes-e2e-test-images/node-perf/npb-is@sha256:9d08dd99565b25af37c990cd4474a4284b27e7ceb3f98328bb481edefedf8aa5 gcr.io/kubernetes-e2e-test-images/node-perf/npb-is:1.0] 96288249} {[gcr.io/kubernetes-e2e-test-images/node-perf/npb-ep@sha256:564314549347619cfcdbe6c7d042a29e133a00e922b37682890fff17ac1a7804 
gcr.io/kubernetes-e2e-test-images/node-perf/npb-ep:1.0] 96286449} {[google/cadvisor@sha256:815386ebbe9a3490f38785ab11bda34ec8dacf4634af77b8912832d4f85dca04 google/cadvisor:latest] 69583040} {[k8s.gcr.io/nvidia-gpu-device-plugin@sha256:08509a36233c5096bb273a492251a9a5ca28558ab36d74007ca2a9d3f0b61e1d] 18976858} {[nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 nginx:1.14-alpine] 16032814} {[gcr.io/kubernetes-e2e-test-images/audit-proxy@sha256:9cf10c6bb871a9a2a45eb1634ecd36cf0e45ec9bd8ae05bf10bef981ac07cc1b gcr.io/kubernetes-e2e-test-images/audit-proxy:1.0] 13222979} {[gcr.io/kubernetes-e2e-test-images/net@sha256:973f47a88f50ccd7800f6ec300e664461e7c011c2da3a33edf32a73dd9ff9c01 gcr.io/kubernetes-e2e-test-images/net:1.0] 11393460} {[gcr.io/kubernetes-e2e-test-images/ipc-utils@sha256:bb127be3a1ecac0516f672a5e223d94fe6021021534ecb7a02a607a63154c3d8 gcr.io/kubernetes-e2e-test-images/ipc-utils:1.0] 10039224} {[gcr.io/kubernetes-e2e-test-images/hostexec@sha256:90dfe59da029f9e536385037bc64e86cd3d6e55bae613ddbe69e554d79b0639d gcr.io/kubernetes-e2e-test-images/hostexec:1.1] 8490662} {[gcr.io/kubernetes-e2e-test-images/nonewprivs@sha256:10066e9039219449fe3c81f38fe01928f87914150768ab81b62a468e51fa7411 gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0] 6757579} {[gcr.io/kubernetes-e2e-test-images/netexec@sha256:203f0e11dde4baf4b08e27de094890eb3447d807c8b3e990b764b799d3a9e8b7 gcr.io/kubernetes-e2e-test-images/netexec:1.1] 6705349} {[gcr.io/kubernetes-e2e-test-images/serve-hostname@sha256:bab70473a6d8ef65a22625dc9a1b0f0452e811530fdbe77e4408523460177ff1 gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1] 5851985} {[gcr.io/kubernetes-e2e-test-images/liveness@sha256:71c3fc838e0637df570497febafa0ee73bf47176dfd43612de5c55a71230674e gcr.io/kubernetes-e2e-test-images/liveness:1.1] 5829944} {[k8s.gcr.io/stress@sha256:f00aa1ddc963a3164aef741aab0fc05074ea96de6cd7e0d10077cf98dd72d594 k8s.gcr.io/stress:v1] 5494760} 
{[gcr.io/kubernetes-e2e-test-images/test-webserver@sha256:7f93d6e32798ff28bc6289254d0c2867fe2c849c8e46edc50f8624734309812e gcr.io/kubernetes-e2e-test-images/test-webserver:1.0] 4732240} {[gcr.io/kubernetes-e2e-test-images/entrypoint-tester@sha256:ba4681b5299884a3adca70fbde40638373b437a881055ffcd0935b5f43eb15c9 gcr.io/kubernetes-e2e-test-images/entrypoint-tester:1.0] 2729534} {[gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 gcr.io/kubernetes-e2e-test-images/mounttest:1.0] 1563521} {[gcr.io/kubernetes-e2e-test-images/mounttest-user@sha256:17319ca525ee003681fccf7e8c6b1b910ff4f49b653d939ac7f9b6e7c463933d gcr.io/kubernetes-e2e-test-images/mounttest-user:1.0] 1450451} {[busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 busybox:1.29] 1154361} {[k8s.gcr.io/busybox@sha256:4bdd623e848417d96127e16037743f0cd8b528c026e9175e22a84f639eca58ff] 1113554} {[k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea k8s.gcr.io/pause:3.1] 742472}],VolumesInUse:[],VolumesAttached:[],Config:&NodeConfigStatus{Assigned:&NodeConfigSource{ConfigMap:&ConfigMapNodeConfigSource{Namespace:kube-system,Name:testcfg-8b4br,UID:6fbd442c-2e71-4142-b1a4-1ad9708d4477,ResourceVersion:833,KubeletConfigKey:kubelet,},},Active:&NodeConfigSource{ConfigMap:&ConfigMapNodeConfigSource{Namespace:kube-system,Name:testcfg-8b4br,UID:6fbd442c-2e71-4142-b1a4-1ad9708d4477,ResourceVersion:833,KubeletConfigKey:kubelet,},},LastKnownGood:&NodeConfigSource{ConfigMap:&ConfigMapNodeConfigSource{Namespace:kube-system,Name:testcfg-8b4br,UID:6fbd442c-2e71-4142-b1a4-1ad9708d4477,ResourceVersion:833,KubeletConfigKey:kubelet,},},Error:,},},}
I0521 18:35:02.634] May 21 18:20:44.255: INFO: 
I0521 18:35:02.634] Logging kubelet events for node tmp-node-e2e-eceb20b6-coreos-beta-1883-1-0-v20180911
I0521 18:35:02.635] May 21 18:20:44.256: INFO: 
I0521 18:35:02.635] Logging pods the kubelet thinks are on node tmp-node-e2e-eceb20b6-coreos-beta-1883-1-0-v20180911
I0521 18:35:02.635] W0521 18:20:44.259513    1331 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
I0521 18:35:02.635] May 21 18:20:44.277: INFO: 
... skipping 12 lines ...
I0521 18:35:02.638] _output/local/go/src/k8s.io/kubernetes/test/e2e_node/framework.go:22
I0521 18:35:02.638]   Run node performance testing with pre-defined workloads
I0521 18:35:02.638]   _output/local/go/src/k8s.io/kubernetes/test/e2e_node/node_perf_test.go:102
I0521 18:35:02.638]     NAS parallel benchmark (NPB) suite - Integer Sort (IS) workload [It]
I0521 18:35:02.638]     _output/local/go/src/k8s.io/kubernetes/test/e2e_node/node_perf_test.go:106
I0521 18:35:02.638] 
I0521 18:35:02.639]     Unexpected error:
I0521 18:35:02.639]         <*errors.errorString | 0xc000548e80>: {
I0521 18:35:02.639]             s: "pod ran to completion",
I0521 18:35:02.639]         }
I0521 18:35:02.639]         pod ran to completion
I0521 18:35:02.639]     occurred
I0521 18:35:02.639] 
... skipping 150 lines ...
I0521 18:35:02.683] I0521 18:22:51.358808    1331 util.go:44] Running readiness check for service "kubelet"
I0521 18:35:02.684] I0521 18:22:52.360438    1331 server.go:182] Initial health check passed for service "kubelet"
I0521 18:35:02.684] I0521 18:22:55.729377    1331 util.go:221] new configuration has taken effect
I0521 18:35:02.684] [It] NAS parallel benchmark (NPB) suite - Embarrassingly Parallel (EP) workload
I0521 18:35:02.684]   _output/local/go/src/k8s.io/kubernetes/test/e2e_node/node_perf_test.go:115
I0521 18:35:02.684] STEP: running the workload and waiting for success
I0521 18:35:02.685] May 21 18:22:57.741: INFO: Unexpected error occurred: pod ran to completion
I0521 18:35:02.685] May 21 18:22:57.748: INFO: Waiting for pod npb-ep-pod to disappear
I0521 18:35:02.685] May 21 18:22:57.752: INFO: Pod npb-ep-pod no longer exists
I0521 18:35:02.685] STEP: running the post test exec from the workload
I0521 18:35:02.686] E0521 18:23:02.771626    1331 util.go:268] /configz response status not 200, retrying. Response was: &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Content-Length:[158] Content-Type:[text/plain; charset=utf-8] Date:[Tue, 21 May 2019 18:23:02 GMT]] Body:0xc000785a00 ContentLength:158 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000a38200 TLS:<nil>}
I0521 18:35:02.686] I0521 18:23:03.376886    1331 server.go:222] Restarting server "kubelet" with restart command
I0521 18:35:02.686] I0521 18:23:03.391147    1331 server.go:171] Running health check for service "kubelet"
... skipping 6 lines ...
I0521 18:35:02.687] STEP: Found 1 events.
I0521 18:35:02.687] May 21 18:23:07.787: INFO: At 2019-05-21 18:22:55 +0000 UTC - event for npb-ep-pod: {kubelet tmp-node-e2e-eceb20b6-coreos-beta-1883-1-0-v20180911} OutOfcpu: Node didn't have enough resource: cpu, requested: 15000, used: 0, capacity: 800
I0521 18:35:02.688] May 21 18:23:07.788: INFO: POD  NODE  PHASE  GRACE  CONDITIONS
I0521 18:35:02.688] May 21 18:23:07.788: INFO: 
I0521 18:35:02.688] May 21 18:23:07.791: INFO: 
I0521 18:35:02.688] Logging node info for node tmp-node-e2e-eceb20b6-coreos-beta-1883-1-0-v20180911
I0521 18:35:02.704] May 21 18:23:07.793: INFO: Node Info: &Node{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:tmp-node-e2e-eceb20b6-coreos-beta-1883-1-0-v20180911,GenerateName:,Namespace:,SelfLink:/api/v1/nodes/tmp-node-e2e-eceb20b6-coreos-beta-1883-1-0-v20180911,UID:27225a0a-f8a1-423a-87fd-b49e5c8acde3,ResourceVersion:2437,Generation:0,CreationTimestamp:2019-05-21 17:23:28 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{beta.kubernetes.io/arch: amd64,beta.kubernetes.io/os: linux,kubernetes.io/arch: amd64,kubernetes.io/hostname: tmp-node-e2e-eceb20b6-coreos-beta-1883-1-0-v20180911,kubernetes.io/os: linux,},Annotations:map[string]string{volumes.kubernetes.io/controller-managed-attach-detach: true,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:NodeSpec{PodCIDR:,DoNotUse_ExternalID:,ProviderID:,Unschedulable:false,Taints:[],ConfigSource:&NodeConfigSource{ConfigMap:&ConfigMapNodeConfigSource{Namespace:kube-system,Name:testcfg-ltpvp,UID:,ResourceVersion:,KubeletConfigKey:kubelet,},},},Status:NodeStatus{Capacity:ResourceList{cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{18327040000 0} {<nil>} 17897500Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3875430400 0} {<nil>}  BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{16494335973 0} {<nil>} 16494335973 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3613286400 0} {<nil>}  BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[{MemoryPressure False 2019-05-21 18:23:03 +0000 UTC 2019-05-21 17:23:25 +0000 UTC KubeletHasSufficientMemory kubelet has sufficient memory available} {DiskPressure False 2019-05-21 18:23:03 +0000 UTC 2019-05-21 17:23:25 +0000 UTC KubeletHasNoDiskPressure kubelet has no disk 
pressure} {PIDPressure False 2019-05-21 18:23:03 +0000 UTC 2019-05-21 17:23:25 +0000 UTC KubeletHasSufficientPID kubelet has sufficient PID available} {Ready True 2019-05-21 18:23:03 +0000 UTC 2019-05-21 17:57:28 +0000 UTC KubeletReady kubelet is posting ready status}],Addresses:[{InternalIP 10.138.0.43} {Hostname tmp-node-e2e-eceb20b6-coreos-beta-1883-1-0-v20180911}],DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:710cf1835bf43efaa84b7d55bc700ca9,SystemUUID:710CF183-5BF4-3EFA-A84B-7D55BC700CA9,BootID:554bb999-e92f-41cb-8bd5-cec7d565e71c,KernelVersion:4.14.69-coreos,OSImage:Container Linux by CoreOS 1883.1.0 (Rhyolite),ContainerRuntimeVersion:docker://18.6.1,KubeletVersion:v1.16.0-alpha.0.288+13c11de135833a,KubeProxyVersion:v1.16.0-alpha.0.288+13c11de135833a,OperatingSystem:linux,Architecture:amd64,},Images:[{[gcr.io/kubernetes-e2e-test-images/node-perf/tf-wide-deep-amd64@sha256:80d4564d5ab49ecfea3b20f75cc676d8dfd8b2aca364ed4c1a8a55fbcaaed7f6 gcr.io/kubernetes-e2e-test-images/node-perf/tf-wide-deep-amd64:1.0] 634170972} {[gcr.io/kubernetes-e2e-test-images/volume/gluster@sha256:e2d3308b2d27499d59f120ff46dfc6c4cb307a3f207f02894ecab902583761c9 gcr.io/kubernetes-e2e-test-images/volume/gluster:1.0] 332011484} {[gcr.io/kubernetes-e2e-test-images/volume/nfs@sha256:c2ad734346f608a5f7d69cfded93c4e8094069320657bd372d12ba21dea3ea71 gcr.io/kubernetes-e2e-test-images/volume/nfs:1.0] 225358913} {[k8s.gcr.io/node-problem-detector@sha256:6e9b4a4eaa47f120be61f60573a545844de63401661812e2cfb7ae81a28efd19 k8s.gcr.io/node-problem-detector:v0.6.2] 98707739} {[gcr.io/kubernetes-e2e-test-images/node-perf/npb-is@sha256:9d08dd99565b25af37c990cd4474a4284b27e7ceb3f98328bb481edefedf8aa5 gcr.io/kubernetes-e2e-test-images/node-perf/npb-is:1.0] 96288249} {[gcr.io/kubernetes-e2e-test-images/node-perf/npb-ep@sha256:564314549347619cfcdbe6c7d042a29e133a00e922b37682890fff17ac1a7804 
gcr.io/kubernetes-e2e-test-images/node-perf/npb-ep:1.0] 96286449} {[google/cadvisor@sha256:815386ebbe9a3490f38785ab11bda34ec8dacf4634af77b8912832d4f85dca04 google/cadvisor:latest] 69583040} {[k8s.gcr.io/nvidia-gpu-device-plugin@sha256:08509a36233c5096bb273a492251a9a5ca28558ab36d74007ca2a9d3f0b61e1d] 18976858} {[nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 nginx:1.14-alpine] 16032814} {[gcr.io/kubernetes-e2e-test-images/audit-proxy@sha256:9cf10c6bb871a9a2a45eb1634ecd36cf0e45ec9bd8ae05bf10bef981ac07cc1b gcr.io/kubernetes-e2e-test-images/audit-proxy:1.0] 13222979} {[gcr.io/kubernetes-e2e-test-images/net@sha256:973f47a88f50ccd7800f6ec300e664461e7c011c2da3a33edf32a73dd9ff9c01 gcr.io/kubernetes-e2e-test-images/net:1.0] 11393460} {[gcr.io/kubernetes-e2e-test-images/ipc-utils@sha256:bb127be3a1ecac0516f672a5e223d94fe6021021534ecb7a02a607a63154c3d8 gcr.io/kubernetes-e2e-test-images/ipc-utils:1.0] 10039224} {[gcr.io/kubernetes-e2e-test-images/hostexec@sha256:90dfe59da029f9e536385037bc64e86cd3d6e55bae613ddbe69e554d79b0639d gcr.io/kubernetes-e2e-test-images/hostexec:1.1] 8490662} {[gcr.io/kubernetes-e2e-test-images/nonewprivs@sha256:10066e9039219449fe3c81f38fe01928f87914150768ab81b62a468e51fa7411 gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0] 6757579} {[gcr.io/kubernetes-e2e-test-images/netexec@sha256:203f0e11dde4baf4b08e27de094890eb3447d807c8b3e990b764b799d3a9e8b7 gcr.io/kubernetes-e2e-test-images/netexec:1.1] 6705349} {[gcr.io/kubernetes-e2e-test-images/serve-hostname@sha256:bab70473a6d8ef65a22625dc9a1b0f0452e811530fdbe77e4408523460177ff1 gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1] 5851985} {[gcr.io/kubernetes-e2e-test-images/liveness@sha256:71c3fc838e0637df570497febafa0ee73bf47176dfd43612de5c55a71230674e gcr.io/kubernetes-e2e-test-images/liveness:1.1] 5829944} {[k8s.gcr.io/stress@sha256:f00aa1ddc963a3164aef741aab0fc05074ea96de6cd7e0d10077cf98dd72d594 k8s.gcr.io/stress:v1] 5494760} 
{[gcr.io/kubernetes-e2e-test-images/test-webserver@sha256:7f93d6e32798ff28bc6289254d0c2867fe2c849c8e46edc50f8624734309812e gcr.io/kubernetes-e2e-test-images/test-webserver:1.0] 4732240} {[gcr.io/kubernetes-e2e-test-images/entrypoint-tester@sha256:ba4681b5299884a3adca70fbde40638373b437a881055ffcd0935b5f43eb15c9 gcr.io/kubernetes-e2e-test-images/entrypoint-tester:1.0] 2729534} {[gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 gcr.io/kubernetes-e2e-test-images/mounttest:1.0] 1563521} {[gcr.io/kubernetes-e2e-test-images/mounttest-user@sha256:17319ca525ee003681fccf7e8c6b1b910ff4f49b653d939ac7f9b6e7c463933d gcr.io/kubernetes-e2e-test-images/mounttest-user:1.0] 1450451} {[busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 busybox:1.29] 1154361} {[k8s.gcr.io/busybox@sha256:4bdd623e848417d96127e16037743f0cd8b528c026e9175e22a84f639eca58ff] 1113554} {[k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea k8s.gcr.io/pause:3.1] 742472}],VolumesInUse:[],VolumesAttached:[],Config:&NodeConfigStatus{Assigned:&NodeConfigSource{ConfigMap:&ConfigMapNodeConfigSource{Namespace:kube-system,Name:testcfg-ltpvp,UID:3b6c48e6-61c5-4639-8bd0-dd4c997cf719,ResourceVersion:2425,KubeletConfigKey:kubelet,},},Active:&NodeConfigSource{ConfigMap:&ConfigMapNodeConfigSource{Namespace:kube-system,Name:testcfg-ltpvp,UID:3b6c48e6-61c5-4639-8bd0-dd4c997cf719,ResourceVersion:2425,KubeletConfigKey:kubelet,},},LastKnownGood:&NodeConfigSource{ConfigMap:&ConfigMapNodeConfigSource{Namespace:kube-system,Name:testcfg-8b4br,UID:6fbd442c-2e71-4142-b1a4-1ad9708d4477,ResourceVersion:833,KubeletConfigKey:kubelet,},},Error:,},},}
I0521 18:35:02.707] May 21 18:23:07.793: INFO: 
I0521 18:35:02.707] Logging kubelet events for node tmp-node-e2e-eceb20b6-coreos-beta-1883-1-0-v20180911
I0521 18:35:02.707] May 21 18:23:07.794: INFO: 
I0521 18:35:02.708] Logging pods the kubelet thinks are on node tmp-node-e2e-eceb20b6-coreos-beta-1883-1-0-v20180911
I0521 18:35:02.708] W0521 18:23:07.799735    1331 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
I0521 18:35:02.708] May 21 18:23:07.812: INFO: 
... skipping 8 lines ...
I0521 18:35:02.712] _output/local/go/src/k8s.io/kubernetes/test/e2e_node/framework.go:22
I0521 18:35:02.712]   Run node performance testing with pre-defined workloads
I0521 18:35:02.712]   _output/local/go/src/k8s.io/kubernetes/test/e2e_node/node_perf_test.go:111
I0521 18:35:02.712]     NAS parallel benchmark (NPB) suite - Embarrassingly Parallel (EP) workload [It]
I0521 18:35:02.720]     _output/local/go/src/k8s.io/kubernetes/test/e2e_node/node_perf_test.go:115
I0521 18:35:02.720] 
I0521 18:35:02.721]     Unexpected error:
I0521 18:35:02.721]         <*errors.errorString | 0xc000548e80>: {
I0521 18:35:02.721]             s: "pod ran to completion",
I0521 18:35:02.721]         }
I0521 18:35:02.721]         pod ran to completion
I0521 18:35:02.721]     occurred
I0521 18:35:02.721] 
I0521 18:35:02.721]     /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:112
I0521 18:35:02.722] ------------------------------
I0521 18:35:02.722] SSSSS
I0521 18:35:02.722] ------------------------------
I0521 18:35:02.722] [sig-node] ConfigMap 
I0521 18:35:02.722]   should fail to create ConfigMap with empty key [Conformance]
I0521 18:35:02.722]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:696
I0521 18:35:02.722] [BeforeEach] [sig-node] ConfigMap
I0521 18:35:02.723]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149
I0521 18:35:02.723] STEP: Creating a kubernetes client
I0521 18:35:02.723] STEP: Building a namespace api object, basename configmap
I0521 18:35:02.723] May 21 18:23:13.876: INFO: Skipping waiting for service account
I0521 18:35:02.725] [It] should fail to create ConfigMap with empty key [Conformance]
I0521 18:35:02.726]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:696
I0521 18:35:02.726] STEP: Creating configMap that has name configmap-test-emptyKey-ffa5255d-d7ff-4ffe-a98e-a60c774954e9
I0521 18:35:02.726] [AfterEach] [sig-node] ConfigMap
I0521 18:35:02.726]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
I0521 18:35:02.726] May 21 18:23:13.928: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0521 18:35:02.726] STEP: Destroying namespace "configmap-8927" for this suite.
I0521 18:35:02.727] May 21 18:23:19.940: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
I0521 18:35:02.727] May 21 18:23:19.991: INFO: namespace configmap-8927 deletion completed in 6.059716118s
I0521 18:35:02.727] 
I0521 18:35:02.727] • [SLOW TEST:6.118 seconds]
I0521 18:35:02.727] [sig-node] ConfigMap
I0521 18:35:02.727] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:32
I0521 18:35:02.727]   should fail to create ConfigMap with empty key [Conformance]
I0521 18:35:02.728]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:696
I0521 18:35:02.728] ------------------------------
I0521 18:35:02.728] SSSSS
I0521 18:35:02.728] ------------------------------
I0521 18:35:02.728] [sig-node] RuntimeClass 
I0521 18:35:02.728]   should reject a Pod requesting a deleted RuntimeClass
... skipping 59 lines ...
I0521 18:35:02.740]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:696
I0521 18:35:02.740] STEP: Creating Pod
I0521 18:35:02.740] STEP: Waiting for the pod running
I0521 18:35:02.740] STEP: Getting the pod
I0521 18:35:02.740] STEP: Reading file content from the nginx-container
I0521 18:35:02.740] May 21 18:24:02.169: INFO: Running ' --server=http://127.0.0.1:8080 exec pod-sharedvolume-0ac49b1d-8440-4922-a071-5d60b13189ca -c busybox-main-container --namespace=emptydir-475 -- cat /usr/share/volumeshare/shareddata.txt'
I0521 18:35:02.741] May 21 18:24:02.169: INFO: Unexpected error occurred: error starting &{ [ --server=http://127.0.0.1:8080 exec pod-sharedvolume-0ac49b1d-8440-4922-a071-5d60b13189ca -c busybox-main-container --namespace=emptydir-475 -- cat /usr/share/volumeshare/shareddata.txt] []  <nil>   [] <nil> <nil> <nil> <nil> <nil> false [0xc000504d28 0xc000504db0 0xc000504e38] [0xc000504d28 0xc000504db0 0xc000504e38] [0xc000504da8 0xc000504e20] [0xef22d0 0xef22d0] <nil> <nil>}:
I0521 18:35:02.741] Command stdout:
I0521 18:35:02.741] 
I0521 18:35:02.741] stderr:
I0521 18:35:02.741] 
I0521 18:35:02.741] error:
I0521 18:35:02.741] fork/exec : no such file or directory
I0521 18:35:02.742] [AfterEach] [sig-storage] EmptyDir volumes
I0521 18:35:02.742]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
I0521 18:35:02.742] STEP: Collecting events from namespace "emptydir-475".
I0521 18:35:02.742] STEP: Found 6 events.
I0521 18:35:02.742] May 21 18:24:02.171: INFO: At 2019-05-21 18:24:00 +0000 UTC - event for pod-sharedvolume-0ac49b1d-8440-4922-a071-5d60b13189ca: {kubelet tmp-node-e2e-eceb20b6-coreos-beta-1883-1-0-v20180911} Pulled: Container image "docker.io/library/busybox:1.29" already present on machine
... skipping 4 lines ...
I0521 18:35:02.744] May 21 18:24:02.171: INFO: At 2019-05-21 18:24:01 +0000 UTC - event for pod-sharedvolume-0ac49b1d-8440-4922-a071-5d60b13189ca: {kubelet tmp-node-e2e-eceb20b6-coreos-beta-1883-1-0-v20180911} Started: Started container busybox-sub-container
I0521 18:35:02.744] May 21 18:24:02.173: INFO: POD                                                    NODE                                                  PHASE    GRACE  CONDITIONS
I0521 18:35:02.745] May 21 18:24:02.173: INFO: pod-sharedvolume-0ac49b1d-8440-4922-a071-5d60b13189ca  tmp-node-e2e-eceb20b6-coreos-beta-1883-1-0-v20180911  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-05-21 18:24:00 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-05-21 18:24:00 +0000 UTC ContainersNotReady containers with unready status: [busybox-sub-container]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-05-21 18:24:00 +0000 UTC ContainersNotReady containers with unready status: [busybox-sub-container]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-05-21 18:24:00 +0000 UTC  }]
I0521 18:35:02.745] May 21 18:24:02.173: INFO: 
I0521 18:35:02.745] May 21 18:24:02.176: INFO: 
I0521 18:35:02.745] Logging node info for node tmp-node-e2e-eceb20b6-coreos-beta-1883-1-0-v20180911
I0521 18:35:02.751] May 21 18:24:02.177: INFO: Node Info: &Node{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:tmp-node-e2e-eceb20b6-coreos-beta-1883-1-0-v20180911,GenerateName:,Namespace:,SelfLink:/api/v1/nodes/tmp-node-e2e-eceb20b6-coreos-beta-1883-1-0-v20180911,UID:27225a0a-f8a1-423a-87fd-b49e5c8acde3,ResourceVersion:2437,Generation:0,CreationTimestamp:2019-05-21 17:23:28 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{beta.kubernetes.io/arch: amd64,beta.kubernetes.io/os: linux,kubernetes.io/arch: amd64,kubernetes.io/hostname: tmp-node-e2e-eceb20b6-coreos-beta-1883-1-0-v20180911,kubernetes.io/os: linux,},Annotations:map[string]string{volumes.kubernetes.io/controller-managed-attach-detach: true,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:NodeSpec{PodCIDR:,DoNotUse_ExternalID:,ProviderID:,Unschedulable:false,Taints:[],ConfigSource:&NodeConfigSource{ConfigMap:&ConfigMapNodeConfigSource{Namespace:kube-system,Name:testcfg-ltpvp,UID:,ResourceVersion:,KubeletConfigKey:kubelet,},},},Status:NodeStatus{Capacity:ResourceList{cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{18327040000 0} {<nil>} 17897500Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3875430400 0} {<nil>}  BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{16494335973 0} {<nil>} 16494335973 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3613286400 0} {<nil>}  BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[{MemoryPressure False 2019-05-21 18:23:03 +0000 UTC 2019-05-21 17:23:25 +0000 UTC KubeletHasSufficientMemory kubelet has sufficient memory available} {DiskPressure False 2019-05-21 18:23:03 +0000 UTC 2019-05-21 17:23:25 +0000 UTC KubeletHasNoDiskPressure kubelet has no disk 
pressure} {PIDPressure False 2019-05-21 18:23:03 +0000 UTC 2019-05-21 17:23:25 +0000 UTC KubeletHasSufficientPID kubelet has sufficient PID available} {Ready True 2019-05-21 18:23:03 +0000 UTC 2019-05-21 17:57:28 +0000 UTC KubeletReady kubelet is posting ready status}],Addresses:[{InternalIP 10.138.0.43} {Hostname tmp-node-e2e-eceb20b6-coreos-beta-1883-1-0-v20180911}],DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:710cf1835bf43efaa84b7d55bc700ca9,SystemUUID:710CF183-5BF4-3EFA-A84B-7D55BC700CA9,BootID:554bb999-e92f-41cb-8bd5-cec7d565e71c,KernelVersion:4.14.69-coreos,OSImage:Container Linux by CoreOS 1883.1.0 (Rhyolite),ContainerRuntimeVersion:docker://18.6.1,KubeletVersion:v1.16.0-alpha.0.288+13c11de135833a,KubeProxyVersion:v1.16.0-alpha.0.288+13c11de135833a,OperatingSystem:linux,Architecture:amd64,},Images:[{[gcr.io/kubernetes-e2e-test-images/node-perf/tf-wide-deep-amd64@sha256:80d4564d5ab49ecfea3b20f75cc676d8dfd8b2aca364ed4c1a8a55fbcaaed7f6 gcr.io/kubernetes-e2e-test-images/node-perf/tf-wide-deep-amd64:1.0] 634170972} {[gcr.io/kubernetes-e2e-test-images/volume/gluster@sha256:e2d3308b2d27499d59f120ff46dfc6c4cb307a3f207f02894ecab902583761c9 gcr.io/kubernetes-e2e-test-images/volume/gluster:1.0] 332011484} {[gcr.io/kubernetes-e2e-test-images/volume/nfs@sha256:c2ad734346f608a5f7d69cfded93c4e8094069320657bd372d12ba21dea3ea71 gcr.io/kubernetes-e2e-test-images/volume/nfs:1.0] 225358913} {[k8s.gcr.io/node-problem-detector@sha256:6e9b4a4eaa47f120be61f60573a545844de63401661812e2cfb7ae81a28efd19 k8s.gcr.io/node-problem-detector:v0.6.2] 98707739} {[gcr.io/kubernetes-e2e-test-images/node-perf/npb-is@sha256:9d08dd99565b25af37c990cd4474a4284b27e7ceb3f98328bb481edefedf8aa5 gcr.io/kubernetes-e2e-test-images/node-perf/npb-is:1.0] 96288249} {[gcr.io/kubernetes-e2e-test-images/node-perf/npb-ep@sha256:564314549347619cfcdbe6c7d042a29e133a00e922b37682890fff17ac1a7804 
gcr.io/kubernetes-e2e-test-images/node-perf/npb-ep:1.0] 96286449} {[google/cadvisor@sha256:815386ebbe9a3490f38785ab11bda34ec8dacf4634af77b8912832d4f85dca04 google/cadvisor:latest] 69583040} {[k8s.gcr.io/nvidia-gpu-device-plugin@sha256:08509a36233c5096bb273a492251a9a5ca28558ab36d74007ca2a9d3f0b61e1d] 18976858} {[nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 nginx:1.14-alpine] 16032814} {[gcr.io/kubernetes-e2e-test-images/audit-proxy@sha256:9cf10c6bb871a9a2a45eb1634ecd36cf0e45ec9bd8ae05bf10bef981ac07cc1b gcr.io/kubernetes-e2e-test-images/audit-proxy:1.0] 13222979} {[gcr.io/kubernetes-e2e-test-images/net@sha256:973f47a88f50ccd7800f6ec300e664461e7c011c2da3a33edf32a73dd9ff9c01 gcr.io/kubernetes-e2e-test-images/net:1.0] 11393460} {[gcr.io/kubernetes-e2e-test-images/ipc-utils@sha256:bb127be3a1ecac0516f672a5e223d94fe6021021534ecb7a02a607a63154c3d8 gcr.io/kubernetes-e2e-test-images/ipc-utils:1.0] 10039224} {[gcr.io/kubernetes-e2e-test-images/hostexec@sha256:90dfe59da029f9e536385037bc64e86cd3d6e55bae613ddbe69e554d79b0639d gcr.io/kubernetes-e2e-test-images/hostexec:1.1] 8490662} {[gcr.io/kubernetes-e2e-test-images/nonewprivs@sha256:10066e9039219449fe3c81f38fe01928f87914150768ab81b62a468e51fa7411 gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0] 6757579} {[gcr.io/kubernetes-e2e-test-images/netexec@sha256:203f0e11dde4baf4b08e27de094890eb3447d807c8b3e990b764b799d3a9e8b7 gcr.io/kubernetes-e2e-test-images/netexec:1.1] 6705349} {[gcr.io/kubernetes-e2e-test-images/serve-hostname@sha256:bab70473a6d8ef65a22625dc9a1b0f0452e811530fdbe77e4408523460177ff1 gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1] 5851985} {[gcr.io/kubernetes-e2e-test-images/liveness@sha256:71c3fc838e0637df570497febafa0ee73bf47176dfd43612de5c55a71230674e gcr.io/kubernetes-e2e-test-images/liveness:1.1] 5829944} {[k8s.gcr.io/stress@sha256:f00aa1ddc963a3164aef741aab0fc05074ea96de6cd7e0d10077cf98dd72d594 k8s.gcr.io/stress:v1] 5494760} 
{[gcr.io/kubernetes-e2e-test-images/test-webserver@sha256:7f93d6e32798ff28bc6289254d0c2867fe2c849c8e46edc50f8624734309812e gcr.io/kubernetes-e2e-test-images/test-webserver:1.0] 4732240} {[gcr.io/kubernetes-e2e-test-images/entrypoint-tester@sha256:ba4681b5299884a3adca70fbde40638373b437a881055ffcd0935b5f43eb15c9 gcr.io/kubernetes-e2e-test-images/entrypoint-tester:1.0] 2729534} {[gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 gcr.io/kubernetes-e2e-test-images/mounttest:1.0] 1563521} {[gcr.io/kubernetes-e2e-test-images/mounttest-user@sha256:17319ca525ee003681fccf7e8c6b1b910ff4f49b653d939ac7f9b6e7c463933d gcr.io/kubernetes-e2e-test-images/mounttest-user:1.0] 1450451} {[busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 busybox:1.29] 1154361} {[k8s.gcr.io/busybox@sha256:4bdd623e848417d96127e16037743f0cd8b528c026e9175e22a84f639eca58ff] 1113554} {[k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea k8s.gcr.io/pause:3.1] 742472}],VolumesInUse:[],VolumesAttached:[],Config:&NodeConfigStatus{Assigned:&NodeConfigSource{ConfigMap:&ConfigMapNodeConfigSource{Namespace:kube-system,Name:testcfg-ltpvp,UID:3b6c48e6-61c5-4639-8bd0-dd4c997cf719,ResourceVersion:2425,KubeletConfigKey:kubelet,},},Active:&NodeConfigSource{ConfigMap:&ConfigMapNodeConfigSource{Namespace:kube-system,Name:testcfg-ltpvp,UID:3b6c48e6-61c5-4639-8bd0-dd4c997cf719,ResourceVersion:2425,KubeletConfigKey:kubelet,},},LastKnownGood:&NodeConfigSource{ConfigMap:&ConfigMapNodeConfigSource{Namespace:kube-system,Name:testcfg-8b4br,UID:6fbd442c-2e71-4142-b1a4-1ad9708d4477,ResourceVersion:833,KubeletConfigKey:kubelet,},},Error:,},},}
I0521 18:35:02.751] May 21 18:24:02.178: INFO: 
I0521 18:35:02.752] Logging kubelet events for node tmp-node-e2e-eceb20b6-coreos-beta-1883-1-0-v20180911
I0521 18:35:02.752] May 21 18:24:02.179: INFO: 
I0521 18:35:02.752] Logging pods the kubelet thinks are on node tmp-node-e2e-eceb20b6-coreos-beta-1883-1-0-v20180911
I0521 18:35:02.752] May 21 18:24:02.181: INFO: pod-sharedvolume-0ac49b1d-8440-4922-a071-5d60b13189ca started at 2019-05-21 18:24:00 +0000 UTC (0+2 container statuses recorded)
I0521 18:35:02.752] May 21 18:24:02.181: INFO: 	Container busybox-main-container ready: true, restart count 0
... skipping 9 lines ...
I0521 18:35:02.754] • Failure [8.105 seconds]
I0521 18:35:02.754] [sig-storage] EmptyDir volumes
I0521 18:35:02.754] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
I0521 18:35:02.754]   pod should support shared volumes between containers [Conformance] [It]
I0521 18:35:02.754]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:696
I0521 18:35:02.755] 
I0521 18:35:02.755]   Unexpected error:
I0521 18:35:02.755]       <*errors.errorString | 0xc000cb33d0>: {
I0521 18:35:02.755]           s: "error starting &{ [ --server=http://127.0.0.1:8080 exec pod-sharedvolume-0ac49b1d-8440-4922-a071-5d60b13189ca -c busybox-main-container --namespace=emptydir-475 -- cat /usr/share/volumeshare/shareddata.txt] []  <nil>   [] <nil> <nil> <nil> <nil> <nil> false [0xc000504d28 0xc000504db0 0xc000504e38] [0xc000504d28 0xc000504db0 0xc000504e38] [0xc000504da8 0xc000504e20] [0xef22d0 0xef22d0] <nil> <nil>}:\nCommand stdout:\n\nstderr:\n\nerror:\nfork/exec : no such file or directory",
I0521 18:35:02.755]       }
I0521 18:35:02.756]       error starting &{ [ --server=http://127.0.0.1:8080 exec pod-sharedvolume-0ac49b1d-8440-4922-a071-5d60b13189ca -c busybox-main-container --namespace=emptydir-475 -- cat /usr/share/volumeshare/shareddata.txt] []  <nil>   [] <nil> <nil> <nil> <nil> <nil> false [0xc000504d28 0xc000504db0 0xc000504e38] [0xc000504d28 0xc000504db0 0xc000504e38] [0xc000504da8 0xc000504e20] [0xef22d0 0xef22d0] <nil> <nil>}:
I0521 18:35:02.756]       Command stdout:
I0521 18:35:02.756]       
I0521 18:35:02.756]       stderr:
I0521 18:35:02.756]       
I0521 18:35:02.756]       error:
I0521 18:35:02.757]       fork/exec : no such file or directory
I0521 18:35:02.757]   occurred
I0521 18:35:02.757] 
I0521 18:35:02.757]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2218
I0521 18:35:02.757] ------------------------------
I0521 18:35:02.757] SSSS
... skipping 116 lines ...
I0521 18:35:02.779] 
I0521 18:35:02.779]     Only supported for node OS distro [gci ubuntu custom] (not )
I0521 18:35:02.779] 
I0521 18:35:02.780]     /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/volumes.go:66
I0521 18:35:02.780] ------------------------------
I0521 18:35:02.780] [sig-storage] ConfigMap 
I0521 18:35:02.780]   Should fail non-optional pod creation due to configMap object does not exist [Slow]
I0521 18:35:02.780]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:550
I0521 18:35:02.780] [BeforeEach] [sig-storage] ConfigMap
I0521 18:35:02.780]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149
I0521 18:35:02.781] STEP: Creating a kubernetes client
I0521 18:35:02.781] STEP: Building a namespace api object, basename configmap
I0521 18:35:02.781] May 21 18:25:16.382: INFO: Skipping waiting for service account
I0521 18:35:02.781] [It] Should fail non-optional pod creation due to configMap object does not exist [Slow]
I0521 18:35:02.781]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:550
I0521 18:35:02.781] May 21 18:25:16.383: INFO: Couldn't get node TTL annotation (using default value of 0): No TTL annotation found on the node
I0521 18:35:02.782] STEP: Creating the pod
I0521 18:35:02.782] [AfterEach] [sig-storage] ConfigMap
I0521 18:35:02.782]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
I0521 18:35:02.782] May 21 18:30:16.401: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0521 18:35:02.782] STEP: Destroying namespace "configmap-5624" for this suite.
I0521 18:35:02.782] May 21 18:30:38.407: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
I0521 18:35:02.783] May 21 18:30:38.451: INFO: namespace configmap-5624 deletion completed in 22.048547209s
I0521 18:35:02.783] 
I0521 18:35:02.783] • [SLOW TEST:322.072 seconds]
I0521 18:35:02.783] [sig-storage] ConfigMap
I0521 18:35:02.783] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
I0521 18:35:02.783]   Should fail non-optional pod creation due to configMap object does not exist [Slow]
I0521 18:35:02.784]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:550
I0521 18:35:02.784] ------------------------------
I0521 18:35:02.784] [k8s.io] NodeLease when the NodeLease feature is enabled 
I0521 18:35:02.784]   the kubelet should create and update a lease in the kube-node-lease namespace
I0521 18:35:02.784]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/node_lease.go:49
I0521 18:35:02.784] [BeforeEach] [k8s.io] NodeLease
... skipping 56 lines ...
I0521 18:35:02.796] I0521 18:34:52.859446    1331 server.go:257] Kill server "services"
I0521 18:35:02.796] I0521 18:34:52.859454    1331 server.go:294] Killing process 2047 (services) with -TERM
I0521 18:35:02.796] I0521 18:34:52.938577    1331 server.go:257] Kill server "kubelet"
I0521 18:35:02.796] I0521 18:34:52.947200    1331 services.go:148] Fetching log files...
I0521 18:35:02.796] I0521 18:34:52.947299    1331 services.go:157] Get log file "kern.log" with journalctl command [-k].
I0521 18:35:02.797] I0521 18:34:52.987913    1331 services.go:157] Get log file "cloud-init.log" with journalctl command [-u cloud*].
I0521 18:35:02.797] E0521 18:34:52.992432    1331 services.go:160] failed to get "cloud-init.log" from journald: Failed to add filter for units: No data available
I0521 18:35:02.797] , exit status 1
I0521 18:35:02.797] I0521 18:34:52.992470    1331 services.go:157] Get log file "docker.log" with journalctl command [-u docker].
I0521 18:35:02.797] I0521 18:34:52.999879    1331 services.go:157] Get log file "kubelet.log" with journalctl command [-u kubelet-20190521T172155.service].
I0521 18:35:02.798] I0521 18:34:53.075661    1331 e2e_node_suite_test.go:191] Tests Finished
I0521 18:35:02.798] 
I0521 18:35:02.798] 
I0521 18:35:02.798] Summarizing 4 Failures:
I0521 18:35:02.798] 
I0521 18:35:02.798] [Fail] [sig-node] Node Performance Testing [Serial] [Slow] Run node performance testing with pre-defined workloads [It] TensorFlow workload 
I0521 18:35:02.798] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:112
I0521 18:35:02.798] 
I0521 18:35:02.799] [Fail] [sig-node] Node Performance Testing [Serial] [Slow] Run node performance testing with pre-defined workloads [It] NAS parallel benchmark (NPB) suite - Integer Sort (IS) workload 
I0521 18:35:02.799] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:112
I0521 18:35:02.799] 
I0521 18:35:02.799] [Fail] [sig-node] Node Performance Testing [Serial] [Slow] Run node performance testing with pre-defined workloads [It] NAS parallel benchmark (NPB) suite - Embarrassingly Parallel (EP) workload 
I0521 18:35:02.800] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:112
I0521 18:35:02.800] 
I0521 18:35:02.800] [Fail] [sig-storage] EmptyDir volumes [It] pod should support shared volumes between containers [Conformance] 
I0521 18:35:02.800] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2218
I0521 18:35:02.800] 
I0521 18:35:02.800] Ran 25 of 301 Specs in 4363.158 seconds
I0521 18:35:02.800] FAIL! -- 21 Passed | 4 Failed | 0 Pending | 276 Skipped
I0521 18:35:02.800] --- FAIL: TestE2eNode (4363.18s)
I0521 18:35:02.801] FAIL
I0521 18:35:02.801] 
I0521 18:35:02.801] Ginkgo ran 1 suite in 1h12m43.571883048s
I0521 18:35:02.801] Test Suite Failed
I0521 18:35:02.801] 
I0521 18:35:02.801] Failure Finished Test Suite on Host tmp-node-e2e-eceb20b6-coreos-beta-1883-1-0-v20180911
I0521 18:35:02.802] command [ssh -o UserKnownHostsFile=/dev/null -o IdentitiesOnly=yes -o CheckHostIP=no -o StrictHostKeyChecking=no -o ServerAliveInterval=30 -o LogLevel=ERROR -i /workspace/.ssh/google_compute_engine prow@35.247.77.81 -- sudo sh -c 'cd /tmp/node-e2e-20190521T172155 && timeout -k 30s 18000.000000s ./ginkgo --nodes=1 --skip="\[Flaky\]|\[NodeConformance\]|\[NodeFeature:.+\]|\[NodeSpecialFeature:.+\]|\[NodeAlphaFeature:.+\]|\[Legacy:.+\]|\[Benchmark\]" ./e2e_node.test -- --system-spec-name= --system-spec-file= --extra-envs= --logtostderr --v 4 --node-name=tmp-node-e2e-eceb20b6-coreos-beta-1883-1-0-v20180911 --report-dir=/tmp/node-e2e-20190521T172155/results --report-prefix=coreos-beta --image-description="coreos-beta-1883-1-0-v20180911" --kubelet-flags="--cgroups-per-qos=true --cgroup-root=/"'] failed with error: exit status 1
I0521 18:35:02.802] <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<
I0521 18:35:02.802] <                              FINISH TEST                               <
I0521 18:35:02.802] <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<
I0521 18:35:02.803] 
I0521 18:35:02.803] Failure: 4 errors encountered.
W0521 18:35:02.903] exit status 1
W0521 18:35:03.050] 2019/05/21 18:35:03 process.go:155: Step 'go run /go/src/k8s.io/kubernetes/test/e2e_node/runner/remote/run_remote.go --cleanup --logtostderr --vmodule=*=4 --ssh-env=gce --results-dir=/workspace/_artifacts --project=k8s-jkns-ci-node-e2e --zone=us-west1-b --ssh-user=prow --ssh-key=/workspace/.ssh/google_compute_engine --ginkgo-flags=--nodes=1 --skip="\[Flaky\]|\[NodeConformance\]|\[NodeFeature:.+\]|\[NodeSpecialFeature:.+\]|\[NodeAlphaFeature:.+\]|\[Legacy:.+\]|\[Benchmark\]" --test_args=--kubelet-flags="--cgroups-per-qos=true --cgroup-root=/" --test-timeout=5h0m0s --image-config-file=/workspace/test-infra/jobs/e2e_node/image-config.yaml' finished in 1h21m12.91632891s
W0521 18:35:03.050] 2019/05/21 18:35:03 node.go:42: Noop - Node DumpClusterLogs() - /workspace/_artifacts: 
W0521 18:35:03.052] 2019/05/21 18:35:03 node.go:52: Noop - Node Down()
W0521 18:35:03.052] 2019/05/21 18:35:03 process.go:96: Saved XML output to /workspace/_artifacts/junit_runner.xml.
W0521 18:35:03.053] 2019/05/21 18:35:03 process.go:153: Running: bash -c . hack/lib/version.sh && KUBE_ROOT=. kube::version::get_version_vars && echo "${KUBE_GIT_VERSION-}"
W0521 18:35:11.529] 2019/05/21 18:35:11 process.go:155: Step 'bash -c . hack/lib/version.sh && KUBE_ROOT=. kube::version::get_version_vars && echo "${KUBE_GIT_VERSION-}"' finished in 8.479715703s
W0521 18:35:11.530] 2019/05/21 18:35:11 main.go:314: Something went wrong: encountered 1 errors: [error during go run /go/src/k8s.io/kubernetes/test/e2e_node/runner/remote/run_remote.go --cleanup --logtostderr --vmodule=*=4 --ssh-env=gce --results-dir=/workspace/_artifacts --project=k8s-jkns-ci-node-e2e --zone=us-west1-b --ssh-user=prow --ssh-key=/workspace/.ssh/google_compute_engine --ginkgo-flags=--nodes=1 --skip="\[Flaky\]|\[NodeConformance\]|\[NodeFeature:.+\]|\[NodeSpecialFeature:.+\]|\[NodeAlphaFeature:.+\]|\[Legacy:.+\]|\[Benchmark\]" --test_args=--kubelet-flags="--cgroups-per-qos=true --cgroup-root=/" --test-timeout=5h0m0s --image-config-file=/workspace/test-infra/jobs/e2e_node/image-config.yaml: exit status 1]
W0521 18:35:11.533] Traceback (most recent call last):
W0521 18:35:11.534]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_e2e.py", line 778, in <module>
W0521 18:35:11.554]     main(parse_args())
W0521 18:35:11.554]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_e2e.py", line 626, in main
W0521 18:35:11.554]     mode.start(runner_args)
W0521 18:35:11.554]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_e2e.py", line 262, in start
W0521 18:35:11.554]     check_env(env, self.command, *args)
W0521 18:35:11.555]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_e2e.py", line 111, in check_env
W0521 18:35:11.555]     subprocess.check_call(cmd, env=env)
W0521 18:35:11.555]   File "/usr/lib/python2.7/subprocess.py", line 186, in check_call
W0521 18:35:11.555]     raise CalledProcessError(retcode, cmd)
W0521 18:35:11.556] subprocess.CalledProcessError: Command '('kubetest', '--dump=/workspace/_artifacts', '--gcp-service-account=/etc/service-account/service-account.json', '--up', '--down', '--test', '--deployment=node', '--provider=gce', '--cluster=bootstrap-e2e', '--gcp-network=bootstrap-e2e', '--gcp-project=k8s-jkns-ci-node-e2e', '--gcp-zone=us-west1-b', '--node-args=--image-config-file=/workspace/test-infra/jobs/e2e_node/image-config.yaml', '--node-test-args=--kubelet-flags="--cgroups-per-qos=true --cgroup-root=/"', '--node-tests=true', '--test_args=--nodes=1 --skip="\\[Flaky\\]|\\[NodeConformance\\]|\\[NodeFeature:.+\\]|\\[NodeSpecialFeature:.+\\]|\\[NodeAlphaFeature:.+\\]|\\[Legacy:.+\\]|\\[Benchmark\\]"', '--timeout=300m')' returned non-zero exit status 1
E0521 18:35:11.566] Command failed
I0521 18:35:11.567] process 309 exited with code 1 after 81.4m
E0521 18:35:11.567] FAIL: ci-kubernetes-node-kubelet-orphans
I0521 18:35:11.568] Call:  gcloud auth activate-service-account --key-file=/etc/service-account/service-account.json
W0521 18:35:12.487] Activated service account credentials for: [pr-kubekins@kubernetes-jenkins-pull.iam.gserviceaccount.com]
I0521 18:35:12.552] process 38026 exited with code 0 after 0.0m
I0521 18:35:12.552] Call:  gcloud config get-value account
I0521 18:35:12.997] process 38038 exited with code 0 after 0.0m
I0521 18:35:12.997] Will upload results to gs://kubernetes-jenkins/logs using pr-kubekins@kubernetes-jenkins-pull.iam.gserviceaccount.com
I0521 18:35:12.998] Upload result and artifacts...
I0521 18:35:12.998] Gubernator results at https://gubernator.k8s.io/build/kubernetes-jenkins/logs/ci-kubernetes-node-kubelet-orphans/1130883922029187073
I0521 18:35:12.999] Call:  gsutil ls gs://kubernetes-jenkins/logs/ci-kubernetes-node-kubelet-orphans/1130883922029187073/artifacts
W0521 18:35:14.478] CommandException: One or more URLs matched no objects.
E0521 18:35:14.848] Command failed
I0521 18:35:14.848] process 38050 exited with code 1 after 0.0m
W0521 18:35:14.848] Remote dir gs://kubernetes-jenkins/logs/ci-kubernetes-node-kubelet-orphans/1130883922029187073/artifacts not exist yet
I0521 18:35:14.849] Call:  gsutil -m -q -o GSUtil:use_magicfile=True cp -r -c -z log,txt,xml /workspace/_artifacts gs://kubernetes-jenkins/logs/ci-kubernetes-node-kubelet-orphans/1130883922029187073/artifacts
I0521 18:35:19.900] process 38192 exited with code 0 after 0.1m
I0521 18:35:19.901] Call:  git rev-parse HEAD
I0521 18:35:19.905] process 38863 exited with code 0 after 0.0m
... skipping 13 lines ...