PR draveness: feat: update taint nodes by condition to GA
Result FAILURE
Tests 1 failed / 867 succeeded
Started 2019-09-17 16:01
Elapsed 8m1s
Revision 765f2dc1d8e0ee62310a69f451f3a46e6b0a7d5a
Refs 82703

Test Failures


//pkg/kubelet:go_default_test 0.00s

bazel test //pkg/kubelet:go_default_test
exec ${PAGER:-/usr/bin/less} "$0" || exit 1
Executing tests from //pkg/kubelet:go_default_test
-----------------------------------------------------------------------------
I0917 16:08:15.503917      17 kubelet_node_status.go:289] Controller attach/detach is disabled for this node; Kubelet will attach and detach volumes
I0917 16:08:15.504845      17 kubelet_node_status.go:289] Controller attach/detach is disabled for this node; Kubelet will attach and detach volumes
E0917 16:08:15.505425      17 eviction_manager.go:246] eviction manager: failed to get summary stats: failed to get root cgroup stats: failed to get cgroup stats for "/": unexpected number of containers: 0
I0917 16:08:15.506479      17 plugin_manager.go:116] Starting Kubelet Plugin Manager
I0917 16:08:15.519601      17 kubelet_node_status.go:289] Controller attach/detach is disabled for this node; Kubelet will attach and detach volumes
I0917 16:08:15.527192      17 kubelet_node_status.go:289] Controller attach/detach is disabled for this node; Kubelet will attach and detach volumes
E0917 16:08:15.527453      17 eviction_manager.go:246] eviction manager: failed to get summary stats: failed to get root cgroup stats: failed to get cgroup stats for "/": unexpected number of containers: 0
I0917 16:08:15.528203      17 plugin_manager.go:116] Starting Kubelet Plugin Manager
I0917 16:08:15.536655      17 kubelet_node_status.go:289] Controller attach/detach is disabled for this node; Kubelet will attach and detach volumes
I0917 16:08:15.537287      17 kubelet_node_status.go:289] Controller attach/detach is disabled for this node; Kubelet will attach and detach volumes
E0917 16:08:15.539398      17 eviction_manager.go:246] eviction manager: failed to get summary stats: failed to get root cgroup stats: failed to get cgroup stats for "/": unexpected number of containers: 0
I0917 16:08:15.540686      17 plugin_manager.go:116] Starting Kubelet Plugin Manager
E0917 16:08:16.557759      17 kubelet_node_status.go:388] Error updating node status, will retry: error getting node "127.0.0.1": Get http://127.0.0.1:37285/api/v1/nodes/127.0.0.1?resourceVersion=0&timeout=1s: net/http: request canceled (Client.Timeout exceeded while awaiting headers)
E0917 16:08:17.558811      17 kubelet_node_status.go:388] Error updating node status, will retry: error getting node "127.0.0.1": Get http://127.0.0.1:37285/api/v1/nodes/127.0.0.1?timeout=1s: context deadline exceeded (Client.Timeout exceeded while awaiting headers)
E0917 16:08:18.559869      17 kubelet_node_status.go:388] Error updating node status, will retry: error getting node "127.0.0.1": Get http://127.0.0.1:37285/api/v1/nodes/127.0.0.1?timeout=1s: context deadline exceeded (Client.Timeout exceeded while awaiting headers)
E0917 16:08:19.560824      17 kubelet_node_status.go:388] Error updating node status, will retry: error getting node "127.0.0.1": Get http://127.0.0.1:37285/api/v1/nodes/127.0.0.1?timeout=1s: net/http: request canceled (Client.Timeout exceeded while awaiting headers)
E0917 16:08:20.561782      17 kubelet_node_status.go:388] Error updating node status, will retry: error getting node "127.0.0.1": Get http://127.0.0.1:37285/api/v1/nodes/127.0.0.1?timeout=1s: net/http: request canceled (Client.Timeout exceeded while awaiting headers)
I0917 16:08:20.564846      17 kubelet_node_status.go:289] Controller attach/detach is disabled for this node; Kubelet will attach and detach volumes
I0917 16:08:20.565589      17 kubelet_node_status.go:289] Controller attach/detach is disabled for this node; Kubelet will attach and detach volumes
E0917 16:08:20.565828      17 eviction_manager.go:246] eviction manager: failed to get summary stats: failed to get root cgroup stats: failed to get cgroup stats for "/": unexpected number of containers: 0
I0917 16:08:20.566492      17 plugin_manager.go:116] Starting Kubelet Plugin Manager
I0917 16:08:20.581455      17 setters.go:539] Node became not ready: {Type:Ready Status:False LastHeartbeatTime:2019-09-17 16:07:50.581276875 +0000 UTC m=-24.607568773 LastTransitionTime:2019-09-17 16:07:50.581276875 +0000 UTC m=-24.607568773 Reason:KubeletNotReady Message:container runtime is down}
E0917 16:08:20.588312      17 kubelet.go:2174] Container runtime sanity check failed: injected runtime status error
E0917 16:08:20.595117      17 kubelet.go:2178] Container runtime status is nil
E0917 16:08:20.601375      17 kubelet.go:2187] Container runtime network not ready: <nil>
E0917 16:08:20.601471      17 kubelet.go:2198] Container runtime not ready: <nil>
E0917 16:08:20.608152      17 kubelet.go:2198] Container runtime not ready: RuntimeReady=false reason: message:
E0917 16:08:20.622171      17 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason: message:
I0917 16:08:20.622488      17 setters.go:539] Node became not ready: {Type:Ready Status:False LastHeartbeatTime:2019-09-17 16:08:20.588297692 +0000 UTC m=+5.399452015 LastTransitionTime:2019-09-17 16:08:20.588297692 +0000 UTC m=+5.399452015 Reason:KubeletNotReady Message:runtime network not ready: NetworkReady=false reason: message:}
E0917 16:08:20.631236      17 kubelet_node_status.go:388] Error updating node status, will retry: error getting node "127.0.0.1": nodes "127.0.0.1" not found
E0917 16:08:20.631332      17 kubelet_node_status.go:388] Error updating node status, will retry: error getting node "127.0.0.1": nodes "127.0.0.1" not found
E0917 16:08:20.631398      17 kubelet_node_status.go:388] Error updating node status, will retry: error getting node "127.0.0.1": nodes "127.0.0.1" not found
E0917 16:08:20.631473      17 kubelet_node_status.go:388] Error updating node status, will retry: error getting node "127.0.0.1": nodes "127.0.0.1" not found
E0917 16:08:20.631541      17 kubelet_node_status.go:388] Error updating node status, will retry: error getting node "127.0.0.1": nodes "127.0.0.1" not found
I0917 16:08:20.633651      17 kubelet_node_status.go:289] Controller attach/detach is disabled for this node; Kubelet will attach and detach volumes
I0917 16:08:20.640625      17 plugin_manager.go:116] Starting Kubelet Plugin Manager
I0917 16:08:20.641039      17 kubelet_node_status.go:289] Controller attach/detach is disabled for this node; Kubelet will attach and detach volumes
E0917 16:08:20.641305      17 eviction_manager.go:246] eviction manager: failed to get summary stats: failed to get root cgroup stats: failed to get cgroup stats for "/": unexpected number of containers: 0
I0917 16:08:20.663455      17 kubelet_network.go:77] Setting Pod CIDR:  -> 10.0.0.0/24,2000::/10
I0917 16:08:20.806646      17 kubelet_node_status.go:289] Controller attach/detach is disabled for this node; Kubelet will attach and detach volumes
I0917 16:08:20.807236      17 kubelet_node_status.go:72] Attempting to register node 127.0.0.1
I0917 16:08:20.807403      17 kubelet_node_status.go:114] Node 127.0.0.1 was previously registered
I0917 16:08:20.807453      17 kubelet_node_status.go:75] Successfully registered node 127.0.0.1
I0917 16:08:20.811686      17 kubelet_node_status.go:114] Node 127.0.0.1 was previously registered
I0917 16:08:20.814795      17 kubelet_node_status.go:114] Node 127.0.0.1 was previously registered
I0917 16:08:20.814852      17 kubelet_node_status.go:202] Controller attach-detach setting changed to false; updating existing Node
I0917 16:08:20.819945      17 kubelet_node_status.go:114] Node 127.0.0.1 was previously registered
I0917 16:08:20.820306      17 kubelet_node_status.go:205] Controller attach-detach setting changed to true; updating existing Node
E0917 16:08:20.823754      17 kubelet_node_status.go:94] Unable to register node "127.0.0.1" with API server: 
E0917 16:08:20.824803      17 kubelet_node_status.go:100] Unable to register node "127.0.0.1" with API server: error getting existing node: 
I0917 16:08:20.825985      17 kubelet_node_status.go:114] Node 127.0.0.1 was previously registered
I0917 16:08:20.826029      17 kubelet_node_status.go:202] Controller attach-detach setting changed to false; updating existing Node
E0917 16:08:20.827031      17 kubelet_node_status.go:124] Unable to reconcile node "127.0.0.1" with API server: error updating node: failed to patch status "{\"metadata\":{\"annotations\":null}}" for node "127.0.0.1": 
I0917 16:08:20.832190      17 kubelet_node_status.go:289] Controller attach/detach is disabled for this node; Kubelet will attach and detach volumes
I0917 16:08:20.832836      17 kubelet_node_status.go:289] Controller attach/detach is disabled for this node; Kubelet will attach and detach volumes
E0917 16:08:20.833363      17 eviction_manager.go:246] eviction manager: failed to get summary stats: failed to get root cgroup stats: failed to get cgroup stats for "/": unexpected number of containers: 0
I0917 16:08:20.834399      17 plugin_manager.go:116] Starting Kubelet Plugin Manager
I0917 16:08:20.845168      17 kubelet_node_status.go:139] Zero out resource test.com/resource1 capacity in existing node.
I0917 16:08:20.845603      17 kubelet_node_status.go:139] Zero out resource test.com/resource2 capacity in existing node.
W0917 16:08:20.848537      17 feature_gate.go:208] Setting GA feature gate TaintNodesByCondition=true. It will be removed in a future release.
I0917 16:08:20.949296      17 kubelet_node_status.go:289] Controller attach/detach is disabled for this node; Kubelet will attach and detach volumes
I0917 16:08:20.949759      17 kubelet_node_status.go:72] Attempting to register node 127.0.0.1
I0917 16:08:20.949963      17 kubelet_node_status.go:75] Successfully registered node 127.0.0.1
W0917 16:08:20.950313      17 feature_gate.go:208] Setting GA feature gate TaintNodesByCondition=true. It will be removed in a future release.
I0917 16:08:21.051082      17 kubelet_node_status.go:289] Controller attach/detach is disabled for this node; Kubelet will attach and detach volumes
I0917 16:08:21.051569      17 kubelet_node_status.go:72] Attempting to register node 127.0.0.1
I0917 16:08:21.051804      17 kubelet_node_status.go:75] Successfully registered node 127.0.0.1
W0917 16:08:21.052303      17 feature_gate.go:208] Setting GA feature gate TaintNodesByCondition=true. It will be removed in a future release.
--- FAIL: TestRegisterWithApiServerWithTaint (0.21s)
    feature_gate.go:36: error setting TaintNodesByCondition=false: cannot set feature gate TaintNodesByCondition to false, feature is locked to true
E0917 16:08:21.111648      17 kubelet_pods.go:147] Mount cannot be satisfied for container "", because the volume is missing or the volume mounter is nil: {Name:disk ReadOnly:true MountPath:/mnt/path3 SubPath: MountPropagation:<nil> SubPathExpr:}
E0917 16:08:21.112121      17 kubelet_pods.go:147] Mount cannot be satisfied for container "", because the volume is missing or the volume mounter is nil: {Name:disk ReadOnly:true MountPath:/mnt/path3 SubPath: MountPropagation:<nil> SubPathExpr:}
E0917 16:08:21.113586      17 kubelet_pods.go:108] Block volume cannot be satisfied for container "", because the volume is missing or the volume mapper is nil: {Name:disk DevicePath:/dev/sdaa}
E0917 16:08:21.113919      17 kubelet_pods.go:108] Block volume cannot be satisfied for container "", because the volume is missing or the volume mapper is nil: {Name:disk DevicePath:/dev/sdzz}
W0917 16:08:21.115577      17 feature_gate.go:208] Setting GA feature gate VolumeSubpath=false. It will be removed in a future release.
W0917 16:08:21.115892      17 feature_gate.go:208] Setting GA feature gate VolumeSubpath=true. It will be removed in a future release.
W0917 16:08:21.191950      17 kubelet_pods.go:1772] unable to retrieve pvc : - foo
W0917 16:08:21.193019      17 kubelet_pods.go:1778] unable to retrieve pv foo - foo
E0917 16:08:21.196436      17 kubelet_pods.go:388] hostname for pod:"test-pod" was longer than 63. Truncated hostname to :"1234567.1234567.1234567.1234567.1234567.1234567.1234567.1234567"
E0917 16:08:21.196606      17 kubelet_pods.go:388] hostname for pod:"test-pod" was longer than 63. Truncated hostname to :"1234567.1234567.1234567.1234567.1234567.1234567.1234567.123456."
E0917 16:08:21.196798      17 kubelet_pods.go:388] hostname for pod:"test-pod" was longer than 63. Truncated hostname to :"1234567.1234567.1234567.1234567.1234567.1234567.1234567.123456-"
I0917 16:08:21.198235      17 kubelet_resources.go:45] allocatable: map[cpu:{{6 0} {<nil>} 6 DecimalSI} memory:{{4294967296 0} {<nil>} 4Gi BinarySI}]
I0917 16:08:21.198722      17 kubelet_resources.go:45] allocatable: map[cpu:{{6 0} {<nil>} 6 DecimalSI} memory:{{4294967296 0} {<nil>} 4Gi BinarySI}]
I0917 16:08:21.199131      17 kubelet_resources.go:45] allocatable: map[cpu:{{6 0} {<nil>} 6 DecimalSI} memory:{{4294967296 0} {<nil>} 4Gi BinarySI}]
I0917 16:08:21.199539      17 kubelet_resources.go:45] allocatable: map[cpu:{{6 0} {<nil>} 6 DecimalSI} memory:{{4294967296 0} {<nil>} 4Gi BinarySI}]
E0917 16:08:21.201420      17 kubelet.go:1895] Update channel is closed. Exiting the sync loop.
I0917 16:08:21.201471      17 kubelet.go:1822] Starting kubelet main sync loop.
E0917 16:08:21.201562      17 kubelet.go:1895] Update channel is closed. Exiting the sync loop.
W0917 16:08:21.224078      17 predicate.go:74] Failed to admit pod failedpod_foo(4) - Update plugin resources failed due to Allocation failed, which is unexpected.
E0917 16:08:21.228246      17 runtime.go:195] invalid container ID: ""
E0917 16:08:21.228355      17 runtime.go:195] invalid container ID: ""
I0917 16:08:21.234384      17 kubelet.go:1647] Trying to delete pod foo_ns 11111111
W0917 16:08:21.234479      17 kubelet.go:1651] Deleted mirror pod "foo_ns(11111111)" because it is outdated
E0917 16:08:21.280791      17 kubelet_volumes.go:154] orphaned pod "pod1uid" found, but volume paths are still present on disk : There were a total of 1 errors similar to this. Turn up verbosity to see them.
W0917 16:08:21.285048      17 kubelet_getters.go:292] Path "/tmp/kubelet_test.797379752/pods/pod1uid/volumes" does not exist
W0917 16:08:21.285205      17 kubelet_getters.go:292] Path "/tmp/kubelet_test.797379752/pods/pod1uid/volumes" does not exist
E0917 16:08:21.285398      17 kubelet_volumes.go:154] orphaned pod "pod1uid" found, but volume subpaths are still present on disk : There were a total of 1 errors similar to this. Turn up verbosity to see them.
W0917 16:08:21.288021      17 kubelet_getters.go:292] Path "/tmp/kubelet_test.521521639/pods/pod1uid/volumes" does not exist
W0917 16:08:21.288141      17 kubelet_getters.go:292] Path "/tmp/kubelet_test.521521639/pods/pod1uid/volumes" does not exist
E0917 16:08:21.288329      17 kubelet_volumes.go:154] orphaned pod "pod1uid" found, but volume subpaths are still present on disk : There were a total of 1 errors similar to this. Turn up verbosity to see them.
W0917 16:08:21.293449      17 kubelet_getters.go:292] Path "/tmp/kubelet_test.060995164/pods/pod1uid/volumes" does not exist
W0917 16:08:21.293545      17 kubelet_getters.go:292] Path "/tmp/kubelet_test.060995164/pods/pod1uid/volumes" does not exist
I0917 16:08:21.298344      17 volume_manager.go:249] Starting Kubelet Volume Manager
I0917 16:08:21.298495      17 desired_state_of_world_populator.go:131] Desired state populator starts to run
E0917 16:08:21.302194      17 reflector.go:275] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIDriver: unhandled watch: testing.WatchActionImpl{ActionImpl:testing.ActionImpl{Namespace:"", Verb:"watch", Resource:schema.GroupVersionResource{Group:"storage.k8s.io", Version:"v1beta1", Resource:"csidrivers"}, Subresource:""}, WatchRestrictions:testing.WatchRestrictions{Labels:labels.internalSelector(nil), Fields:fields.andTerm{}, ResourceVersion:""}}
I0917 16:08:21.500446      17 reconciler.go:227] operationExecutor.AttachVolume started for volume "vol1" (UniqueName: "fake/fake-device1") pod "foo" (UID: "12345678") 
I0917 16:08:21.501027      17 operation_generator.go:390] AttachVolume.Attach succeeded for volume "vol1" (UniqueName: "fake/fake-device1") from node "127.0.0.1" 
I0917 16:08:21.501417      17 reconciler.go:227] operationExecutor.AttachVolume started for volume "vol2" (UniqueName: "fake/fake-device2") pod "foo" (UID: "12345678") 
I0917 16:08:21.501516      17 reconciler.go:154] Reconciler: start to sync state
I0917 16:08:21.501855      17 operation_generator.go:390] AttachVolume.Attach succeeded for volume "vol2" (UniqueName: "fake/fake-device2") from node "127.0.0.1" 
I0917 16:08:21.602777      17 operation_generator.go:661] MountVolume.WaitForAttach entering for volume "vol1" (UniqueName: "fake/fake-device1") pod "foo" (UID: "12345678") DevicePath "/dev/vdb-test"
I0917 16:08:21.602835      17 operation_generator.go:661] MountVolume.WaitForAttach entering for volume "vol2" (UniqueName: "fake/fake-device2") pod "foo" (UID: "12345678") DevicePath "/dev/vdb-test"
I0917 16:08:21.603506      17 operation_generator.go:670] MountVolume.WaitForAttach succeeded for volume "vol1" (UniqueName: "fake/fake-device1") pod "foo" (UID: "12345678") DevicePath "/dev/sdb"
I0917 16:08:21.603748      17 operation_generator.go:670] MountVolume.WaitForAttach succeeded for volume "vol2" (UniqueName: "fake/fake-device2") pod "foo" (UID: "12345678") DevicePath "/dev/sdb"
I0917 16:08:21.604022      17 operation_generator.go:697] MountVolume.MountDevice succeeded for volume "vol2" (UniqueName: "fake/fake-device2") pod "foo" (UID: "12345678") device mount path ""
I0917 16:08:21.604808      17 operation_generator.go:697] MountVolume.MountDevice succeeded for volume "vol1" (UniqueName: "fake/fake-device1") pod "foo" (UID: "12345678") device mount path ""
I0917 16:08:21.899116      17 volume_manager.go:260] Shutting down Kubelet Volume Manager
I0917 16:08:21.902649      17 volume_manager.go:249] Starting Kubelet Volume Manager
I0917 16:08:21.902724      17 desired_state_of_world_populator.go:131] Desired state populator starts to run
E0917 16:08:21.905634      17 reflector.go:275] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIDriver: unhandled watch: testing.WatchActionImpl{ActionImpl:testing.ActionImpl{Namespace:"", Verb:"watch", Resource:schema.GroupVersionResource{Group:"storage.k8s.io", Version:"v1beta1", Resource:"csidrivers"}, Subresource:""}, WatchRestrictions:testing.WatchRestrictions{Labels:labels.internalSelector(nil), Fields:fields.andTerm{}, ResourceVersion:""}}
I0917 16:08:22.006394      17 reconciler.go:227] operationExecutor.AttachVolume started for volume "vol2" (UniqueName: "fake/fake-device2") pod "pod2" (UID: "pod2uid") 
I0917 16:08:22.006559      17 operation_generator.go:390] AttachVolume.Attach succeeded for volume "vol2" (UniqueName: "fake/fake-device2") from node "127.0.0.1" 
I0917 16:08:22.007498      17 reconciler.go:227] operationExecutor.AttachVolume started for volume "vol3" (UniqueName: "fake/fake-device3") pod "pod3" (UID: "pod3uid") 
I0917 16:08:22.007711      17 operation_generator.go:390] AttachVolume.Attach succeeded for volume "vol3" (UniqueName: "fake/fake-device3") from node "127.0.0.1" 
I0917 16:08:22.008197      17 reconciler.go:227] operationExecutor.AttachVolume started for volume "vol1" (UniqueName: "fake/fake-device1") pod "pod1" (UID: "pod1uid") 
I0917 16:08:22.008453      17 reconciler.go:154] Reconciler: start to sync state
I0917 16:08:22.008466      17 operation_generator.go:390] AttachVolume.Attach succeeded for volume "vol1" (UniqueName: "fake/fake-device1") from node "127.0.0.1" 
I0917 16:08:22.109924      17 operation_generator.go:661] MountVolume.WaitForAttach entering for volume "vol2" (UniqueName: "fake/fake-device2") pod "pod2" (UID: "pod2uid") DevicePath "/dev/vdb-test"
I0917 16:08:22.110573      17 operation_generator.go:670] MountVolume.WaitForAttach succeeded for volume "vol2" (UniqueName: "fake/fake-device2") pod "pod2" (UID: "pod2uid") DevicePath "/dev/sdb"
I0917 16:08:22.110868      17 operation_generator.go:661] MountVolume.WaitForAttach entering for volume "vol3" (UniqueName: "fake/fake-device3") pod "pod3" (UID: "pod3uid") DevicePath "/dev/vdb-test"
I0917 16:08:22.111260      17 operation_generator.go:670] MountVolume.WaitForAttach succeeded for volume "vol3" (UniqueName: "fake/fake-device3") pod "pod3" (UID: "pod3uid") DevicePath "/dev/sdb"
I0917 16:08:22.111482      17 operation_generator.go:697] MountVolume.MountDevice succeeded for volume "vol3" (UniqueName: "fake/fake-device3") pod "pod3" (UID: "pod3uid") device mount path ""
I0917 16:08:22.111511      17 operation_generator.go:661] MountVolume.WaitForAttach entering for volume "vol1" (UniqueName: "fake/fake-device1") pod "pod1" (UID: "pod1uid") DevicePath "/dev/vdb-test"
I0917 16:08:22.112169      17 operation_generator.go:670] MountVolume.WaitForAttach succeeded for volume "vol1" (UniqueName: "fake/fake-device1") pod "pod1" (UID: "pod1uid") DevicePath "/dev/sdb"
I0917 16:08:22.112537      17 operation_generator.go:697] MountVolume.MountDevice succeeded for volume "vol1" (UniqueName: "fake/fake-device1") pod "pod1" (UID: "pod1uid") device mount path ""
I0917 16:08:22.110930      17 operation_generator.go:697] MountVolume.MountDevice succeeded for volume "vol2" (UniqueName: "fake/fake-device2") pod "pod2" (UID: "pod2uid") device mount path ""
I0917 16:08:22.203694      17 volume_manager.go:260] Shutting down Kubelet Volume Manager
I0917 16:08:22.207238      17 volume_manager.go:249] Starting Kubelet Volume Manager
I0917 16:08:22.207498      17 desired_state_of_world_populator.go:131] Desired state populator starts to run
E0917 16:08:22.210400      17 reflector.go:275] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIDriver: unhandled watch: testing.WatchActionImpl{ActionImpl:testing.ActionImpl{Namespace:"", Verb:"watch", Resource:schema.GroupVersionResource{Group:"storage.k8s.io", Version:"v1beta1", Resource:"csidrivers"}, Subresource:""}, WatchRestrictions:testing.WatchRestrictions{Labels:labels.internalSelector(nil), Fields:fields.andTerm{}, ResourceVersion:""}}
I0917 16:08:22.408919      17 reconciler.go:227] operationExecutor.AttachVolume started for volume "vol1" (UniqueName: "fake/fake-device") pod "foo" (UID: "12345678") 
I0917 16:08:22.409030      17 reconciler.go:154] Reconciler: start to sync state
I0917 16:08:22.409261      17 operation_generator.go:390] AttachVolume.Attach succeeded for volume "vol1" (UniqueName: "fake/fake-device") from node "127.0.0.1" 
I0917 16:08:22.510446      17 operation_generator.go:661] MountVolume.WaitForAttach entering for volume "vol1" (UniqueName: "fake/fake-device") pod "foo" (UID: "12345678") DevicePath "/dev/vdb-test"
I0917 16:08:22.510674      17 operation_generator.go:670] MountVolume.WaitForAttach succeeded for volume "vol1" (UniqueName: "fake/fake-device") pod "foo" (UID: "12345678") DevicePath "/dev/sdb"
I0917 16:08:22.510773      17 operation_generator.go:697] MountVolume.MountDevice succeeded for volume "vol1" (UniqueName: "fake/fake-device") pod "foo" (UID: "12345678") device mount path ""
I0917 16:08:22.808474      17 volume_manager.go:260] Shutting down Kubelet Volume Manager
I0917 16:08:22.811375      17 volume_manager.go:249] Starting Kubelet Volume Manager
I0917 16:08:22.811567      17 desired_state_of_world_populator.go:131] Desired state populator starts to run
E0917 16:08:22.814518      17 reflector.go:275] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIDriver: unhandled watch: testing.WatchActionImpl{ActionImpl:testing.ActionImpl{Namespace:"", Verb:"watch", Resource:schema.GroupVersionResource{Group:"storage.k8s.io", Version:"v1beta1", Resource:"csidrivers"}, Subresource:""}, WatchRestrictions:testing.WatchRestrictions{Labels:labels.internalSelector(nil), Fields:fields.andTerm{}, ResourceVersion:""}}
I0917 16:08:23.013207      17 reconciler.go:227] operationExecutor.AttachVolume started for volume "vol1" (UniqueName: "fake/fake-device") pod "foo" (UID: "12345678") 
I0917 16:08:23.013304      17 reconciler.go:154] Reconciler: start to sync state
I0917 16:08:23.013836      17 operation_generator.go:390] AttachVolume.Attach succeeded for volume "vol1" (UniqueName: "fake/fake-device") from node "127.0.0.1" 
I0917 16:08:23.114667      17 operation_generator.go:661] MountVolume.WaitForAttach entering for volume "vol1" (UniqueName: "fake/fake-device") pod "foo" (UID: "12345678") DevicePath "/dev/vdb-test"
I0917 16:08:23.114908      17 operation_generator.go:670] MountVolume.WaitForAttach succeeded for volume "vol1" (UniqueName: "fake/fake-device") pod "foo" (UID: "12345678") DevicePath "/dev/sdb"
I0917 16:08:23.115069      17 operation_generator.go:697] MountVolume.MountDevice succeeded for volume "vol1" (UniqueName: "fake/fake-device") pod "foo" (UID: "12345678") device mount path ""
I0917 16:08:23.415550      17 reconciler.go:181] operationExecutor.UnmountVolume started for volume "vol1" (UniqueName: "fake/fake-device") pod "12345678" (UID: "12345678") 
I0917 16:08:23.415916      17 operation_generator.go:831] UnmountVolume.TearDown succeeded for volume "fake/fake-device" (OuterVolumeSpecName: "vol1") pod "12345678" (UID: "12345678"). InnerVolumeSpecName "vol1". PluginName "fake", VolumeGidValue ""
I0917 16:08:23.516827      17 reconciler.go:294] operationExecutor.UnmountDevice started for volume "vol1" (UniqueName: "fake/fake-device") on node "127.0.0.1" 
I0917 16:08:23.516919      17 operation_generator.go:931] UnmountDevice succeeded for volume "vol1" %!(EXTRA string=UnmountDevice succeeded for volume "vol1" (UniqueName: "fake/fake-device") on node "127.0.0.1" )
I0917 16:08:23.617572      17 reconciler.go:315] operationExecutor.DetachVolume started for volume "vol1" (UniqueName: "fake/fake-device") on node "127.0.0.1" 
I0917 16:08:23.617793      17 operation_generator.go:558] DetachVolume.Detach succeeded for volume "vol1" (UniqueName: "fake/fake-device") on node "127.0.0.1" 
I0917 16:08:23.663258      17 volume_manager.go:260] Shutting down Kubelet Volume Manager
I0917 16:08:23.666347      17 volume_manager.go:249] Starting Kubelet Volume Manager
I0917 16:08:23.666793      17 desired_state_of_world_populator.go:131] Desired state populator starts to run
E0917 16:08:23.669535      17 reflector.go:121] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.CSIDriver: no reaction implemented for {{ list storage.k8s.io/v1beta1, Resource=csidrivers } storage.k8s.io/v1beta1, Kind=CSIDriver  { }}
I0917 16:08:23.868449      17 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "vol1" (UniqueName: "fake/fake-device") pod "foo" (UID: "12345678") 
I0917 16:08:23.868747      17 reconciler.go:154] Reconciler: start to sync state
I0917 16:08:23.868879      17 operation_generator.go:1422] Controller attach succeeded for volume "vol1" (UniqueName: "fake/fake-device") pod "foo" (UID: "12345678") device path: "fake/path"
I0917 16:08:23.970307      17 operation_generator.go:661] MountVolume.WaitForAttach entering for volume "vol1" (UniqueName: "fake/fake-device") pod "foo" (UID: "12345678") DevicePath "fake/path"
I0917 16:08:23.970732      17 operation_generator.go:670] MountVolume.WaitForAttach succeeded for volume "vol1" (UniqueName: "fake/fake-device") pod "foo" (UID: "12345678") DevicePath "/dev/sdb"
I0917 16:08:23.970844      17 operation_generator.go:697] MountVolume.MountDevice succeeded for volume "vol1" (UniqueName: "fake/fake-device") pod "foo" (UID: "12345678") device mount path ""
I0917 16:08:24.267351      17 volume_manager.go:260] Shutting down Kubelet Volume Manager
I0917 16:08:24.279372      17 volume_manager.go:249] Starting Kubelet Volume Manager
I0917 16:08:24.279659      17 desired_state_of_world_populator.go:131] Desired state populator starts to run
E0917 16:08:24.282192      17 reflector.go:121] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.CSIDriver: no reaction implemented for {{ list storage.k8s.io/v1beta1, Resource=csidrivers } storage.k8s.io/v1beta1, Kind=CSIDriver  { }}
I0917 16:08:24.480745      17 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "vol1" (UniqueName: "fake/fake-device") pod "foo" (UID: "12345678") 
I0917 16:08:24.481093      17 operation_generator.go:1422] Controller attach succeeded for volume "vol1" (UniqueName: "fake/fake-device") pod "foo" (UID: "12345678") device path: "fake/path"
I0917 16:08:24.481441      17 reconciler.go:154] Reconciler: start to sync state
I0917 16:08:24.582588      17 operation_generator.go:661] MountVolume.WaitForAttach entering for volume "vol1" (UniqueName: "fake/fake-device") pod "foo" (UID: "12345678") DevicePath "fake/path"
I0917 16:08:24.582731      17 operation_generator.go:670] MountVolume.WaitForAttach succeeded for volume "vol1" (UniqueName: "fake/fake-device") pod "foo" (UID: "12345678") DevicePath "/dev/sdb"
I0917 16:08:24.582813      17 operation_generator.go:697] MountVolume.MountDevice succeeded for volume "vol1" (UniqueName: "fake/fake-device") pod "foo" (UID: "12345678") device mount path ""
I0917 16:08:24.883316      17 reconciler.go:181] operationExecutor.UnmountVolume started for volume "vol1" (UniqueName: "fake/fake-device") pod "12345678" (UID: "12345678") 
I0917 16:08:24.883882      17 operation_generator.go:831] UnmountVolume.TearDown succeeded for volume "fake/fake-device" (OuterVolumeSpecName: "vol1") pod "12345678" (UID: "12345678"). InnerVolumeSpecName "vol1". PluginName "fake", VolumeGidValue ""
I0917 16:08:24.983914      17 reconciler.go:294] operationExecutor.UnmountDevice started for volume "vol1" (UniqueName: "fake/fake-device") on node "127.0.0.1" 
I0917 16:08:24.984081      17 operation_generator.go:931] UnmountDevice succeeded for volume "vol1" %!(EXTRA string=UnmountDevice succeeded for volume "vol1" (UniqueName: "fake/fake-device") on node "127.0.0.1" )
I0917 16:08:25.084767      17 reconciler.go:301] Volume detached for volume "vol1" (UniqueName: "fake/fake-device") on node "127.0.0.1" DevicePath "/dev/sdb"
I0917 16:08:25.131687      17 volume_manager.go:260] Shutting down Kubelet Volume Manager
W0917 16:08:25.133660      17 pod_container_deletor.go:75] Container "abc" not found in pod's containers
I0917 16:08:25.307109      17 runonce.go:88] Waiting for 1 pods
I0917 16:08:25.307229      17 runonce.go:123] pod "foo_new(12345678)" containers running
I0917 16:08:25.307690      17 runonce.go:102] started pod "foo_new(12345678)"
I0917 16:08:25.307786      17 runonce.go:108] 1 pods started
FAIL

				from junit_bazel.xml
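The only failure is the FAIL above: TestRegisterWithApiServerWithTaint tries to set TaintNodesByCondition back to false, but this PR graduates that gate to GA, which locks it to true, so the test's feature-gate helper reports "feature is locked to true". A minimal, self-contained Go sketch of that locked-gate check (illustrative only; not the real k8s.io/component-base/featuregate code):

package main

import "fmt"

// featureSpec models a gate's default value and whether it is locked
// to that default (as happens when a feature graduates to GA).
type featureSpec struct {
	defaultValue  bool
	lockToDefault bool
}

// setGate refuses to move a locked gate away from its default, which is
// exactly the error the failing test hits.
func setGate(gates map[string]featureSpec, name string, value bool) error {
	spec, ok := gates[name]
	if !ok {
		return fmt.Errorf("unrecognized feature gate: %s", name)
	}
	if spec.lockToDefault && value != spec.defaultValue {
		return fmt.Errorf("cannot set feature gate %s to %v, feature is locked to %v",
			name, value, spec.defaultValue)
	}
	return nil
}

func main() {
	gates := map[string]featureSpec{
		// After this PR, TaintNodesByCondition is GA: default true, locked.
		"TaintNodesByCondition": {defaultValue: true, lockToDefault: true},
	}
	if err := setGate(gates, "TaintNodesByCondition", false); err != nil {
		// Prints: cannot set feature gate TaintNodesByCondition to false, feature is locked to true
		fmt.Println(err)
	}
}

Given that behavior, the test needs updating to stop forcing the gate to false once it is GA-locked, rather than any change to the gate itself.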




Error lines from build-log.txt

... skipping 42 lines ...
[8,750 / 11,097] 334 / 868 tests; GoLink pkg/api/persistentvolumeclaim/linux_amd64_race_stripped/go_default_test; 19s remote ... (148 actions running)
[9,358 / 11,233] 470 / 868 tests; Listing all stable metrics. //test/instrumentation:list_stable_metrics; 41s remote ... (187 actions running)
[10,375 / 11,247] 474 / 868 tests; Listing all stable metrics. //test/instrumentation:list_stable_metrics; 69s remote ... (309 actions running)
[10,898 / 11,310] 522 / 868 tests; GoLink pkg/apis/scheduling/validation/linux_amd64_race_stripped/go_default_test; 63s remote ... (345 actions, 344 running)
[11,168 / 11,427] 639 / 868 tests; GoLink pkg/apis/scheduling/validation/linux_amd64_race_stripped/go_default_test; 100s remote ... (241 actions, 240 running)
[11,466 / 11,552] 782 / 868 tests; GoLink staging/src/k8s.io/apiextensions-apiserver/pkg/controller/openapi/builder/linux_amd64_race_stripped/go_default_test; 141s remote ... (86 actions running)
FAIL: //pkg/kubelet:go_default_test (see /bazel-scratch/.cache/bazel/_bazel_root/7989b31489f31aee54f32688da2f0120/execroot/io_k8s_kubernetes/bazel-out/k8-fastbuild/testlogs/pkg/kubelet/go_default_test/test_attempts/attempt_1.log)
[11,625 / 11,629] 864 / 868 tests; Testing //cmd/kubeadm/app/phases/upgrade:go_default_test; 112s remote ... (4 actions running)
FAIL: //pkg/kubelet:go_default_test (see /bazel-scratch/.cache/bazel/_bazel_root/7989b31489f31aee54f32688da2f0120/execroot/io_k8s_kubernetes/bazel-out/k8-fastbuild/testlogs/pkg/kubelet/go_default_test/test_attempts/attempt_2.log)

FAILED: //pkg/kubelet:go_default_test (Summary)
      /bazel-scratch/.cache/bazel/_bazel_root/7989b31489f31aee54f32688da2f0120/execroot/io_k8s_kubernetes/bazel-out/k8-fastbuild/testlogs/pkg/kubelet/go_default_test/test.log
      /bazel-scratch/.cache/bazel/_bazel_root/7989b31489f31aee54f32688da2f0120/execroot/io_k8s_kubernetes/bazel-out/k8-fastbuild/testlogs/pkg/kubelet/go_default_test/test.log
      /bazel-scratch/.cache/bazel/_bazel_root/7989b31489f31aee54f32688da2f0120/execroot/io_k8s_kubernetes/bazel-out/k8-fastbuild/testlogs/pkg/kubelet/go_default_test/test_attempts/attempt_1.log
      /bazel-scratch/.cache/bazel/_bazel_root/7989b31489f31aee54f32688da2f0120/execroot/io_k8s_kubernetes/bazel-out/k8-fastbuild/testlogs/pkg/kubelet/go_default_test/test.log
      /bazel-scratch/.cache/bazel/_bazel_root/7989b31489f31aee54f32688da2f0120/execroot/io_k8s_kubernetes/bazel-out/k8-fastbuild/testlogs/pkg/kubelet/go_default_test/test_attempts/attempt_2.log
FAIL: //pkg/kubelet:go_default_test (see /bazel-scratch/.cache/bazel/_bazel_root/7989b31489f31aee54f32688da2f0120/execroot/io_k8s_kubernetes/bazel-out/k8-fastbuild/testlogs/pkg/kubelet/go_default_test/test.log)
INFO: From Testing //pkg/kubelet:go_default_test:
==================== Test output for //pkg/kubelet:go_default_test:
I0917 16:07:40.248777      17 kubelet_node_status.go:289] Controller attach/detach is disabled for this node; Kubelet will attach and detach volumes
I0917 16:07:40.249652      17 kubelet_node_status.go:289] Controller attach/detach is disabled for this node; Kubelet will attach and detach volumes
E0917 16:07:40.250248      17 eviction_manager.go:246] eviction manager: failed to get summary stats: failed to get root cgroup stats: failed to get cgroup stats for "/": unexpected number of containers: 0
I0917 16:07:40.251445      17 plugin_manager.go:116] Starting Kubelet Plugin Manager
I0917 16:07:40.263899      17 kubelet_node_status.go:289] Controller attach/detach is disabled for this node; Kubelet will attach and detach volumes
I0917 16:07:40.266168      17 kubelet_node_status.go:289] Controller attach/detach is disabled for this node; Kubelet will attach and detach volumes
E0917 16:07:40.266786      17 eviction_manager.go:246] eviction manager: failed to get summary stats: failed to get root cgroup stats: failed to get cgroup stats for "/": unexpected number of containers: 0
I0917 16:07:40.268701      17 plugin_manager.go:116] Starting Kubelet Plugin Manager
I0917 16:07:40.283834      17 kubelet_node_status.go:289] Controller attach/detach is disabled for this node; Kubelet will attach and detach volumes
I0917 16:07:40.291688      17 plugin_manager.go:116] Starting Kubelet Plugin Manager
I0917 16:07:40.292011      17 kubelet_node_status.go:289] Controller attach/detach is disabled for this node; Kubelet will attach and detach volumes
E0917 16:07:40.292247      17 eviction_manager.go:246] eviction manager: failed to get summary stats: failed to get root cgroup stats: failed to get cgroup stats for "/": unexpected number of containers: 0
E0917 16:07:41.303150      17 kubelet_node_status.go:388] Error updating node status, will retry: error getting node "127.0.0.1": Get http://127.0.0.1:32789/api/v1/nodes/127.0.0.1?resourceVersion=0&timeout=1s: net/http: request canceled (Client.Timeout exceeded while awaiting headers)
E0917 16:07:42.304231      17 kubelet_node_status.go:388] Error updating node status, will retry: error getting node "127.0.0.1": Get http://127.0.0.1:32789/api/v1/nodes/127.0.0.1?timeout=1s: net/http: request canceled (Client.Timeout exceeded while awaiting headers)
E0917 16:07:43.305276      17 kubelet_node_status.go:388] Error updating node status, will retry: error getting node "127.0.0.1": Get http://127.0.0.1:32789/api/v1/nodes/127.0.0.1?timeout=1s: context deadline exceeded (Client.Timeout exceeded while awaiting headers)
E0917 16:07:44.306282      17 kubelet_node_status.go:388] Error updating node status, will retry: error getting node "127.0.0.1": Get http://127.0.0.1:32789/api/v1/nodes/127.0.0.1?timeout=1s: net/http: request canceled (Client.Timeout exceeded while awaiting headers)
E0917 16:07:45.307370      17 kubelet_node_status.go:388] Error updating node status, will retry: error getting node "127.0.0.1": Get http://127.0.0.1:32789/api/v1/nodes/127.0.0.1?timeout=1s: context deadline exceeded (Client.Timeout exceeded while awaiting headers)
I0917 16:07:45.310961      17 kubelet_node_status.go:289] Controller attach/detach is disabled for this node; Kubelet will attach and detach volumes
I0917 16:07:45.311512      17 kubelet_node_status.go:289] Controller attach/detach is disabled for this node; Kubelet will attach and detach volumes
E0917 16:07:45.311840      17 eviction_manager.go:246] eviction manager: failed to get summary stats: failed to get root cgroup stats: failed to get cgroup stats for "/": unexpected number of containers: 0
I0917 16:07:45.312532      17 plugin_manager.go:116] Starting Kubelet Plugin Manager
I0917 16:07:45.328598      17 setters.go:539] Node became not ready: {Type:Ready Status:False LastHeartbeatTime:2019-09-17 16:07:15.328407204 +0000 UTC m=-24.567609728 LastTransitionTime:2019-09-17 16:07:15.328407204 +0000 UTC m=-24.567609728 Reason:KubeletNotReady Message:container runtime is down}
E0917 16:07:45.335810      17 kubelet.go:2174] Container runtime sanity check failed: injected runtime status error
E0917 16:07:45.342964      17 kubelet.go:2178] Container runtime status is nil
E0917 16:07:45.349768      17 kubelet.go:2187] Container runtime network not ready: <nil>
E0917 16:07:45.349882      17 kubelet.go:2198] Container runtime not ready: <nil>
E0917 16:07:45.357154      17 kubelet.go:2198] Container runtime not ready: RuntimeReady=false reason: message:
E0917 16:07:45.371466      17 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason: message:
I0917 16:07:45.371865      17 setters.go:539] Node became not ready: {Type:Ready Status:False LastHeartbeatTime:2019-09-17 16:07:45.335798219 +0000 UTC m=+5.439781278 LastTransitionTime:2019-09-17 16:07:45.335798219 +0000 UTC m=+5.439781278 Reason:KubeletNotReady Message:runtime network not ready: NetworkReady=false reason: message:}
E0917 16:07:45.385689      17 kubelet_node_status.go:388] Error updating node status, will retry: error getting node "127.0.0.1": nodes "127.0.0.1" not found
E0917 16:07:45.385804      17 kubelet_node_status.go:388] Error updating node status, will retry: error getting node "127.0.0.1": nodes "127.0.0.1" not found
E0917 16:07:45.385881      17 kubelet_node_status.go:388] Error updating node status, will retry: error getting node "127.0.0.1": nodes "127.0.0.1" not found
E0917 16:07:45.385947      17 kubelet_node_status.go:388] Error updating node status, will retry: error getting node "127.0.0.1": nodes "127.0.0.1" not found
E0917 16:07:45.386023      17 kubelet_node_status.go:388] Error updating node status, will retry: error getting node "127.0.0.1": nodes "127.0.0.1" not found
I0917 16:07:45.388095      17 kubelet_node_status.go:289] Controller attach/detach is disabled for this node; Kubelet will attach and detach volumes
I0917 16:07:45.394401      17 kubelet_node_status.go:289] Controller attach/detach is disabled for this node; Kubelet will attach and detach volumes
E0917 16:07:45.394650      17 eviction_manager.go:246] eviction manager: failed to get summary stats: failed to get root cgroup stats: failed to get cgroup stats for "/": unexpected number of containers: 0
I0917 16:07:45.395375      17 plugin_manager.go:116] Starting Kubelet Plugin Manager
I0917 16:07:45.422603      17 kubelet_network.go:77] Setting Pod CIDR:  -> 10.0.0.0/24,2000::/10
I0917 16:07:45.566340      17 kubelet_node_status.go:289] Controller attach/detach is disabled for this node; Kubelet will attach and detach volumes
I0917 16:07:45.566986      17 kubelet_node_status.go:72] Attempting to register node 127.0.0.1
I0917 16:07:45.567360      17 kubelet_node_status.go:114] Node 127.0.0.1 was previously registered
I0917 16:07:45.567445      17 kubelet_node_status.go:75] Successfully registered node 127.0.0.1
I0917 16:07:45.571024      17 kubelet_node_status.go:114] Node 127.0.0.1 was previously registered
I0917 16:07:45.572386      17 kubelet_node_status.go:114] Node 127.0.0.1 was previously registered
I0917 16:07:45.572445      17 kubelet_node_status.go:202] Controller attach-detach setting changed to false; updating existing Node
I0917 16:07:45.575655      17 kubelet_node_status.go:114] Node 127.0.0.1 was previously registered
I0917 16:07:45.575701      17 kubelet_node_status.go:205] Controller attach-detach setting changed to true; updating existing Node
E0917 16:07:45.578602      17 kubelet_node_status.go:94] Unable to register node "127.0.0.1" with API server: 
E0917 16:07:45.579851      17 kubelet_node_status.go:100] Unable to register node "127.0.0.1" with API server: error getting existing node: 
I0917 16:07:45.580979      17 kubelet_node_status.go:114] Node 127.0.0.1 was previously registered
I0917 16:07:45.581027      17 kubelet_node_status.go:202] Controller attach-detach setting changed to false; updating existing Node
E0917 16:07:45.581854      17 kubelet_node_status.go:124] Unable to reconcile node "127.0.0.1" with API server: error updating node: failed to patch status "{\"metadata\":{\"annotations\":null}}" for node "127.0.0.1": 
I0917 16:07:45.587332      17 kubelet_node_status.go:289] Controller attach/detach is disabled for this node; Kubelet will attach and detach volumes
I0917 16:07:45.587949      17 kubelet_node_status.go:289] Controller attach/detach is disabled for this node; Kubelet will attach and detach volumes
E0917 16:07:45.588619      17 eviction_manager.go:246] eviction manager: failed to get summary stats: failed to get root cgroup stats: failed to get cgroup stats for "/": unexpected number of containers: 0
I0917 16:07:45.589695      17 plugin_manager.go:116] Starting Kubelet Plugin Manager
I0917 16:07:45.607493      17 kubelet_node_status.go:139] Zero out resource test.com/resource1 capacity in existing node.
I0917 16:07:45.608106      17 kubelet_node_status.go:139] Zero out resource test.com/resource2 capacity in existing node.
W0917 16:07:45.610928      17 feature_gate.go:208] Setting GA feature gate TaintNodesByCondition=true. It will be removed in a future release.
I0917 16:07:45.711534      17 kubelet_node_status.go:289] Controller attach/detach is disabled for this node; Kubelet will attach and detach volumes
I0917 16:07:45.712251      17 kubelet_node_status.go:72] Attempting to register node 127.0.0.1
I0917 16:07:45.712495      17 kubelet_node_status.go:75] Successfully registered node 127.0.0.1
W0917 16:07:45.712996      17 feature_gate.go:208] Setting GA feature gate TaintNodesByCondition=true. It will be removed in a future release.
I0917 16:07:45.813855      17 kubelet_node_status.go:289] Controller attach/detach is disabled for this node; Kubelet will attach and detach volumes
I0917 16:07:45.814486      17 kubelet_node_status.go:72] Attempting to register node 127.0.0.1
I0917 16:07:45.814803      17 kubelet_node_status.go:75] Successfully registered node 127.0.0.1
W0917 16:07:45.815318      17 feature_gate.go:208] Setting GA feature gate TaintNodesByCondition=true. It will be removed in a future release.
--- FAIL: TestRegisterWithApiServerWithTaint (0.21s)
    feature_gate.go:36: error setting TaintNodesByCondition=false: cannot set feature gate TaintNodesByCondition to false, feature is locked to true
E0917 16:07:45.870034      17 kubelet_pods.go:147] Mount cannot be satisfied for container "", because the volume is missing or the volume mounter is nil: {Name:disk ReadOnly:true MountPath:/mnt/path3 SubPath: MountPropagation:<nil> SubPathExpr:}
E0917 16:07:45.870392      17 kubelet_pods.go:147] Mount cannot be satisfied for container "", because the volume is missing or the volume mounter is nil: {Name:disk ReadOnly:true MountPath:/mnt/path3 SubPath: MountPropagation:<nil> SubPathExpr:}
E0917 16:07:45.873032      17 kubelet_pods.go:108] Block volume cannot be satisfied for container "", because the volume is missing or the volume mapper is nil: {Name:disk DevicePath:/dev/sdaa}
E0917 16:07:45.873388      17 kubelet_pods.go:108] Block volume cannot be satisfied for container "", because the volume is missing or the volume mapper is nil: {Name:disk DevicePath:/dev/sdzz}
W0917 16:07:45.875010      17 feature_gate.go:208] Setting GA feature gate VolumeSubpath=false. It will be removed in a future release.
W0917 16:07:45.875407      17 feature_gate.go:208] Setting GA feature gate VolumeSubpath=true. It will be removed in a future release.
... skipping 6 lines ...
I0917 16:07:45.972011      17 kubelet_resources.go:45] allocatable: map[cpu:{{6 0} {<nil>} 6 DecimalSI} memory:{{4294967296 0} {<nil>} 4Gi BinarySI}]
I0917 16:07:45.972488      17 kubelet_resources.go:45] allocatable: map[cpu:{{6 0} {<nil>} 6 DecimalSI} memory:{{4294967296 0} {<nil>} 4Gi BinarySI}]
I0917 16:07:45.972945      17 kubelet_resources.go:45] allocatable: map[cpu:{{6 0} {<nil>} 6 DecimalSI} memory:{{4294967296 0} {<nil>} 4Gi BinarySI}]
E0917 16:07:45.975540      17 kubelet.go:1895] Update channel is closed. Exiting the sync loop.
I0917 16:07:45.975612      17 kubelet.go:1822] Starting kubelet main sync loop.
E0917 16:07:45.975713      17 kubelet.go:1895] Update channel is closed. Exiting the sync loop.
W0917 16:07:46.000883      17 predicate.go:74] Failed to admit pod failedpod_foo(4) - Update plugin resources failed due to Allocation failed, which is unexpected.
E0917 16:07:46.011658      17 runtime.go:195] invalid container ID: ""
E0917 16:07:46.012349      17 runtime.go:195] invalid container ID: ""
I0917 16:07:46.020572      17 kubelet.go:1647] Trying to delete pod foo_ns 11111111
W0917 16:07:46.020658      17 kubelet.go:1651] Deleted mirror pod "foo_ns(11111111)" because it is outdated
W0917 16:07:46.068420      17 kubelet_getters.go:292] Path "/tmp/kubelet_test.727016921/pods/pod1uid/volumes" does not exist
W0917 16:07:46.068541      17 kubelet_getters.go:292] Path "/tmp/kubelet_test.727016921/pods/pod1uid/volumes" does not exist
... skipping 3 lines ...
E0917 16:07:46.085621      17 kubelet_volumes.go:154] orphaned pod "pod1uid" found, but volume paths are still present on disk : There were a total of 1 errors similar to this. Turn up verbosity to see them.
W0917 16:07:46.088862      17 kubelet_getters.go:292] Path "/tmp/kubelet_test.039602775/pods/pod1uid/volumes" does not exist
W0917 16:07:46.089002      17 kubelet_getters.go:292] Path "/tmp/kubelet_test.039602775/pods/pod1uid/volumes" does not exist
E0917 16:07:46.089266      17 kubelet_volumes.go:154] orphaned pod "pod1uid" found, but volume subpaths are still present on disk : There were a total of 1 errors similar to this. Turn up verbosity to see them.
I0917 16:07:46.092129      17 volume_manager.go:249] Starting Kubelet Volume Manager
I0917 16:07:46.092311      17 desired_state_of_world_populator.go:131] Desired state populator starts to run
E0917 16:07:46.095678      17 reflector.go:275] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIDriver: unhandled watch: testing.WatchActionImpl{ActionImpl:testing.ActionImpl{Namespace:"", Verb:"watch", Resource:schema.GroupVersionResource{Group:"storage.k8s.io", Version:"v1beta1", Resource:"csidrivers"}, Subresource:""}, WatchRestrictions:testing.WatchRestrictions{Labels:labels.internalSelector(nil), Fields:fields.andTerm{}, ResourceVersion:""}}
I0917 16:07:46.193954      17 reconciler.go:227] operationExecutor.AttachVolume started for volume "vol1" (UniqueName: "fake/fake-device1") pod "foo" (UID: "12345678") 
I0917 16:07:46.194194      17 reconciler.go:154] Reconciler: start to sync state
I0917 16:07:46.194705      17 operation_generator.go:390] AttachVolume.Attach succeeded for volume "vol1" (UniqueName: "fake/fake-device1") from node "127.0.0.1" 
I0917 16:07:46.295040      17 reconciler.go:227] operationExecutor.AttachVolume started for volume "vol2" (UniqueName: "fake/fake-device2") pod "foo" (UID: "12345678") 
I0917 16:07:46.295443      17 operation_generator.go:390] AttachVolume.Attach succeeded for volume "vol2" (UniqueName: "fake/fake-device2") from node "127.0.0.1" 
I0917 16:07:46.296142      17 operation_generator.go:661] MountVolume.WaitForAttach entering for volume "vol1" (UniqueName: "fake/fake-device1") pod "foo" (UID: "12345678") DevicePath "/dev/vdb-test"
... skipping 2 lines ...
I0917 16:07:46.396685      17 operation_generator.go:661] MountVolume.WaitForAttach entering for volume "vol2" (UniqueName: "fake/fake-device2") pod "foo" (UID: "12345678") DevicePath "/dev/vdb-test"
I0917 16:07:46.397188      17 operation_generator.go:670] MountVolume.WaitForAttach succeeded for volume "vol2" (UniqueName: "fake/fake-device2") pod "foo" (UID: "12345678") DevicePath "/dev/sdb"
I0917 16:07:46.397307      17 operation_generator.go:697] MountVolume.MountDevice succeeded for volume "vol2" (UniqueName: "fake/fake-device2") pod "foo" (UID: "12345678") device mount path ""
I0917 16:07:46.694483      17 volume_manager.go:260] Shutting down Kubelet Volume Manager
I0917 16:07:46.696792      17 volume_manager.go:249] Starting Kubelet Volume Manager
I0917 16:07:46.696803      17 desired_state_of_world_populator.go:131] Desired state populator starts to run
E0917 16:07:46.700109      17 reflector.go:275] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIDriver: unhandled watch: testing.WatchActionImpl{ActionImpl:testing.ActionImpl{Namespace:"", Verb:"watch", Resource:schema.GroupVersionResource{Group:"storage.k8s.io", Version:"v1beta1", Resource:"csidrivers"}, Subresource:""}, WatchRestrictions:testing.WatchRestrictions{Labels:labels.internalSelector(nil), Fields:fields.andTerm{}, ResourceVersion:""}}
I0917 16:07:46.898819      17 reconciler.go:227] operationExecutor.AttachVolume started for volume "vol2" (UniqueName: "fake/fake-device2") pod "pod2" (UID: "pod2uid") 
I0917 16:07:46.899131      17 reconciler.go:227] operationExecutor.AttachVolume started for volume "vol3" (UniqueName: "fake/fake-device3") pod "pod3" (UID: "pod3uid") 
I0917 16:07:46.899434      17 operation_generator.go:390] AttachVolume.Attach succeeded for volume "vol2" (UniqueName: "fake/fake-device2") from node "127.0.0.1" 
I0917 16:07:46.899647      17 operation_generator.go:390] AttachVolume.Attach succeeded for volume "vol1" (UniqueName: "fake/fake-device1") from node "127.0.0.1" 
I0917 16:07:46.899459      17 reconciler.go:227] operationExecutor.AttachVolume started for volume "vol1" (UniqueName: "fake/fake-device1") pod "pod1" (UID: "pod1uid") 
I0917 16:07:46.900386      17 operation_generator.go:390] AttachVolume.Attach succeeded for volume "vol3" (UniqueName: "fake/fake-device3") from node "127.0.0.1" 
... skipping 7 lines ...
I0917 16:07:47.003327      17 operation_generator.go:661] MountVolume.WaitForAttach entering for volume "vol3" (UniqueName: "fake/fake-device3") pod "pod3" (UID: "pod3uid") DevicePath "/dev/vdb-test"
I0917 16:07:47.003439      17 operation_generator.go:670] MountVolume.WaitForAttach succeeded for volume "vol3" (UniqueName: "fake/fake-device3") pod "pod3" (UID: "pod3uid") DevicePath "/dev/sdb"
I0917 16:07:47.003534      17 operation_generator.go:697] MountVolume.MountDevice succeeded for volume "vol3" (UniqueName: "fake/fake-device3") pod "pod3" (UID: "pod3uid") device mount path ""
I0917 16:07:47.297574      17 volume_manager.go:260] Shutting down Kubelet Volume Manager
I0917 16:07:47.301096      17 volume_manager.go:249] Starting Kubelet Volume Manager
I0917 16:07:47.301201      17 desired_state_of_world_populator.go:131] Desired state populator starts to run
E0917 16:07:47.304207      17 reflector.go:275] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIDriver: unhandled watch: testing.WatchActionImpl{ActionImpl:testing.ActionImpl{Namespace:"", Verb:"watch", Resource:schema.GroupVersionResource{Group:"storage.k8s.io", Version:"v1beta1", Resource:"csidrivers"}, Subresource:""}, WatchRestrictions:testing.WatchRestrictions{Labels:labels.internalSelector(nil), Fields:fields.andTerm{}, ResourceVersion:""}}
I0917 16:07:47.502317      17 reconciler.go:227] operationExecutor.AttachVolume started for volume "vol1" (UniqueName: "fake/fake-device") pod "foo" (UID: "12345678") 
I0917 16:07:47.502683      17 reconciler.go:154] Reconciler: start to sync state
I0917 16:07:47.503070      17 operation_generator.go:390] AttachVolume.Attach succeeded for volume "vol1" (UniqueName: "fake/fake-device") from node "127.0.0.1" 
I0917 16:07:47.603749      17 operation_generator.go:661] MountVolume.WaitForAttach entering for volume "vol1" (UniqueName: "fake/fake-device") pod "foo" (UID: "12345678") DevicePath "/dev/vdb-test"
I0917 16:07:47.603907      17 operation_generator.go:670] MountVolume.WaitForAttach succeeded for volume "vol1" (UniqueName: "fake/fake-device") pod "foo" (UID: "12345678") DevicePath "/dev/sdb"
I0917 16:07:47.604268      17 operation_generator.go:697] MountVolume.MountDevice succeeded for volume "vol1" (UniqueName: "fake/fake-device") pod "foo" (UID: "12345678") device mount path ""
I0917 16:07:47.902908      17 volume_manager.go:260] Shutting down Kubelet Volume Manager
I0917 16:07:47.905427      17 volume_manager.go:249] Starting Kubelet Volume Manager
I0917 16:07:47.905506      17 desired_state_of_world_populator.go:131] Desired state populator starts to run
E0917 16:07:47.909042      17 reflector.go:275] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIDriver: unhandled watch: testing.WatchActionImpl{ActionImpl:testing.ActionImpl{Namespace:"", Verb:"watch", Resource:schema.GroupVersionResource{Group:"storage.k8s.io", Version:"v1beta1", Resource:"csidrivers"}, Subresource:""}, WatchRestrictions:testing.WatchRestrictions{Labels:labels.internalSelector(nil), Fields:fields.andTerm{}, ResourceVersion:""}}
I0917 16:07:48.107172      17 reconciler.go:227] operationExecutor.AttachVolume started for volume "vol1" (UniqueName: "fake/fake-device") pod "foo" (UID: "12345678") 
I0917 16:07:48.107597      17 reconciler.go:154] Reconciler: start to sync state
I0917 16:07:48.107965      17 operation_generator.go:390] AttachVolume.Attach succeeded for volume "vol1" (UniqueName: "fake/fake-device") from node "127.0.0.1" 
I0917 16:07:48.208715      17 operation_generator.go:661] MountVolume.WaitForAttach entering for volume "vol1" (UniqueName: "fake/fake-device") pod "foo" (UID: "12345678") DevicePath "/dev/vdb-test"
I0917 16:07:48.209127      17 operation_generator.go:670] MountVolume.WaitForAttach succeeded for volume "vol1" (UniqueName: "fake/fake-device") pod "foo" (UID: "12345678") DevicePath "/dev/sdb"
I0917 16:07:48.209473      17 operation_generator.go:697] MountVolume.MountDevice succeeded for volume "vol1" (UniqueName: "fake/fake-device") pod "foo" (UID: "12345678") device mount path ""
... skipping 3 lines ...
I0917 16:07:48.610430      17 operation_generator.go:931] UnmountDevice succeeded for volume "vol1" %!(EXTRA string=UnmountDevice succeeded for volume "vol1" (UniqueName: "fake/fake-device") on node "127.0.0.1" )
I0917 16:07:48.711347      17 reconciler.go:315] operationExecutor.DetachVolume started for volume "vol1" (UniqueName: "fake/fake-device") on node "127.0.0.1" 
I0917 16:07:48.711609      17 operation_generator.go:558] DetachVolume.Detach succeeded for volume "vol1" (UniqueName: "fake/fake-device") on node "127.0.0.1" 
I0917 16:07:48.758221      17 volume_manager.go:260] Shutting down Kubelet Volume Manager
I0917 16:07:48.761729      17 volume_manager.go:249] Starting Kubelet Volume Manager
I0917 16:07:48.762217      17 desired_state_of_world_populator.go:131] Desired state populator starts to run
E0917 16:07:48.765231      17 reflector.go:121] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.CSIDriver: no reaction implemented for {{ list storage.k8s.io/v1beta1, Resource=csidrivers } storage.k8s.io/v1beta1, Kind=CSIDriver  { }}
I0917 16:07:48.963396      17 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "vol1" (UniqueName: "fake/fake-device") pod "foo" (UID: "12345678") 
I0917 16:07:48.963916      17 reconciler.go:154] Reconciler: start to sync state
I0917 16:07:48.963972      17 operation_generator.go:1422] Controller attach succeeded for volume "vol1" (UniqueName: "fake/fake-device") pod "foo" (UID: "12345678") device path: "fake/path"
I0917 16:07:49.065334      17 operation_generator.go:661] MountVolume.WaitForAttach entering for volume "vol1" (UniqueName: "fake/fake-device") pod "foo" (UID: "12345678") DevicePath "fake/path"
I0917 16:07:49.065497      17 operation_generator.go:670] MountVolume.WaitForAttach succeeded for volume "vol1" (UniqueName: "fake/fake-device") pod "foo" (UID: "12345678") DevicePath "/dev/sdb"
I0917 16:07:49.065856      17 operation_generator.go:697] MountVolume.MountDevice succeeded for volume "vol1" (UniqueName: "fake/fake-device") pod "foo" (UID: "12345678") device mount path ""
I0917 16:07:49.362848      17 volume_manager.go:260] Shutting down Kubelet Volume Manager
I0917 16:07:49.367087      17 volume_manager.go:249] Starting Kubelet Volume Manager
I0917 16:07:49.367087      17 desired_state_of_world_populator.go:131] Desired state populator starts to run
E0917 16:07:49.370086      17 reflector.go:121] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.CSIDriver: no reaction implemented for {{ list storage.k8s.io/v1beta1, Resource=csidrivers } storage.k8s.io/v1beta1, Kind=CSIDriver  { }}
I0917 16:07:49.469049      17 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "vol1" (UniqueName: "fake/fake-device") pod "foo" (UID: "12345678") 
I0917 16:07:49.469462      17 reconciler.go:154] Reconciler: start to sync state
E0917 16:07:49.469934      17 nestedpendingoperations.go:270] Operation for "\"fake/fake-device\"" failed. No retries permitted until 2019-09-17 16:07:49.969696678 +0000 UTC m=+10.073679757 (durationBeforeRetry 500ms). Error: "Volume has not been added to the list of VolumesInUse in the node's volume status for volume \"vol1\" (UniqueName: \"fake/fake-device\") pod \"foo\" (UID: \"12345678\") "
I0917 16:07:49.971652      17 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "vol1" (UniqueName: "fake/fake-device") pod "foo" (UID: "12345678") 
I0917 16:07:49.971787      17 operation_generator.go:1422] Controller attach succeeded for volume "vol1" (UniqueName: "fake/fake-device") pod "foo" (UID: "12345678") device path: "fake/path"
I0917 16:07:50.072779      17 operation_generator.go:661] MountVolume.WaitForAttach entering for volume "vol1" (UniqueName: "fake/fake-device") pod "foo" (UID: "12345678") DevicePath "fake/path"
I0917 16:07:50.072914      17 operation_generator.go:670] MountVolume.WaitForAttach succeeded for volume "vol1" (UniqueName: "fake/fake-device") pod "foo" (UID: "12345678") DevicePath "/dev/sdb"
I0917 16:07:50.073161      17 operation_generator.go:697] MountVolume.MountDevice succeeded for volume "vol1" (UniqueName: "fake/fake-device") pod "foo" (UID: "12345678") device mount path ""
I0917 16:07:50.251027      17 kubelet_node_status.go:289] Controller attach/detach is disabled for this node; Kubelet will attach and detach volumes
E0917 16:07:50.251643      17 eviction_manager.go:246] eviction manager: failed to get summary stats: failed to get root cgroup stats: failed to get cgroup stats for "/": unexpected number of containers: 0
I0917 16:07:50.267483      17 kubelet_node_status.go:289] Controller attach/detach is disabled for this node; Kubelet will attach and detach volumes
E0917 16:07:50.267819      17 eviction_manager.go:246] eviction manager: failed to get summary stats: failed to get root cgroup stats: failed to get cgroup stats for "/": unexpected number of containers: 0
I0917 16:07:50.273214      17 reconciler.go:181] operationExecutor.UnmountVolume started for volume "vol1" (UniqueName: "fake/fake-device") pod "12345678" (UID: "12345678") 
I0917 16:07:50.273735      17 operation_generator.go:831] UnmountVolume.TearDown succeeded for volume "fake/fake-device" (OuterVolumeSpecName: "vol1") pod "12345678" (UID: "12345678"). InnerVolumeSpecName "vol1". PluginName "fake", VolumeGidValue ""
I0917 16:07:50.292524      17 kubelet_node_status.go:289] Controller attach/detach is disabled for this node; Kubelet will attach and detach volumes
E0917 16:07:50.293077      17 eviction_manager.go:246] eviction manager: failed to get summary stats: failed to get root cgroup stats: failed to get cgroup stats for "/": unexpected number of containers: 0
E0917 16:07:50.370765      17 reflector.go:121] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.CSIDriver: no reaction implemented for {{ list storage.k8s.io/v1beta1, Resource=csidrivers } storage.k8s.io/v1beta1, Kind=CSIDriver  { }}
I0917 16:07:50.373770      17 reconciler.go:294] operationExecutor.UnmountDevice started for volume "vol1" (UniqueName: "fake/fake-device") on node "127.0.0.1" 
I0917 16:07:50.373800      17 operation_generator.go:931] UnmountDevice succeeded for volume "vol1" %!(EXTRA string=UnmountDevice succeeded for volume "vol1" (UniqueName: "fake/fake-device") on node "127.0.0.1" )
I0917 16:07:50.474405      17 reconciler.go:301] Volume detached for volume "vol1" (UniqueName: "fake/fake-device") on node "127.0.0.1" DevicePath "/dev/sdb"
I0917 16:07:50.518626      17 volume_manager.go:260] Shutting down Kubelet Volume Manager
W0917 16:07:50.520956      17 pod_container_deletor.go:75] Container "abc" not found in pod's containers
I0917 16:07:50.693932      17 runonce.go:88] Waiting for 1 pods
I0917 16:07:50.694019      17 runonce.go:123] pod "foo_new(12345678)" containers running
I0917 16:07:50.694630      17 runonce.go:102] started pod "foo_new(12345678)"
I0917 16:07:50.694706      17 runonce.go:108] 1 pods started
FAIL
================================================================================
==================== Test output for //pkg/kubelet:go_default_test:
I0917 16:07:59.478053      17 kubelet_node_status.go:289] Controller attach/detach is disabled for this node; Kubelet will attach and detach volumes
I0917 16:07:59.479036      17 kubelet_node_status.go:289] Controller attach/detach is disabled for this node; Kubelet will attach and detach volumes
E0917 16:07:59.479584      17 eviction_manager.go:246] eviction manager: failed to get summary stats: failed to get root cgroup stats: failed to get cgroup stats for "/": unexpected number of containers: 0
I0917 16:07:59.480641      17 plugin_manager.go:116] Starting Kubelet Plugin Manager
I0917 16:07:59.493753      17 kubelet_node_status.go:289] Controller attach/detach is disabled for this node; Kubelet will attach and detach volumes
I0917 16:07:59.501108      17 plugin_manager.go:116] Starting Kubelet Plugin Manager
I0917 16:07:59.501773      17 kubelet_node_status.go:289] Controller attach/detach is disabled for this node; Kubelet will attach and detach volumes
E0917 16:07:59.502280      17 eviction_manager.go:246] eviction manager: failed to get summary stats: failed to get root cgroup stats: failed to get cgroup stats for "/": unexpected number of containers: 0
I0917 16:07:59.508633      17 kubelet_node_status.go:289] Controller attach/detach is disabled for this node; Kubelet will attach and detach volumes
I0917 16:07:59.516385      17 plugin_manager.go:116] Starting Kubelet Plugin Manager
I0917 16:07:59.516848      17 kubelet_node_status.go:289] Controller attach/detach is disabled for this node; Kubelet will attach and detach volumes
E0917 16:07:59.517159      17 eviction_manager.go:246] eviction manager: failed to get summary stats: failed to get root cgroup stats: failed to get cgroup stats for "/": unexpected number of containers: 0
E0917 16:08:00.526860      17 kubelet_node_status.go:388] Error updating node status, will retry: error getting node "127.0.0.1": Get http://127.0.0.1:36789/api/v1/nodes/127.0.0.1?resourceVersion=0&timeout=1s: net/http: request canceled (Client.Timeout exceeded while awaiting headers)
E0917 16:08:01.528097      17 kubelet_node_status.go:388] Error updating node status, will retry: error getting node "127.0.0.1": Get http://127.0.0.1:36789/api/v1/nodes/127.0.0.1?timeout=1s: context deadline exceeded (Client.Timeout exceeded while awaiting headers)
E0917 16:08:02.529367      17 kubelet_node_status.go:388] Error updating node status, will retry: error getting node "127.0.0.1": Get http://127.0.0.1:36789/api/v1/nodes/127.0.0.1?timeout=1s: context deadline exceeded (Client.Timeout exceeded while awaiting headers)
E0917 16:08:03.530507      17 kubelet_node_status.go:388] Error updating node status, will retry: error getting node "127.0.0.1": Get http://127.0.0.1:36789/api/v1/nodes/127.0.0.1?timeout=1s: context deadline exceeded (Client.Timeout exceeded while awaiting headers)
E0917 16:08:04.531645      17 kubelet_node_status.go:388] Error updating node status, will retry: error getting node "127.0.0.1": Get http://127.0.0.1:36789/api/v1/nodes/127.0.0.1?timeout=1s: net/http: request canceled (Client.Timeout exceeded while awaiting headers)
I0917 16:08:04.535103      17 kubelet_node_status.go:289] Controller attach/detach is disabled for this node; Kubelet will attach and detach volumes
I0917 16:08:04.536354      17 kubelet_node_status.go:289] Controller attach/detach is disabled for this node; Kubelet will attach and detach volumes
E0917 16:08:04.536700      17 eviction_manager.go:246] eviction manager: failed to get summary stats: failed to get root cgroup stats: failed to get cgroup stats for "/": unexpected number of containers: 0
I0917 16:08:04.537390      17 plugin_manager.go:116] Starting Kubelet Plugin Manager
I0917 16:08:04.556002      17 setters.go:539] Node became not ready: {Type:Ready Status:False LastHeartbeatTime:2019-09-17 16:07:34.555756629 +0000 UTC m=-24.624794554 LastTransitionTime:2019-09-17 16:07:34.555756629 +0000 UTC m=-24.624794554 Reason:KubeletNotReady Message:container runtime is down}
E0917 16:08:04.564054      17 kubelet.go:2174] Container runtime sanity check failed: injected runtime status error
E0917 16:08:04.571893      17 kubelet.go:2178] Container runtime status is nil
E0917 16:08:04.579042      17 kubelet.go:2187] Container runtime network not ready: <nil>
E0917 16:08:04.579163      17 kubelet.go:2198] Container runtime not ready: <nil>
E0917 16:08:04.586370      17 kubelet.go:2198] Container runtime not ready: RuntimeReady=false reason: message:
E0917 16:08:04.605133      17 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason: message:
I0917 16:08:04.605429      17 setters.go:539] Node became not ready: {Type:Ready Status:False LastHeartbeatTime:2019-09-17 16:08:04.564036096 +0000 UTC m=+5.383484903 LastTransitionTime:2019-09-17 16:08:04.564036096 +0000 UTC m=+5.383484903 Reason:KubeletNotReady Message:runtime network not ready: NetworkReady=false reason: message:}
E0917 16:08:04.614816      17 kubelet_node_status.go:388] Error updating node status, will retry: error getting node "127.0.0.1": nodes "127.0.0.1" not found
E0917 16:08:04.614907      17 kubelet_node_status.go:388] Error updating node status, will retry: error getting node "127.0.0.1": nodes "127.0.0.1" not found
E0917 16:08:04.615005      17 kubelet_node_status.go:388] Error updating node status, will retry: error getting node "127.0.0.1": nodes "127.0.0.1" not found
E0917 16:08:04.615109      17 kubelet_node_status.go:388] Error updating node status, will retry: error getting node "127.0.0.1": nodes "127.0.0.1" not found
E0917 16:08:04.615178      17 kubelet_node_status.go:388] Error updating node status, will retry: error getting node "127.0.0.1": nodes "127.0.0.1" not found
I0917 16:08:04.617222      17 kubelet_node_status.go:289] Controller attach/detach is disabled for this node; Kubelet will attach and detach volumes
I0917 16:08:04.623860      17 kubelet_node_status.go:289] Controller attach/detach is disabled for this node; Kubelet will attach and detach volumes
E0917 16:08:04.624145      17 eviction_manager.go:246] eviction manager: failed to get summary stats: failed to get root cgroup stats: failed to get cgroup stats for "/": unexpected number of containers: 0
I0917 16:08:04.624771      17 plugin_manager.go:116] Starting Kubelet Plugin Manager
I0917 16:08:04.651415      17 kubelet_network.go:77] Setting Pod CIDR:  -> 10.0.0.0/24,2000::/10
I0917 16:08:04.796409      17 kubelet_node_status.go:289] Controller attach/detach is disabled for this node; Kubelet will attach and detach volumes
I0917 16:08:04.797252      17 kubelet_node_status.go:72] Attempting to register node 127.0.0.1
I0917 16:08:04.797610      17 kubelet_node_status.go:114] Node 127.0.0.1 was previously registered
I0917 16:08:04.797669      17 kubelet_node_status.go:75] Successfully registered node 127.0.0.1
I0917 16:08:04.801888      17 kubelet_node_status.go:114] Node 127.0.0.1 was previously registered
I0917 16:08:04.802981      17 kubelet_node_status.go:114] Node 127.0.0.1 was previously registered
I0917 16:08:04.803034      17 kubelet_node_status.go:202] Controller attach-detach setting changed to false; updating existing Node
I0917 16:08:04.806551      17 kubelet_node_status.go:114] Node 127.0.0.1 was previously registered
I0917 16:08:04.806599      17 kubelet_node_status.go:205] Controller attach-detach setting changed to true; updating existing Node
E0917 16:08:04.809499      17 kubelet_node_status.go:94] Unable to register node "127.0.0.1" with API server: 
E0917 16:08:04.810471      17 kubelet_node_status.go:100] Unable to register node "127.0.0.1" with API server: error getting existing node: 
I0917 16:08:04.811440      17 kubelet_node_status.go:114] Node 127.0.0.1 was previously registered
I0917 16:08:04.811487      17 kubelet_node_status.go:202] Controller attach-detach setting changed to false; updating existing Node
E0917 16:08:04.812456      17 kubelet_node_status.go:124] Unable to reconcile node "127.0.0.1" with API server: error updating node: failed to patch status "{\"metadata\":{\"annotations\":null}}" for node "127.0.0.1": 
I0917 16:08:04.817893      17 kubelet_node_status.go:289] Controller attach/detach is disabled for this node; Kubelet will attach and detach volumes
I0917 16:08:04.818764      17 kubelet_node_status.go:289] Controller attach/detach is disabled for this node; Kubelet will attach and detach volumes
E0917 16:08:04.819264      17 eviction_manager.go:246] eviction manager: failed to get summary stats: failed to get root cgroup stats: failed to get cgroup stats for "/": unexpected number of containers: 0
I0917 16:08:04.820141      17 plugin_manager.go:116] Starting Kubelet Plugin Manager
I0917 16:08:04.839690      17 kubelet_node_status.go:139] Zero out resource test.com/resource1 capacity in existing node.
I0917 16:08:04.839869      17 kubelet_node_status.go:139] Zero out resource test.com/resource2 capacity in existing node.
W0917 16:08:04.843153      17 feature_gate.go:208] Setting GA feature gate TaintNodesByCondition=true. It will be removed in a future release.
I0917 16:08:04.943924      17 kubelet_node_status.go:289] Controller attach/detach is disabled for this node; Kubelet will attach and detach volumes
I0917 16:08:04.944671      17 kubelet_node_status.go:72] Attempting to register node 127.0.0.1
I0917 16:08:04.944942      17 kubelet_node_status.go:75] Successfully registered node 127.0.0.1
W0917 16:08:04.945356      17 feature_gate.go:208] Setting GA feature gate TaintNodesByCondition=true. It will be removed in a future release.
I0917 16:08:05.046232      17 kubelet_node_status.go:289] Controller attach/detach is disabled for this node; Kubelet will attach and detach volumes
I0917 16:08:05.046999      17 kubelet_node_status.go:72] Attempting to register node 127.0.0.1
I0917 16:08:05.047211      17 kubelet_node_status.go:75] Successfully registered node 127.0.0.1
W0917 16:08:05.047720      17 feature_gate.go:208] Setting GA feature gate TaintNodesByCondition=true. It will be removed in a future release.
--- FAIL: TestRegisterWithApiServerWithTaint (0.21s)
    feature_gate.go:36: error setting TaintNodesByCondition=false: cannot set feature gate TaintNodesByCondition to false, feature is locked to true
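Note: the failure above is the feature gate lock, not the kubelet registration path itself. Once TaintNodesByCondition graduates to GA it is registered with LockToDefault, so a test that tries to force the gate back to false is rejected. A minimal sketch of that behavior, assuming k8s.io/component-base/featuregate and reusing the gate name purely for illustration (this is not the PR's fix, just a demonstration of why Set(...=false) now errors):

package main

import (
	"fmt"

	"k8s.io/component-base/featuregate"
)

func main() {
	gate := featuregate.NewFeatureGate()

	// GA features are typically registered with LockToDefault: true,
	// which is what produces the "feature is locked to true" message above.
	_ = gate.Add(map[featuregate.Feature]featuregate.FeatureSpec{
		"TaintNodesByCondition": {Default: true, PreRelease: featuregate.GA, LockToDefault: true},
	})

	// Attempting to flip a locked GA gate off fails with the same error
	// reported by TestRegisterWithApiServerWithTaint.
	if err := gate.Set("TaintNodesByCondition=false"); err != nil {
		fmt.Println(err)
	}
}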
E0917 16:08:05.104354      17 kubelet_pods.go:147] Mount cannot be satisfied for container "", because the volume is missing or the volume mounter is nil: {Name:disk ReadOnly:true MountPath:/mnt/path3 SubPath: MountPropagation:<nil> SubPathExpr:}
E0917 16:08:05.105029      17 kubelet_pods.go:147] Mount cannot be satisfied for container "", because the volume is missing or the volume mounter is nil: {Name:disk ReadOnly:true MountPath:/mnt/path3 SubPath: MountPropagation:<nil> SubPathExpr:}
E0917 16:08:05.106886      17 kubelet_pods.go:108] Block volume cannot be satisfied for container "", because the volume is missing or the volume mapper is nil: {Name:disk DevicePath:/dev/sdaa}
E0917 16:08:05.107312      17 kubelet_pods.go:108] Block volume cannot be satisfied for container "", because the volume is missing or the volume mapper is nil: {Name:disk DevicePath:/dev/sdzz}
W0917 16:08:05.109392      17 feature_gate.go:208] Setting GA feature gate VolumeSubpath=false. It will be removed in a future release.
W0917 16:08:05.109703      17 feature_gate.go:208] Setting GA feature gate VolumeSubpath=true. It will be removed in a future release.
... skipping 6 lines ...
I0917 16:08:05.201742      17 kubelet_resources.go:45] allocatable: map[cpu:{{6 0} {<nil>} 6 DecimalSI} memory:{{4294967296 0} {<nil>} 4Gi BinarySI}]
I0917 16:08:05.202142      17 kubelet_resources.go:45] allocatable: map[cpu:{{6 0} {<nil>} 6 DecimalSI} memory:{{4294967296 0} {<nil>} 4Gi BinarySI}]
I0917 16:08:05.202547      17 kubelet_resources.go:45] allocatable: map[cpu:{{6 0} {<nil>} 6 DecimalSI} memory:{{4294967296 0} {<nil>} 4Gi BinarySI}]
E0917 16:08:05.205296      17 kubelet.go:1895] Update channel is closed. Exiting the sync loop.
I0917 16:08:05.205356      17 kubelet.go:1822] Starting kubelet main sync loop.
E0917 16:08:05.205434      17 kubelet.go:1895] Update channel is closed. Exiting the sync loop.
W0917 16:08:05.229720      17 predicate.go:74] Failed to admit pod failedpod_foo(4) - Update plugin resources failed due to Allocation failed, which is unexpected.
E0917 16:08:05.238573      17 runtime.go:195] invalid container ID: ""
E0917 16:08:05.239233      17 runtime.go:195] invalid container ID: ""
I0917 16:08:05.247248      17 kubelet.go:1647] Trying to delete pod foo_ns 11111111
W0917 16:08:05.247336      17 kubelet.go:1651] Deleted mirror pod "foo_ns(11111111)" because it is outdated
W0917 16:08:05.301069      17 kubelet_getters.go:292] Path "/tmp/kubelet_test.910006466/pods/pod1uid/volumes" does not exist
W0917 16:08:05.301190      17 kubelet_getters.go:292] Path "/tmp/kubelet_test.910006466/pods/pod1uid/volumes" does not exist
... skipping 3 lines ...
E0917 16:08:05.317077      17 kubelet_volumes.go:154] orphaned pod "pod1uid" found, but volume subpaths are still present on disk : There were a total of 1 errors similar to this. Turn up verbosity to see them.
W0917 16:08:05.319780      17 kubelet_getters.go:292] Path "/tmp/kubelet_test.494558486/pods/pod1uid/volumes" does not exist
W0917 16:08:05.319891      17 kubelet_getters.go:292] Path "/tmp/kubelet_test.494558486/pods/pod1uid/volumes" does not exist
E0917 16:08:05.319993      17 kubelet_volumes.go:154] orphaned pod "pod1uid" found, but volume subpaths are still present on disk : There were a total of 1 errors similar to this. Turn up verbosity to see them.
I0917 16:08:05.322733      17 volume_manager.go:249] Starting Kubelet Volume Manager
I0917 16:08:05.322832      17 desired_state_of_world_populator.go:131] Desired state populator starts to run
E0917 16:08:05.326521      17 reflector.go:275] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIDriver: unhandled watch: testing.WatchActionImpl{ActionImpl:testing.ActionImpl{Namespace:"", Verb:"watch", Resource:schema.GroupVersionResource{Group:"storage.k8s.io", Version:"v1beta1", Resource:"csidrivers"}, Subresource:""}, WatchRestrictions:testing.WatchRestrictions{Labels:labels.internalSelector(nil), Fields:fields.andTerm{}, ResourceVersion:""}}
I0917 16:08:05.524987      17 reconciler.go:227] operationExecutor.AttachVolume started for volume "vol1" (UniqueName: "fake/fake-device1") pod "foo" (UID: "12345678") 
I0917 16:08:05.525244      17 operation_generator.go:390] AttachVolume.Attach succeeded for volume "vol1" (UniqueName: "fake/fake-device1") from node "127.0.0.1" 
I0917 16:08:05.526113      17 reconciler.go:227] operationExecutor.AttachVolume started for volume "vol2" (UniqueName: "fake/fake-device2") pod "foo" (UID: "12345678") 
I0917 16:08:05.526387      17 reconciler.go:154] Reconciler: start to sync state
I0917 16:08:05.526927      17 operation_generator.go:390] AttachVolume.Attach succeeded for volume "vol2" (UniqueName: "fake/fake-device2") from node "127.0.0.1" 
I0917 16:08:05.627543      17 operation_generator.go:661] MountVolume.WaitForAttach entering for volume "vol2" (UniqueName: "fake/fake-device2") pod "foo" (UID: "12345678") DevicePath "/dev/vdb-test"
... skipping 2 lines ...
I0917 16:08:05.628399      17 operation_generator.go:697] MountVolume.MountDevice succeeded for volume "vol2" (UniqueName: "fake/fake-device2") pod "foo" (UID: "12345678") device mount path ""
I0917 16:08:05.628723      17 operation_generator.go:670] MountVolume.WaitForAttach succeeded for volume "vol1" (UniqueName: "fake/fake-device1") pod "foo" (UID: "12345678") DevicePath "/dev/sdb"
I0917 16:08:05.629131      17 operation_generator.go:697] MountVolume.MountDevice succeeded for volume "vol1" (UniqueName: "fake/fake-device1") pod "foo" (UID: "12345678") device mount path ""
I0917 16:08:05.923957      17 volume_manager.go:260] Shutting down Kubelet Volume Manager
I0917 16:08:05.926748      17 volume_manager.go:249] Starting Kubelet Volume Manager
I0917 16:08:05.926849      17 desired_state_of_world_populator.go:131] Desired state populator starts to run
E0917 16:08:05.929430      17 reflector.go:275] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIDriver: unhandled watch: testing.WatchActionImpl{ActionImpl:testing.ActionImpl{Namespace:"", Verb:"watch", Resource:schema.GroupVersionResource{Group:"storage.k8s.io", Version:"v1beta1", Resource:"csidrivers"}, Subresource:""}, WatchRestrictions:testing.WatchRestrictions{Labels:labels.internalSelector(nil), Fields:fields.andTerm{}, ResourceVersion:""}}
I0917 16:08:06.128085      17 reconciler.go:227] operationExecutor.AttachVolume started for volume "vol1" (UniqueName: "fake/fake-device1") pod "pod1" (UID: "pod1uid") 
I0917 16:08:06.128287      17 operation_generator.go:390] AttachVolume.Attach succeeded for volume "vol1" (UniqueName: "fake/fake-device1") from node "127.0.0.1" 
I0917 16:08:06.128439      17 reconciler.go:227] operationExecutor.AttachVolume started for volume "vol2" (UniqueName: "fake/fake-device2") pod "pod2" (UID: "pod2uid") 
I0917 16:08:06.128770      17 reconciler.go:227] operationExecutor.AttachVolume started for volume "vol3" (UniqueName: "fake/fake-device3") pod "pod3" (UID: "pod3uid") 
I0917 16:08:06.128781      17 operation_generator.go:390] AttachVolume.Attach succeeded for volume "vol2" (UniqueName: "fake/fake-device2") from node "127.0.0.1" 
I0917 16:08:06.128830      17 reconciler.go:154] Reconciler: start to sync state
... skipping 7 lines ...
I0917 16:08:06.232019      17 operation_generator.go:661] MountVolume.WaitForAttach entering for volume "vol3" (UniqueName: "fake/fake-device3") pod "pod3" (UID: "pod3uid") DevicePath "/dev/vdb-test"
I0917 16:08:06.232173      17 operation_generator.go:670] MountVolume.WaitForAttach succeeded for volume "vol3" (UniqueName: "fake/fake-device3") pod "pod3" (UID: "pod3uid") DevicePath "/dev/sdb"
I0917 16:08:06.232330      17 operation_generator.go:697] MountVolume.MountDevice succeeded for volume "vol3" (UniqueName: "fake/fake-device3") pod "pod3" (UID: "pod3uid") device mount path ""
I0917 16:08:06.527568      17 volume_manager.go:260] Shutting down Kubelet Volume Manager
I0917 16:08:06.531300      17 volume_manager.go:249] Starting Kubelet Volume Manager
I0917 16:08:06.531403      17 desired_state_of_world_populator.go:131] Desired state populator starts to run
E0917 16:08:06.534580      17 reflector.go:275] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIDriver: unhandled watch: testing.WatchActionImpl{ActionImpl:testing.ActionImpl{Namespace:"", Verb:"watch", Resource:schema.GroupVersionResource{Group:"storage.k8s.io", Version:"v1beta1", Resource:"csidrivers"}, Subresource:""}, WatchRestrictions:testing.WatchRestrictions{Labels:labels.internalSelector(nil), Fields:fields.andTerm{}, ResourceVersion:""}}
I0917 16:08:06.733232      17 reconciler.go:227] operationExecutor.AttachVolume started for volume "vol1" (UniqueName: "fake/fake-device") pod "foo" (UID: "12345678") 
I0917 16:08:06.733524      17 reconciler.go:154] Reconciler: start to sync state
I0917 16:08:06.733381      17 operation_generator.go:390] AttachVolume.Attach succeeded for volume "vol1" (UniqueName: "fake/fake-device") from node "127.0.0.1" 
I0917 16:08:06.834997      17 operation_generator.go:661] MountVolume.WaitForAttach entering for volume "vol1" (UniqueName: "fake/fake-device") pod "foo" (UID: "12345678") DevicePath "/dev/vdb-test"
I0917 16:08:06.835213      17 operation_generator.go:670] MountVolume.WaitForAttach succeeded for volume "vol1" (UniqueName: "fake/fake-device") pod "foo" (UID: "12345678") DevicePath "/dev/sdb"
I0917 16:08:06.835343      17 operation_generator.go:697] MountVolume.MountDevice succeeded for volume "vol1" (UniqueName: "fake/fake-device") pod "foo" (UID: "12345678") device mount path ""
I0917 16:08:07.132610      17 volume_manager.go:260] Shutting down Kubelet Volume Manager
I0917 16:08:07.135826      17 desired_state_of_world_populator.go:131] Desired state populator starts to run
I0917 16:08:07.135827      17 volume_manager.go:249] Starting Kubelet Volume Manager
E0917 16:08:07.138743      17 reflector.go:275] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIDriver: unhandled watch: testing.WatchActionImpl{ActionImpl:testing.ActionImpl{Namespace:"", Verb:"watch", Resource:schema.GroupVersionResource{Group:"storage.k8s.io", Version:"v1beta1", Resource:"csidrivers"}, Subresource:""}, WatchRestrictions:testing.WatchRestrictions{Labels:labels.internalSelector(nil), Fields:fields.andTerm{}, ResourceVersion:""}}
I0917 16:08:07.337610      17 reconciler.go:227] operationExecutor.AttachVolume started for volume "vol1" (UniqueName: "fake/fake-device") pod "foo" (UID: "12345678") 
I0917 16:08:07.338203      17 reconciler.go:154] Reconciler: start to sync state
I0917 16:08:07.337738      17 operation_generator.go:390] AttachVolume.Attach succeeded for volume "vol1" (UniqueName: "fake/fake-device") from node "127.0.0.1" 
I0917 16:08:07.439822      17 operation_generator.go:661] MountVolume.WaitForAttach entering for volume "vol1" (UniqueName: "fake/fake-device") pod "foo" (UID: "12345678") DevicePath "/dev/vdb-test"
I0917 16:08:07.440537      17 operation_generator.go:670] MountVolume.WaitForAttach succeeded for volume "vol1" (UniqueName: "fake/fake-device") pod "foo" (UID: "12345678") DevicePath "/dev/sdb"
I0917 16:08:07.440716      17 operation_generator.go:697] MountVolume.MountDevice succeeded for volume "vol1" (UniqueName: "fake/fake-device") pod "foo" (UID: "12345678") device mount path ""
... skipping 3 lines ...
I0917 16:08:07.841717      17 operation_generator.go:931] UnmountDevice succeeded for volume "vol1" %!(EXTRA string=UnmountDevice succeeded for volume "vol1" (UniqueName: "fake/fake-device") on node "127.0.0.1" )
I0917 16:08:07.942255      17 reconciler.go:315] operationExecutor.DetachVolume started for volume "vol1" (UniqueName: "fake/fake-device") on node "127.0.0.1" 
I0917 16:08:07.942344      17 operation_generator.go:558] DetachVolume.Detach succeeded for volume "vol1" (UniqueName: "fake/fake-device") on node "127.0.0.1" 
I0917 16:08:07.989429      17 volume_manager.go:260] Shutting down Kubelet Volume Manager
I0917 16:08:07.991970      17 volume_manager.go:249] Starting Kubelet Volume Manager
I0917 16:08:07.992128      17 desired_state_of_world_populator.go:131] Desired state populator starts to run
E0917 16:08:07.995606      17 reflector.go:121] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.CSIDriver: no reaction implemented for {{ list storage.k8s.io/v1beta1, Resource=csidrivers } storage.k8s.io/v1beta1, Kind=CSIDriver  { }}
I0917 16:08:08.194090      17 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "vol1" (UniqueName: "fake/fake-device") pod "foo" (UID: "12345678") 
I0917 16:08:08.194248      17 reconciler.go:154] Reconciler: start to sync state
I0917 16:08:08.194937      17 operation_generator.go:1422] Controller attach succeeded for volume "vol1" (UniqueName: "fake/fake-device") pod "foo" (UID: "12345678") device path: "fake/path"
I0917 16:08:08.295850      17 operation_generator.go:661] MountVolume.WaitForAttach entering for volume "vol1" (UniqueName: "fake/fake-device") pod "foo" (UID: "12345678") DevicePath "fake/path"
I0917 16:08:08.296009      17 operation_generator.go:670] MountVolume.WaitForAttach succeeded for volume "vol1" (UniqueName: "fake/fake-device") pod "foo" (UID: "12345678") DevicePath "/dev/sdb"
I0917 16:08:08.296193      17 operation_generator.go:697] MountVolume.MountDevice succeeded for volume "vol1" (UniqueName: "fake/fake-device") pod "foo" (UID: "12345678") device mount path ""
I0917 16:08:08.593597      17 volume_manager.go:260] Shutting down Kubelet Volume Manager
I0917 16:08:08.597688      17 volume_manager.go:249] Starting Kubelet Volume Manager
I0917 16:08:08.597866      17 desired_state_of_world_populator.go:131] Desired state populator starts to run
E0917 16:08:08.600864      17 reflector.go:121] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.CSIDriver: no reaction implemented for {{ list storage.k8s.io/v1beta1, Resource=csidrivers } storage.k8s.io/v1beta1, Kind=CSIDriver  { }}
I0917 16:08:08.799475      17 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "vol1" (UniqueName: "fake/fake-device") pod "foo" (UID: "12345678") 
I0917 16:08:08.800031      17 reconciler.go:154] Reconciler: start to sync state
I0917 16:08:08.799696      17 operation_generator.go:1422] Controller attach succeeded for volume "vol1" (UniqueName: "fake/fake-device") pod "foo" (UID: "12345678") device path: "fake/path"
I0917 16:08:08.901329      17 operation_generator.go:661] MountVolume.WaitForAttach entering for volume "vol1" (UniqueName: "fake/fake-device") pod "foo" (UID: "12345678") DevicePath "fake/path"
I0917 16:08:08.901450      17 operation_generator.go:670] MountVolume.WaitForAttach succeeded for volume "vol1" (UniqueName: "fake/fake-device") pod "foo" (UID: "12345678") DevicePath "/dev/sdb"
I0917 16:08:08.901686      17 operation_generator.go:697] MountVolume.MountDevice succeeded for volume "vol1" (UniqueName: "fake/fake-device") pod "foo" (UID: "12345678") device mount path ""
... skipping 2 lines ...
I0917 16:08:09.302608      17 reconciler.go:294] operationExecutor.UnmountDevice started for volume "vol1" (UniqueName: "fake/fake-device") on node "127.0.0.1" 
I0917 16:08:09.302773      17 operation_generator.go:931] UnmountDevice succeeded for volume "vol1" %!(EXTRA string=UnmountDevice succeeded for volume "vol1" (UniqueName: "fake/fake-device") on node "127.0.0.1" )
I0917 16:08:09.403347      17 reconciler.go:301] Volume detached for volume "vol1" (UniqueName: "fake/fake-device") on node "127.0.0.1" DevicePath "/dev/sdb"
I0917 16:08:09.449646      17 volume_manager.go:260] Shutting down Kubelet Volume Manager
W0917 16:08:09.452463      17 pod_container_deletor.go:75] Container "abc" not found in pod's containers
I0917 16:08:09.480098      17 kubelet_node_status.go:289] Controller attach/detach is disabled for this node; Kubelet will attach and detach volumes
E0917 16:08:09.480658      17 eviction_manager.go:246] eviction manager: failed to get summary stats: failed to get root cgroup stats: failed to get cgroup stats for "/": unexpected number of containers: 0
I0917 16:08:09.502739      17 kubelet_node_status.go:289] Controller attach/detach is disabled for this node; Kubelet will attach and detach volumes
E0917 16:08:09.503389      17 eviction_manager.go:246] eviction manager: failed to get summary stats: failed to get root cgroup stats: failed to get cgroup stats for "/": unexpected number of containers: 0
I0917 16:08:09.517466      17 kubelet_node_status.go:289] Controller attach/detach is disabled for this node; Kubelet will attach and detach volumes
E0917 16:08:09.517906      17 eviction_manager.go:246] eviction manager: failed to get summary stats: failed to get root cgroup stats: failed to get cgroup stats for "/": unexpected number of containers: 0
I0917 16:08:09.632553      17 runonce.go:88] Waiting for 1 pods
I0917 16:08:09.632730      17 runonce.go:123] pod "foo_new(12345678)" containers running
I0917 16:08:09.633244      17 runonce.go:102] started pod "foo_new(12345678)"
I0917 16:08:09.633430      17 runonce.go:108] 1 pods started
FAIL
================================================================================
==================== Test output for //pkg/kubelet:go_default_test:
I0917 16:08:15.503917      17 kubelet_node_status.go:289] Controller attach/detach is disabled for this node; Kubelet will attach and detach volumes
I0917 16:08:15.504845      17 kubelet_node_status.go:289] Controller attach/detach is disabled for this node; Kubelet will attach and detach volumes
E0917 16:08:15.505425      17 eviction_manager.go:246] eviction manager: failed to get summary stats: failed to get root cgroup stats: failed to get cgroup stats for "/": unexpected number of containers: 0
I0917 16:08:15.506479      17 plugin_manager.go:116] Starting Kubelet Plugin Manager
I0917 16:08:15.519601      17 kubelet_node_status.go:289] Controller attach/detach is disabled for this node; Kubelet will attach and detach volumes
I0917 16:08:15.527192      17 kubelet_node_status.go:289] Controller attach/detach is disabled for this node; Kubelet will attach and detach volumes
E0917 16:08:15.527453      17 eviction_manager.go:246] eviction manager: failed to get summary stats: failed to get root cgroup stats: failed to get cgroup stats for "/": unexpected number of containers: 0
I0917 16:08:15.528203      17 plugin_manager.go:116] Starting Kubelet Plugin Manager
I0917 16:08:15.536655      17 kubelet_node_status.go:289] Controller attach/detach is disabled for this node; Kubelet will attach and detach volumes
I0917 16:08:15.537287      17 kubelet_node_status.go:289] Controller attach/detach is disabled for this node; Kubelet will attach and detach volumes
E0917 16:08:15.539398      17 eviction_manager.go:246] eviction manager: failed to get summary stats: failed to get root cgroup stats: failed to get cgroup stats for "/": unexpected number of containers: 0
I0917 16:08:15.540686      17 plugin_manager.go:116] Starting Kubelet Plugin Manager
E0917 16:08:16.557759      17 kubelet_node_status.go:388] Error updating node status, will retry: error getting node "127.0.0.1": Get http://127.0.0.1:37285/api/v1/nodes/127.0.0.1?resourceVersion=0&timeout=1s: net/http: request canceled (Client.Timeout exceeded while awaiting headers)
E0917 16:08:17.558811      17 kubelet_node_status.go:388] Error updating node status, will retry: error getting node "127.0.0.1": Get http://127.0.0.1:37285/api/v1/nodes/127.0.0.1?timeout=1s: context deadline exceeded (Client.Timeout exceeded while awaiting headers)
E0917 16:08:18.559869      17 kubelet_node_status.go:388] Error updating node status, will retry: error getting node "127.0.0.1": Get http://127.0.0.1:37285/api/v1/nodes/127.0.0.1?timeout=1s: context deadline exceeded (Client.Timeout exceeded while awaiting headers)
E0917 16:08:19.560824      17 kubelet_node_status.go:388] Error updating node status, will retry: error getting node "127.0.0.1": Get http://127.0.0.1:37285/api/v1/nodes/127.0.0.1?timeout=1s: net/http: request canceled (Client.Timeout exceeded while awaiting headers)
E0917 16:08:20.561782      17 kubelet_node_status.go:388] Error updating node status, will retry: error getting node "127.0.0.1": Get http://127.0.0.1:37285/api/v1/nodes/127.0.0.1?timeout=1s: net/http: request canceled (Client.Timeout exceeded while awaiting headers)
I0917 16:08:20.564846      17 kubelet_node_status.go:289] Controller attach/detach is disabled for this node; Kubelet will attach and detach volumes
I0917 16:08:20.565589      17 kubelet_node_status.go:289] Controller attach/detach is disabled for this node; Kubelet will attach and detach volumes
E0917 16:08:20.565828      17 eviction_manager.go:246] eviction manager: failed to get summary stats: failed to get root cgroup stats: failed to get cgroup stats for "/": unexpected number of containers: 0
I0917 16:08:20.566492      17 plugin_manager.go:116] Starting Kubelet Plugin Manager
I0917 16:08:20.581455      17 setters.go:539] Node became not ready: {Type:Ready Status:False LastHeartbeatTime:2019-09-17 16:07:50.581276875 +0000 UTC m=-24.607568773 LastTransitionTime:2019-09-17 16:07:50.581276875 +0000 UTC m=-24.607568773 Reason:KubeletNotReady Message:container runtime is down}
E0917 16:08:20.588312      17 kubelet.go:2174] Container runtime sanity check failed: injected runtime status error
E0917 16:08:20.595117      17 kubelet.go:2178] Container runtime status is nil
E0917 16:08:20.601375      17 kubelet.go:2187] Container runtime network not ready: <nil>
E0917 16:08:20.601471      17 kubelet.go:2198] Container runtime not ready: <nil>
E0917 16:08:20.608152      17 kubelet.go:2198] Container runtime not ready: RuntimeReady=false reason: message:
E0917 16:08:20.622171      17 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason: message:
I0917 16:08:20.622488      17 setters.go:539] Node became not ready: {Type:Ready Status:False LastHeartbeatTime:2019-09-17 16:08:20.588297692 +0000 UTC m=+5.399452015 LastTransitionTime:2019-09-17 16:08:20.588297692 +0000 UTC m=+5.399452015 Reason:KubeletNotReady Message:runtime network not ready: NetworkReady=false reason: message:}
E0917 16:08:20.631236      17 kubelet_node_status.go:388] Error updating node status, will retry: error getting node "127.0.0.1": nodes "127.0.0.1" not found
E0917 16:08:20.631332      17 kubelet_node_status.go:388] Error updating node status, will retry: error getting node "127.0.0.1": nodes "127.0.0.1" not found
E0917 16:08:20.631398      17 kubelet_node_status.go:388] Error updating node status, will retry: error getting node "127.0.0.1": nodes "127.0.0.1" not found
E0917 16:08:20.631473      17 kubelet_node_status.go:388] Error updating node status, will retry: error getting node "127.0.0.1": nodes "127.0.0.1" not found
E0917 16:08:20.631541      17 kubelet_node_status.go:388] Error updating node status, will retry: error getting node "127.0.0.1": nodes "127.0.0.1" not found
I0917 16:08:20.633651      17 kubelet_node_status.go:289] Controller attach/detach is disabled for this node; Kubelet will attach and detach volumes
I0917 16:08:20.640625      17 plugin_manager.go:116] Starting Kubelet Plugin Manager
I0917 16:08:20.641039      17 kubelet_node_status.go:289] Controller attach/detach is disabled for this node; Kubelet will attach and detach volumes
E0917 16:08:20.641305      17 eviction_manager.go:246] eviction manager: failed to get summary stats: failed to get root cgroup stats: failed to get cgroup stats for "/": unexpected number of containers: 0
I0917 16:08:20.663455      17 kubelet_network.go:77] Setting Pod CIDR:  -> 10.0.0.0/24,2000::/10
I0917 16:08:20.806646      17 kubelet_node_status.go:289] Controller attach/detach is disabled for this node; Kubelet will attach and detach volumes
I0917 16:08:20.807236      17 kubelet_node_status.go:72] Attempting to register node 127.0.0.1
I0917 16:08:20.807403      17 kubelet_node_status.go:114] Node 127.0.0.1 was previously registered
I0917 16:08:20.807453      17 kubelet_node_status.go:75] Successfully registered node 127.0.0.1
I0917 16:08:20.811686      17 kubelet_node_status.go:114] Node 127.0.0.1 was previously registered
I0917 16:08:20.814795      17 kubelet_node_status.go:114] Node 127.0.0.1 was previously registered
I0917 16:08:20.814852      17 kubelet_node_status.go:202] Controller attach-detach setting changed to false; updating existing Node
I0917 16:08:20.819945      17 kubelet_node_status.go:114] Node 127.0.0.1 was previously registered
I0917 16:08:20.820306      17 kubelet_node_status.go:205] Controller attach-detach setting changed to true; updating existing Node
E0917 16:08:20.823754      17 kubelet_node_status.go:94] Unable to register node "127.0.0.1" with API server: 
E0917 16:08:20.824803      17 kubelet_node_status.go:100] Unable to register node "127.0.0.1" with API server: error getting existing node: 
I0917 16:08:20.825985      17 kubelet_node_status.go:114] Node 127.0.0.1 was previously registered
I0917 16:08:20.826029      17 kubelet_node_status.go:202] Controller attach-detach setting changed to false; updating existing Node
E0917 16:08:20.827031      17 kubelet_node_status.go:124] Unable to reconcile node "127.0.0.1" with API server: error updating node: failed to patch status "{\"metadata\":{\"annotations\":null}}" for node "127.0.0.1": 
I0917 16:08:20.832190      17 kubelet_node_status.go:289] Controller attach/detach is disabled for this node; Kubelet will attach and detach volumes
I0917 16:08:20.832836      17 kubelet_node_status.go:289] Controller attach/detach is disabled for this node; Kubelet will attach and detach volumes
E0917 16:08:20.833363      17 eviction_manager.go:246] eviction manager: failed to get summary stats: failed to get root cgroup stats: failed to get cgroup stats for "/": unexpected number of containers: 0
I0917 16:08:20.834399      17 plugin_manager.go:116] Starting Kubelet Plugin Manager
I0917 16:08:20.845168      17 kubelet_node_status.go:139] Zero out resource test.com/resource1 capacity in existing node.
I0917 16:08:20.845603      17 kubelet_node_status.go:139] Zero out resource test.com/resource2 capacity in existing node.
W0917 16:08:20.848537      17 feature_gate.go:208] Setting GA feature gate TaintNodesByCondition=true. It will be removed in a future release.
I0917 16:08:20.949296      17 kubelet_node_status.go:289] Controller attach/detach is disabled for this node; Kubelet will attach and detach volumes
I0917 16:08:20.949759      17 kubelet_node_status.go:72] Attempting to register node 127.0.0.1
I0917 16:08:20.949963      17 kubelet_node_status.go:75] Successfully registered node 127.0.0.1
W0917 16:08:20.950313      17 feature_gate.go:208] Setting GA feature gate TaintNodesByCondition=true. It will be removed in a future release.
I0917 16:08:21.051082      17 kubelet_node_status.go:289] Controller attach/detach is disabled for this node; Kubelet will attach and detach volumes
I0917 16:08:21.051569      17 kubelet_node_status.go:72] Attempting to register node 127.0.0.1
I0917 16:08:21.051804      17 kubelet_node_status.go:75] Successfully registered node 127.0.0.1
W0917 16:08:21.052303      17 feature_gate.go:208] Setting GA feature gate TaintNodesByCondition=true. It will be removed in a future release.
--- FAIL: TestRegisterWithApiServerWithTaint (0.21s)
    feature_gate.go:36: error setting TaintNodesByCondition=false: cannot set feature gate TaintNodesByCondition to false, feature is locked to true
E0917 16:08:21.111648      17 kubelet_pods.go:147] Mount cannot be satisfied for container "", because the volume is missing or the volume mounter is nil: {Name:disk ReadOnly:true MountPath:/mnt/path3 SubPath: MountPropagation:<nil> SubPathExpr:}
E0917 16:08:21.112121      17 kubelet_pods.go:147] Mount cannot be satisfied for container "", because the volume is missing or the volume mounter is nil: {Name:disk ReadOnly:true MountPath:/mnt/path3 SubPath: MountPropagation:<nil> SubPathExpr:}
E0917 16:08:21.113586      17 kubelet_pods.go:108] Block volume cannot be satisfied for container "", because the volume is missing or the volume mapper is nil: {Name:disk DevicePath:/dev/sdaa}
E0917 16:08:21.113919      17 kubelet_pods.go:108] Block volume cannot be satisfied for container "", because the volume is missing or the volume mapper is nil: {Name:disk DevicePath:/dev/sdzz}
W0917 16:08:21.115577      17 feature_gate.go:208] Setting GA feature gate VolumeSubpath=false. It will be removed in a future release.
W0917 16:08:21.115892      17 feature_gate.go:208] Setting GA feature gate VolumeSubpath=true. It will be removed in a future release.
... skipping 6 lines ...
I0917 16:08:21.198722      17 kubelet_resources.go:45] allocatable: map[cpu:{{6 0} {<nil>} 6 DecimalSI} memory:{{4294967296 0} {<nil>} 4Gi BinarySI}]
I0917 16:08:21.199131      17 kubelet_resources.go:45] allocatable: map[cpu:{{6 0} {<nil>} 6 DecimalSI} memory:{{4294967296 0} {<nil>} 4Gi BinarySI}]
I0917 16:08:21.199539      17 kubelet_resources.go:45] allocatable: map[cpu:{{6 0} {<nil>} 6 DecimalSI} memory:{{4294967296 0} {<nil>} 4Gi BinarySI}]
E0917 16:08:21.201420      17 kubelet.go:1895] Update channel is closed. Exiting the sync loop.
I0917 16:08:21.201471      17 kubelet.go:1822] Starting kubelet main sync loop.
E0917 16:08:21.201562      17 kubelet.go:1895] Update channel is closed. Exiting the sync loop.
W0917 16:08:21.224078      17 predicate.go:74] Failed to admit pod failedpod_foo(4) - Update plugin resources failed due to Allocation failed, which is unexpected.
E0917 16:08:21.228246      17 runtime.go:195] invalid container ID: ""
E0917 16:08:21.228355      17 runtime.go:195] invalid container ID: ""
I0917 16:08:21.234384      17 kubelet.go:1647] Trying to delete pod foo_ns 11111111
W0917 16:08:21.234479      17 kubelet.go:1651] Deleted mirror pod "foo_ns(11111111)" because it is outdated
E0917 16:08:21.280791      17 kubelet_volumes.go:154] orphaned pod "pod1uid" found, but volume paths are still present on disk : There were a total of 1 errors similar to this. Turn up verbosity to see them.
W0917 16:08:21.285048      17 kubelet_getters.go:292] Path "/tmp/kubelet_test.797379752/pods/pod1uid/volumes" does not exist
... skipping 3 lines ...
W0917 16:08:21.288141      17 kubelet_getters.go:292] Path "/tmp/kubelet_test.521521639/pods/pod1uid/volumes" does not exist
E0917 16:08:21.288329      17 kubelet_volumes.go:154] orphaned pod "pod1uid" found, but volume subpaths are still present on disk : There were a total of 1 errors similar to this. Turn up verbosity to see them.
W0917 16:08:21.293449      17 kubelet_getters.go:292] Path "/tmp/kubelet_test.060995164/pods/pod1uid/volumes" does not exist
W0917 16:08:21.293545      17 kubelet_getters.go:292] Path "/tmp/kubelet_test.060995164/pods/pod1uid/volumes" does not exist
I0917 16:08:21.298344      17 volume_manager.go:249] Starting Kubelet Volume Manager
I0917 16:08:21.298495      17 desired_state_of_world_populator.go:131] Desired state populator starts to run
E0917 16:08:21.302194      17 reflector.go:275] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIDriver: unhandled watch: testing.WatchActionImpl{ActionImpl:testing.ActionImpl{Namespace:"", Verb:"watch", Resource:schema.GroupVersionResource{Group:"storage.k8s.io", Version:"v1beta1", Resource:"csidrivers"}, Subresource:""}, WatchRestrictions:testing.WatchRestrictions{Labels:labels.internalSelector(nil), Fields:fields.andTerm{}, ResourceVersion:""}}
I0917 16:08:21.500446      17 reconciler.go:227] operationExecutor.AttachVolume started for volume "vol1" (UniqueName: "fake/fake-device1") pod "foo" (UID: "12345678") 
I0917 16:08:21.501027      17 operation_generator.go:390] AttachVolume.Attach succeeded for volume "vol1" (UniqueName: "fake/fake-device1") from node "127.0.0.1" 
I0917 16:08:21.501417      17 reconciler.go:227] operationExecutor.AttachVolume started for volume "vol2" (UniqueName: "fake/fake-device2") pod "foo" (UID: "12345678") 
I0917 16:08:21.501516      17 reconciler.go:154] Reconciler: start to sync state
I0917 16:08:21.501855      17 operation_generator.go:390] AttachVolume.Attach succeeded for volume "vol2" (UniqueName: "fake/fake-device2") from node "127.0.0.1" 
I0917 16:08:21.602777      17 operation_generator.go:661] MountVolume.WaitForAttach entering for volume "vol1" (UniqueName: "fake/fake-device1") pod "foo" (UID: "12345678") DevicePath "/dev/vdb-test"
... skipping 2 lines ...
I0917 16:08:21.603748      17 operation_generator.go:670] MountVolume.WaitForAttach succeeded for volume "vol2" (UniqueName: "fake/fake-device2") pod "foo" (UID: "12345678") DevicePath "/dev/sdb"
I0917 16:08:21.604022      17 operation_generator.go:697] MountVolume.MountDevice succeeded for volume "vol2" (UniqueName: "fake/fake-device2") pod "foo" (UID: "12345678") device mount path ""
I0917 16:08:21.604808      17 operation_generator.go:697] MountVolume.MountDevice succeeded for volume "vol1" (UniqueName: "fake/fake-device1") pod "foo" (UID: "12345678") device mount path ""
I0917 16:08:21.899116      17 volume_manager.go:260] Shutting down Kubelet Volume Manager
I0917 16:08:21.902649      17 volume_manager.go:249] Starting Kubelet Volume Manager
I0917 16:08:21.902724      17 desired_state_of_world_populator.go:131] Desired state populator starts to run
E0917 16:08:21.905634      17 reflector.go:275] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIDriver: unhandled watch: testing.WatchActionImpl{ActionImpl:testing.ActionImpl{Namespace:"", Verb:"watch", Resource:schema.GroupVersionResource{Group:"storage.k8s.io", Version:"v1beta1", Resource:"csidrivers"}, Subresource:""}, WatchRestrictions:testing.WatchRestrictions{Labels:labels.internalSelector(nil), Fields:fields.andTerm{}, ResourceVersion:""}}
I0917 16:08:22.006394      17 reconciler.go:227] operationExecutor.AttachVolume started for volume "vol2" (UniqueName: "fake/fake-device2") pod "pod2" (UID: "pod2uid") 
I0917 16:08:22.006559      17 operation_generator.go:390] AttachVolume.Attach succeeded for volume "vol2" (UniqueName: "fake/fake-device2") from node "127.0.0.1" 
I0917 16:08:22.007498      17 reconciler.go:227] operationExecutor.AttachVolume started for volume "vol3" (UniqueName: "fake/fake-device3") pod "pod3" (UID: "pod3uid") 
I0917 16:08:22.007711      17 operation_generator.go:390] AttachVolume.Attach succeeded for volume "vol3" (UniqueName: "fake/fake-device3") from node "127.0.0.1" 
I0917 16:08:22.008197      17 reconciler.go:227] operationExecutor.AttachVolume started for volume "vol1" (UniqueName: "fake/fake-device1") pod "pod1" (UID: "pod1uid") 
I0917 16:08:22.008453      17 reconciler.go:154] Reconciler: start to sync state
... skipping 7 lines ...
I0917 16:08:22.112169      17 operation_generator.go:670] MountVolume.WaitForAttach succeeded for volume "vol1" (UniqueName: "fake/fake-device1") pod "pod1" (UID: "pod1uid") DevicePath "/dev/sdb"
I0917 16:08:22.112537      17 operation_generator.go:697] MountVolume.MountDevice succeeded for volume "vol1" (UniqueName: "fake/fake-device1") pod "pod1" (UID: "pod1uid") device mount path ""
I0917 16:08:22.110930      17 operation_generator.go:697] MountVolume.MountDevice succeeded for volume "vol2" (UniqueName: "fake/fake-device2") pod "pod2" (UID: "pod2uid") device mount path ""
I0917 16:08:22.203694      17 volume_manager.go:260] Shutting down Kubelet Volume Manager
I0917 16:08:22.207238      17 volume_manager.go:249] Starting Kubelet Volume Manager
I0917 16:08:22.207498      17 desired_state_of_world_populator.go:131] Desired state populator starts to run
E0917 16:08:22.210400      17 reflector.go:275] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIDriver: unhandled watch: testing.WatchActionImpl{ActionImpl:testing.ActionImpl{Namespace:"", Verb:"watch", Resource:schema.GroupVersionResource{Group:"storage.k8s.io", Version:"v1beta1", Resource:"csidrivers"}, Subresource:""}, WatchRestrictions:testing.WatchRestrictions{Labels:labels.internalSelector(nil), Fields:fields.andTerm{}, ResourceVersion:""}}
I0917 16:08:22.408919      17 reconciler.go:227] operationExecutor.AttachVolume started for volume "vol1" (UniqueName: "fake/fake-device") pod "foo" (UID: "12345678") 
I0917 16:08:22.409030      17 reconciler.go:154] Reconciler: start to sync state
I0917 16:08:22.409261      17 operation_generator.go:390] AttachVolume.Attach succeeded for volume "vol1" (UniqueName: "fake/fake-device") from node "127.0.0.1" 
I0917 16:08:22.510446      17 operation_generator.go:661] MountVolume.WaitForAttach entering for volume "vol1" (UniqueName: "fake/fake-device") pod "foo" (UID: "12345678") DevicePath "/dev/vdb-test"
I0917 16:08:22.510674      17 operation_generator.go:670] MountVolume.WaitForAttach succeeded for volume "vol1" (UniqueName: "fake/fake-device") pod "foo" (UID: "12345678") DevicePath "/dev/sdb"
I0917 16:08:22.510773      17 operation_generator.go:697] MountVolume.MountDevice succeeded for volume "vol1" (UniqueName: "fake/fake-device") pod "foo" (UID: "12345678") device mount path ""
I0917 16:08:22.808474      17 volume_manager.go:260] Shutting down Kubelet Volume Manager
I0917 16:08:22.811375      17 volume_manager.go:249] Starting Kubelet Volume Manager
I0917 16:08:22.811567      17 desired_state_of_world_populator.go:131] Desired state populator starts to run
E0917 16:08:22.814518      17 reflector.go:275] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIDriver: unhandled watch: testing.WatchActionImpl{ActionImpl:testing.ActionImpl{Namespace:"", Verb:"watch", Resource:schema.GroupVersionResource{Group:"storage.k8s.io", Version:"v1beta1", Resource:"csidrivers"}, Subresource:""}, WatchRestrictions:testing.WatchRestrictions{Labels:labels.internalSelector(nil), Fields:fields.andTerm{}, ResourceVersion:""}}
I0917 16:08:23.013207      17 reconciler.go:227] operationExecutor.AttachVolume started for volume "vol1" (UniqueName: "fake/fake-device") pod "foo" (UID: "12345678") 
I0917 16:08:23.013304      17 reconciler.go:154] Reconciler: start to sync state
I0917 16:08:23.013836      17 operation_generator.go:390] AttachVolume.Attach succeeded for volume "vol1" (UniqueName: "fake/fake-device") from node "127.0.0.1" 
I0917 16:08:23.114667      17 operation_generator.go:661] MountVolume.WaitForAttach entering for volume "vol1" (UniqueName: "fake/fake-device") pod "foo" (UID: "12345678") DevicePath "/dev/vdb-test"
I0917 16:08:23.114908      17 operation_generator.go:670] MountVolume.WaitForAttach succeeded for volume "vol1" (UniqueName: "fake/fake-device") pod "foo" (UID: "12345678") DevicePath "/dev/sdb"
I0917 16:08:23.115069      17 operation_generator.go:697] MountVolume.MountDevice succeeded for volume "vol1" (UniqueName: "fake/fake-device") pod "foo" (UID: "12345678") device mount path ""
... skipping 3 lines ...
I0917 16:08:23.516919      17 operation_generator.go:931] UnmountDevice succeeded for volume "vol1" %!(EXTRA string=UnmountDevice succeeded for volume "vol1" (UniqueName: "fake/fake-device") on node "127.0.0.1" )
I0917 16:08:23.617572      17 reconciler.go:315] operationExecutor.DetachVolume started for volume "vol1" (UniqueName: "fake/fake-device") on node "127.0.0.1" 
I0917 16:08:23.617793      17 operation_generator.go:558] DetachVolume.Detach succeeded for volume "vol1" (UniqueName: "fake/fake-device") on node "127.0.0.1" 
I0917 16:08:23.663258      17 volume_manager.go:260] Shutting down Kubelet Volume Manager
I0917 16:08:23.666347      17 volume_manager.go:249] Starting Kubelet Volume Manager
I0917 16:08:23.666793      17 desired_state_of_world_populator.go:131] Desired state populator starts to run
E0917 16:08:23.669535      17 reflector.go:121] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.CSIDriver: no reaction implemented for {{ list storage.k8s.io/v1beta1, Resource=csidrivers } storage.k8s.io/v1beta1, Kind=CSIDriver  { }}
I0917 16:08:23.868449      17 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "vol1" (UniqueName: "fake/fake-device") pod "foo" (UID: "12345678") 
I0917 16:08:23.868747      17 reconciler.go:154] Reconciler: start to sync state
I0917 16:08:23.868879      17 operation_generator.go:1422] Controller attach succeeded for volume "vol1" (UniqueName: "fake/fake-device") pod "foo" (UID: "12345678") device path: "fake/path"
I0917 16:08:23.970307      17 operation_generator.go:661] MountVolume.WaitForAttach entering for volume "vol1" (UniqueName: "fake/fake-device") pod "foo" (UID: "12345678") DevicePath "fake/path"
I0917 16:08:23.970732      17 operation_generator.go:670] MountVolume.WaitForAttach succeeded for volume "vol1" (UniqueName: "fake/fake-device") pod "foo" (UID: "12345678") DevicePath "/dev/sdb"
I0917 16:08:23.970844      17 operation_generator.go:697] MountVolume.MountDevice succeeded for volume "vol1" (UniqueName: "fake/fake-device") pod "foo" (UID: "12345678") device mount path ""
I0917 16:08:24.267351      17 volume_manager.go:260] Shutting down Kubelet Volume Manager
I0917 16:08:24.279372      17 volume_manager.go:249] Starting Kubelet Volume Manager
I0917 16:08:24.279659      17 desired_state_of_world_populator.go:131] Desired state populator starts to run
E0917 16:08:24.282192      17 reflector.go:121] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.CSIDriver: no reaction implemented for {{ list storage.k8s.io/v1beta1, Resource=csidrivers } storage.k8s.io/v1beta1, Kind=CSIDriver  { }}
I0917 16:08:24.480745      17 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "vol1" (UniqueName: "fake/fake-device") pod "foo" (UID: "12345678") 
I0917 16:08:24.481093      17 operation_generator.go:1422] Controller attach succeeded for volume "vol1" (UniqueName: "fake/fake-device") pod "foo" (UID: "12345678") device path: "fake/path"
I0917 16:08:24.481441      17 reconciler.go:154] Reconciler: start to sync state
I0917 16:08:24.582588      17 operation_generator.go:661] MountVolume.WaitForAttach entering for volume "vol1" (UniqueName: "fake/fake-device") pod "foo" (UID: "12345678") DevicePath "fake/path"
I0917 16:08:24.582731      17 operation_generator.go:670] MountVolume.WaitForAttach succeeded for volume "vol1" (UniqueName: "fake/fake-device") pod "foo" (UID: "12345678") DevicePath "/dev/sdb"
I0917 16:08:24.582813      17 operation_generator.go:697] MountVolume.MountDevice succeeded for volume "vol1" (UniqueName: "fake/fake-device") pod "foo" (UID: "12345678") device mount path ""
... skipping 5 lines ...
I0917 16:08:25.131687      17 volume_manager.go:260] Shutting down Kubelet Volume Manager
W0917 16:08:25.133660      17 pod_container_deletor.go:75] Container "abc" not found in pod's containers
I0917 16:08:25.307109      17 runonce.go:88] Waiting for 1 pods
I0917 16:08:25.307229      17 runonce.go:123] pod "foo_new(12345678)" containers running
I0917 16:08:25.307690      17 runonce.go:102] started pod "foo_new(12345678)"
I0917 16:08:25.307786      17 runonce.go:108] 1 pods started
FAIL
================================================================================
[11,628 / 11,629] 867 / 868 tests, 1 failed; Testing //pkg/master:go_default_test; 83s remote
INFO: Elapsed time: 477.526s, Critical Path: 378.73s
INFO: 10713 processes: 9556 remote cache hit, 1157 remote.
INFO: Build completed, 1 test FAILED, 11629 total actions
//cluster:clientbin_test                                        (cached) PASSED in 0.5s
//cluster:common_test                                           (cached) PASSED in 0.3s
//cluster:kube-util_test                                        (cached) PASSED in 4.6s
//cluster/gce/cos:go_default_test                               (cached) PASSED in 0.0s
//cluster/gce/custom:go_default_test                            (cached) PASSED in 0.0s
//cluster/gce/gci:go_default_test                               (cached) PASSED in 0.1s
... skipping 855 lines ...
//plugin/pkg/auth/authorizer/rbac:go_default_test                        PASSED in 4.3s
//plugin/pkg/auth/authorizer/rbac/bootstrappolicy:go_default_test        PASSED in 9.8s
//test/e2e/framework/node:go_default_test                                PASSED in 6.7s
//test/e2e/framework/providers/gce:go_default_test                       PASSED in 5.6s
//test/e2e/framework/timer:go_default_test                               PASSED in 5.5s
//test/e2e/storage/external:go_default_test                              PASSED in 5.9s
//pkg/kubelet:go_default_test                                            FAILED in 5 out of 5 in 17.2s
  Stats over 3 runs: max = 17.2s, min = 15.8s, avg = 16.6s, dev = 0.6s
  /bazel-scratch/.cache/bazel/_bazel_root/7989b31489f31aee54f32688da2f0120/execroot/io_k8s_kubernetes/bazel-out/k8-fastbuild/testlogs/pkg/kubelet/go_default_test/test.log
  /bazel-scratch/.cache/bazel/_bazel_root/7989b31489f31aee54f32688da2f0120/execroot/io_k8s_kubernetes/bazel-out/k8-fastbuild/testlogs/pkg/kubelet/go_default_test/test.log
  /bazel-scratch/.cache/bazel/_bazel_root/7989b31489f31aee54f32688da2f0120/execroot/io_k8s_kubernetes/bazel-out/k8-fastbuild/testlogs/pkg/kubelet/go_default_test/test_attempts/attempt_1.log
  /bazel-scratch/.cache/bazel/_bazel_root/7989b31489f31aee54f32688da2f0120/execroot/io_k8s_kubernetes/bazel-out/k8-fastbuild/testlogs/pkg/kubelet/go_default_test/test.log
  /bazel-scratch/.cache/bazel/_bazel_root/7989b31489f31aee54f32688da2f0120/execroot/io_k8s_kubernetes/bazel-out/k8-fastbuild/testlogs/pkg/kubelet/go_default_test/test_attempts/attempt_2.log

Executed 293 out of 868 tests: 867 tests pass and 1 fails remotely.
There were tests whose specified size is too big. Use the --test_verbose_timeout_warnings command line option to see which ones these are.
INFO: Build completed, 1 test FAILED, 11629 total actions
+ ../test-infra/hack/coalesce.py
+ exit 3