PR: gnufied: {WIP} Capture total mount/unmount operation time
Result: FAILURE
Tests: 1 failed / 819 succeeded
Started: 2019-03-15 16:12
Elapsed: 12m31s
Revision:
Builder: gke-prow-containerd-pool-99179761-j2kg
Refs: master:b0494b08, 75384:663494bd
pod: 16a80de5-473d-11e9-82d4-0a580a6c0fcd
infra-commit: 26f7e332d
repo: k8s.io/kubernetes
repo-commit: a8142faf87c3e704a2e63c29c1a0e1b4f89c7859
repos: {u'k8s.io/kubernetes': u'master:b0494b081d5c97c21115cd2921f7c5b536470591,75384:663494bd33d0b8c950319bf41daec80edbd47a8b'}

Test Failures


//pkg/kubelet:go_default_test 0.00s

bazel test //pkg/kubelet:go_default_test
exec ${PAGER:-/usr/bin/less} "$0" || exit 1
Executing tests from //pkg/kubelet:go_default_test
-----------------------------------------------------------------------------
I0315 16:24:35.018962      15 kubelet_node_status.go:286] Controller attach/detach is disabled for this node; Kubelet will attach and detach volumes
I0315 16:24:35.020040      15 kubelet_node_status.go:286] Controller attach/detach is disabled for this node; Kubelet will attach and detach volumes
E0315 16:24:35.021655      15 eviction_manager.go:247] eviction manager: failed to get summary stats: failed to get root cgroup stats: failed to get cgroup stats for "/": unexpected number of containers: 0
I0315 16:24:35.034050      15 kubelet_node_status.go:286] Controller attach/detach is disabled for this node; Kubelet will attach and detach volumes
I0315 16:24:35.034985      15 kubelet_node_status.go:286] Controller attach/detach is disabled for this node; Kubelet will attach and detach volumes
E0315 16:24:35.035310      15 eviction_manager.go:247] eviction manager: failed to get summary stats: failed to get root cgroup stats: failed to get cgroup stats for "/": unexpected number of containers: 0
I0315 16:24:35.044981      15 kubelet_node_status.go:286] Controller attach/detach is disabled for this node; Kubelet will attach and detach volumes
I0315 16:24:35.045722      15 kubelet_node_status.go:286] Controller attach/detach is disabled for this node; Kubelet will attach and detach volumes
E0315 16:24:35.046003      15 eviction_manager.go:247] eviction manager: failed to get summary stats: failed to get root cgroup stats: failed to get cgroup stats for "/": unexpected number of containers: 0
E0315 16:24:36.058550      15 kubelet_node_status.go:385] Error updating node status, will retry: error getting node "127.0.0.1": Get http://127.0.0.1:40623/api/v1/nodes/127.0.0.1?resourceVersion=0&timeout=1s: context deadline exceeded (Client.Timeout exceeded while awaiting headers)
E0315 16:24:37.060198      15 kubelet_node_status.go:385] Error updating node status, will retry: error getting node "127.0.0.1": Get http://127.0.0.1:40623/api/v1/nodes/127.0.0.1?timeout=1s: context deadline exceeded (Client.Timeout exceeded while awaiting headers)
E0315 16:24:38.061760      15 kubelet_node_status.go:385] Error updating node status, will retry: error getting node "127.0.0.1": Get http://127.0.0.1:40623/api/v1/nodes/127.0.0.1?timeout=1s: context deadline exceeded (Client.Timeout exceeded while awaiting headers)
E0315 16:24:39.063024      15 kubelet_node_status.go:385] Error updating node status, will retry: error getting node "127.0.0.1": Get http://127.0.0.1:40623/api/v1/nodes/127.0.0.1?timeout=1s: context deadline exceeded (Client.Timeout exceeded while awaiting headers)
E0315 16:24:40.064290      15 kubelet_node_status.go:385] Error updating node status, will retry: error getting node "127.0.0.1": Get http://127.0.0.1:40623/api/v1/nodes/127.0.0.1?timeout=1s: context deadline exceeded (Client.Timeout exceeded while awaiting headers)
I0315 16:24:40.067937      15 kubelet_node_status.go:286] Controller attach/detach is disabled for this node; Kubelet will attach and detach volumes
I0315 16:24:40.068532      15 kubelet_node_status.go:286] Controller attach/detach is disabled for this node; Kubelet will attach and detach volumes
E0315 16:24:40.069036      15 eviction_manager.go:247] eviction manager: failed to get summary stats: failed to get root cgroup stats: failed to get cgroup stats for "/": unexpected number of containers: 0
I0315 16:24:40.080113      15 setters.go:521] Node became not ready: {Type:Ready Status:False LastHeartbeatTime:2019-03-15 16:24:10.079881863 +0000 UTC m=-24.636034802 LastTransitionTime:2019-03-15 16:24:10.079881863 +0000 UTC m=-24.636034802 Reason:KubeletNotReady Message:container runtime is down.}
E0315 16:24:40.088063      15 kubelet.go:2157] Container runtime sanity check failed: injected runtime status error
E0315 16:24:40.093610      15 kubelet.go:2161] Container runtime status is nil
E0315 16:24:40.099129      15 kubelet.go:2170] Container runtime network not ready: <nil>
E0315 16:24:40.099275      15 kubelet.go:2182] Container runtime not ready: <nil>
E0315 16:24:40.104569      15 kubelet.go:2182] Container runtime not ready: RuntimeReady=false reason: message:
E0315 16:24:40.114988      15 kubelet.go:2170] Container runtime network not ready: NetworkReady=false reason: message:
I0315 16:24:40.115305      15 setters.go:521] Node became not ready: {Type:Ready Status:False LastHeartbeatTime:2019-03-15 16:24:40.088044678 +0000 UTC m=+5.372128018 LastTransitionTime:2019-03-15 16:24:40.088044678 +0000 UTC m=+5.372128018 Reason:KubeletNotReady Message:runtime network not ready: NetworkReady=false reason: message:}
E0315 16:24:40.123113      15 kubelet_node_status.go:385] Error updating node status, will retry: error getting node "127.0.0.1": nodes "127.0.0.1" not found
E0315 16:24:40.123240      15 kubelet_node_status.go:385] Error updating node status, will retry: error getting node "127.0.0.1": nodes "127.0.0.1" not found
E0315 16:24:40.123298      15 kubelet_node_status.go:385] Error updating node status, will retry: error getting node "127.0.0.1": nodes "127.0.0.1" not found
E0315 16:24:40.123370      15 kubelet_node_status.go:385] Error updating node status, will retry: error getting node "127.0.0.1": nodes "127.0.0.1" not found
E0315 16:24:40.123419      15 kubelet_node_status.go:385] Error updating node status, will retry: error getting node "127.0.0.1": nodes "127.0.0.1" not found
I0315 16:24:40.125389      15 kubelet_node_status.go:286] Controller attach/detach is disabled for this node; Kubelet will attach and detach volumes
I0315 16:24:40.126224      15 kubelet_node_status.go:286] Controller attach/detach is disabled for this node; Kubelet will attach and detach volumes
E0315 16:24:40.126718      15 eviction_manager.go:247] eviction manager: failed to get summary stats: failed to get root cgroup stats: failed to get cgroup stats for "/": unexpected number of containers: 0
I0315 16:24:40.149275      15 kubelet_network.go:77] Setting Pod CIDR:  -> 10.0.0.0/24
I0315 16:24:40.290083      15 kubelet_node_status.go:286] Controller attach/detach is disabled for this node; Kubelet will attach and detach volumes
I0315 16:24:40.290592      15 kubelet_node_status.go:72] Attempting to register node 127.0.0.1
I0315 16:24:40.290854      15 kubelet_node_status.go:114] Node 127.0.0.1 was previously registered
I0315 16:24:40.290935      15 kubelet_node_status.go:75] Successfully registered node 127.0.0.1
I0315 16:24:40.293732      15 kubelet_node_status.go:114] Node 127.0.0.1 was previously registered
I0315 16:24:40.294428      15 kubelet_node_status.go:114] Node 127.0.0.1 was previously registered
I0315 16:24:40.294488      15 kubelet_node_status.go:199] Controller attach-detach setting changed to false; updating existing Node
I0315 16:24:40.298020      15 kubelet_node_status.go:114] Node 127.0.0.1 was previously registered
I0315 16:24:40.298079      15 kubelet_node_status.go:202] Controller attach-detach setting changed to true; updating existing Node
E0315 16:24:40.300731      15 kubelet_node_status.go:94] Unable to register node "127.0.0.1" with API server: 
E0315 16:24:40.301372      15 kubelet_node_status.go:100] Unable to register node "127.0.0.1" with API server: error getting existing node: 
I0315 16:24:40.302950      15 kubelet_node_status.go:114] Node 127.0.0.1 was previously registered
I0315 16:24:40.303052      15 kubelet_node_status.go:199] Controller attach-detach setting changed to false; updating existing Node
E0315 16:24:40.305037      15 kubelet_node_status.go:124] Unable to reconcile node "127.0.0.1" with API server: error updating node: failed to patch status "{\"metadata\":{\"annotations\":null}}" for node "127.0.0.1": 
I0315 16:24:40.310153      15 kubelet_node_status.go:286] Controller attach/detach is disabled for this node; Kubelet will attach and detach volumes
I0315 16:24:40.310719      15 kubelet_node_status.go:286] Controller attach/detach is disabled for this node; Kubelet will attach and detach volumes
E0315 16:24:40.311015      15 eviction_manager.go:247] eviction manager: failed to get summary stats: failed to get root cgroup stats: failed to get cgroup stats for "/": unexpected number of containers: 0
I0315 16:24:40.321287      15 kubelet_node_status.go:137] Zero out resource test.com/resource1 capacity in existing node.
I0315 16:24:40.321518      15 kubelet_node_status.go:137] Zero out resource test.com/resource2 capacity in existing node.
I0315 16:24:40.424943      15 kubelet_node_status.go:286] Controller attach/detach is disabled for this node; Kubelet will attach and detach volumes
I0315 16:24:40.425304      15 kubelet_node_status.go:72] Attempting to register node 127.0.0.1
I0315 16:24:40.425447      15 kubelet_node_status.go:75] Successfully registered node 127.0.0.1
I0315 16:24:40.526309      15 kubelet_node_status.go:286] Controller attach/detach is disabled for this node; Kubelet will attach and detach volumes
I0315 16:24:40.526726      15 kubelet_node_status.go:72] Attempting to register node 127.0.0.1
I0315 16:24:40.526849      15 kubelet_node_status.go:75] Successfully registered node 127.0.0.1
E0315 16:24:40.541106      15 kubelet_pods.go:147] Mount cannot be satisfied for container "", because the volume is missing or the volume mounter is nil: {Name:disk ReadOnly:true MountPath:/mnt/path3 SubPath: MountPropagation:<nil> SubPathExpr:}
E0315 16:24:40.541468      15 kubelet_pods.go:147] Mount cannot be satisfied for container "", because the volume is missing or the volume mounter is nil: {Name:disk ReadOnly:true MountPath:/mnt/path3 SubPath: MountPropagation:<nil> SubPathExpr:}
E0315 16:24:40.544896      15 kubelet_pods.go:108] Block volume cannot be satisfied for container "", because the volume is missing or the volume mapper is nil: {Name:disk DevicePath:/dev/sdaa}
E0315 16:24:40.545219      15 kubelet_pods.go:108] Block volume cannot be satisfied for container "", because the volume is missing or the volume mapper is nil: {Name:disk DevicePath:/dev/sdzz}
W0315 16:24:40.546984      15 feature_gate.go:218] Setting GA feature gate VolumeSubpath=false. It will be removed in a future release.
W0315 16:24:40.547172      15 feature_gate.go:218] Setting GA feature gate VolumeSubpath=true. It will be removed in a future release.
W0315 16:24:40.611564      15 kubelet_pods.go:1752] unable to retrieve pvc : - foo
W0315 16:24:40.612319      15 kubelet_pods.go:1758] unable to retrieve pv foo - foo
E0315 16:24:40.617762      15 kubelet_pods.go:388] hostname for pod:"test-pod" was longer than 63. Truncated hostname to :"1234567.1234567.1234567.1234567.1234567.1234567.1234567.1234567"
E0315 16:24:40.617970      15 kubelet_pods.go:388] hostname for pod:"test-pod" was longer than 63. Truncated hostname to :"1234567.1234567.1234567.1234567.1234567.1234567.1234567.123456."
E0315 16:24:40.618056      15 kubelet_pods.go:388] hostname for pod:"test-pod" was longer than 63. Truncated hostname to :"1234567.1234567.1234567.1234567.1234567.1234567.1234567.123456-"
I0315 16:24:40.619225      15 kubelet_resources.go:45] allocatable: map[cpu:{{6 0} {<nil>} 6 DecimalSI} memory:{{4294967296 0} {<nil>} 4Gi BinarySI}]
I0315 16:24:40.619654      15 kubelet_resources.go:45] allocatable: map[cpu:{{6 0} {<nil>} 6 DecimalSI} memory:{{4294967296 0} {<nil>} 4Gi BinarySI}]
I0315 16:24:40.619940      15 kubelet_resources.go:45] allocatable: map[cpu:{{6 0} {<nil>} 6 DecimalSI} memory:{{4294967296 0} {<nil>} 4Gi BinarySI}]
I0315 16:24:40.620214      15 kubelet_resources.go:45] allocatable: map[cpu:{{6 0} {<nil>} 6 DecimalSI} memory:{{4294967296 0} {<nil>} 4Gi BinarySI}]
E0315 16:24:40.622623      15 kubelet.go:1879] Update channel is closed. Exiting the sync loop.
I0315 16:24:40.622717      15 kubelet.go:1806] Starting kubelet main sync loop.
E0315 16:24:40.622821      15 kubelet.go:1879] Update channel is closed. Exiting the sync loop.
W0315 16:24:40.649438      15 predicate.go:74] Failed to admit pod failedpod_foo(4) - Update plugin resources failed due to Allocation failed, which is unexpected.
E0315 16:24:40.653578      15 runtime.go:195] invalid container ID: ""
E0315 16:24:40.653724      15 runtime.go:195] invalid container ID: ""
W0315 16:24:40.660919      15 kubelet.go:1624] Deleting mirror pod "foo_ns(11111111)" because it is outdated
W0315 16:24:40.723298      15 kubelet_getters.go:284] Path "/tmp/kubelet_test.911631435/pods/pod1uid/volumes" does not exist
W0315 16:24:40.723455      15 kubelet_getters.go:284] Path "/tmp/kubelet_test.911631435/pods/pod1uid/volumes" does not exist
E0315 16:24:40.723528      15 kubelet_volumes.go:154] Orphaned pod "pod1uid" found, but volume subpaths are still present on disk : There were a total of 1 errors similar to this. Turn up verbosity to see them.
W0315 16:24:40.729656      15 kubelet_getters.go:284] Path "/tmp/kubelet_test.042310800/pods/pod1uid/volumes" does not exist
W0315 16:24:40.729798      15 kubelet_getters.go:284] Path "/tmp/kubelet_test.042310800/pods/pod1uid/volumes" does not exist
E0315 16:24:40.733585      15 kubelet_volumes.go:154] Orphaned pod "pod1uid" found, but volume paths are still present on disk : There were a total of 1 errors similar to this. Turn up verbosity to see them.
W0315 16:24:40.735483      15 kubelet_getters.go:284] Path "/tmp/kubelet_test.612096185/pods/pod1uid/volumes" does not exist
W0315 16:24:40.735590      15 kubelet_getters.go:284] Path "/tmp/kubelet_test.612096185/pods/pod1uid/volumes" does not exist
E0315 16:24:40.735716      15 kubelet_volumes.go:154] Orphaned pod "pod1uid" found, but volume subpaths are still present on disk : There were a total of 1 errors similar to this. Turn up verbosity to see them.
I0315 16:24:40.737812      15 volume_manager.go:248] Starting Kubelet Volume Manager
I0315 16:24:40.737853      15 desired_state_of_world_populator.go:130] Desired state populator starts to run
I0315 16:24:40.839809      15 reconciler.go:227] operationExecutor.AttachVolume started for volume "vol1" (UniqueName: "fake/fake-device1") pod "foo" (UID: "12345678") 
I0315 16:24:40.839920      15 operation_generator.go:377] AttachVolume.Attach succeeded for volume "vol1" (UniqueName: "fake/fake-device1") from node "127.0.0.1" 
I0315 16:24:40.839963      15 reconciler.go:154] Reconciler: start to sync state
I0315 16:24:40.941148      15 reconciler.go:227] operationExecutor.AttachVolume started for volume "vol2" (UniqueName: "fake/fake-device2") pod "foo" (UID: "12345678") 
I0315 16:24:40.941191      15 operation_generator.go:589] MountVolume.WaitForAttach entering for volume "vol1" (UniqueName: "fake/fake-device1") pod "foo" (UID: "12345678") DevicePath "/dev/vdb-test"
I0315 16:24:40.941394      15 operation_generator.go:377] AttachVolume.Attach succeeded for volume "vol2" (UniqueName: "fake/fake-device2") from node "127.0.0.1" 
I0315 16:24:40.941479      15 operation_generator.go:598] MountVolume.WaitForAttach succeeded for volume "vol1" (UniqueName: "fake/fake-device1") pod "foo" (UID: "12345678") DevicePath "/dev/sdb"
I0315 16:24:40.941579      15 operation_generator.go:625] MountVolume.MountDevice succeeded for volume "vol1" (UniqueName: "fake/fake-device1") pod "foo" (UID: "12345678") device mount path ""
I0315 16:24:41.042284      15 operation_generator.go:589] MountVolume.WaitForAttach entering for volume "vol2" (UniqueName: "fake/fake-device2") pod "foo" (UID: "12345678") DevicePath "/dev/vdb-test"
I0315 16:24:41.042467      15 operation_generator.go:598] MountVolume.WaitForAttach succeeded for volume "vol2" (UniqueName: "fake/fake-device2") pod "foo" (UID: "12345678") DevicePath "/dev/sdb"
I0315 16:24:41.042591      15 operation_generator.go:625] MountVolume.MountDevice succeeded for volume "vol2" (UniqueName: "fake/fake-device2") pod "foo" (UID: "12345678") device mount path ""
I0315 16:24:41.338390      15 volume_manager.go:254] Shutting down Kubelet Volume Manager
I0315 16:24:41.341107      15 volume_manager.go:248] Starting Kubelet Volume Manager
I0315 16:24:41.341119      15 desired_state_of_world_populator.go:130] Desired state populator starts to run
I0315 16:24:41.542483      15 reconciler.go:227] operationExecutor.AttachVolume started for volume "vol1" (UniqueName: "fake/fake-device1") pod "pod1" (UID: "pod1uid") 
I0315 16:24:41.542803      15 operation_generator.go:377] AttachVolume.Attach succeeded for volume "vol1" (UniqueName: "fake/fake-device1") from node "127.0.0.1" 
I0315 16:24:41.542936      15 reconciler.go:227] operationExecutor.AttachVolume started for volume "vol2" (UniqueName: "fake/fake-device2") pod "pod2" (UID: "pod2uid") 
I0315 16:24:41.543299      15 operation_generator.go:377] AttachVolume.Attach succeeded for volume "vol2" (UniqueName: "fake/fake-device2") from node "127.0.0.1" 
I0315 16:24:41.543517      15 reconciler.go:227] operationExecutor.AttachVolume started for volume "vol3" (UniqueName: "fake/fake-device3") pod "pod3" (UID: "pod3uid") 
I0315 16:24:41.543619      15 operation_generator.go:377] AttachVolume.Attach succeeded for volume "vol3" (UniqueName: "fake/fake-device3") from node "127.0.0.1" 
I0315 16:24:41.543634      15 reconciler.go:154] Reconciler: start to sync state
I0315 16:24:41.645138      15 operation_generator.go:589] MountVolume.WaitForAttach entering for volume "vol1" (UniqueName: "fake/fake-device1") pod "pod1" (UID: "pod1uid") DevicePath "/dev/vdb-test"
I0315 16:24:41.645302      15 operation_generator.go:589] MountVolume.WaitForAttach entering for volume "vol2" (UniqueName: "fake/fake-device2") pod "pod2" (UID: "pod2uid") DevicePath "/dev/vdb-test"
I0315 16:24:41.645458      15 operation_generator.go:598] MountVolume.WaitForAttach succeeded for volume "vol1" (UniqueName: "fake/fake-device1") pod "pod1" (UID: "pod1uid") DevicePath "/dev/sdb"
I0315 16:24:41.645608      15 operation_generator.go:625] MountVolume.MountDevice succeeded for volume "vol1" (UniqueName: "fake/fake-device1") pod "pod1" (UID: "pod1uid") device mount path ""
I0315 16:24:41.645630      15 operation_generator.go:589] MountVolume.WaitForAttach entering for volume "vol3" (UniqueName: "fake/fake-device3") pod "pod3" (UID: "pod3uid") DevicePath "/dev/vdb-test"
I0315 16:24:41.645459      15 operation_generator.go:598] MountVolume.WaitForAttach succeeded for volume "vol2" (UniqueName: "fake/fake-device2") pod "pod2" (UID: "pod2uid") DevicePath "/dev/sdb"
I0315 16:24:41.645843      15 operation_generator.go:598] MountVolume.WaitForAttach succeeded for volume "vol3" (UniqueName: "fake/fake-device3") pod "pod3" (UID: "pod3uid") DevicePath "/dev/sdb"
I0315 16:24:41.645895      15 operation_generator.go:625] MountVolume.MountDevice succeeded for volume "vol2" (UniqueName: "fake/fake-device2") pod "pod2" (UID: "pod2uid") device mount path ""
I0315 16:24:41.645971      15 operation_generator.go:625] MountVolume.MountDevice succeeded for volume "vol3" (UniqueName: "fake/fake-device3") pod "pod3" (UID: "pod3uid") device mount path ""
I0315 16:24:41.942102      15 volume_manager.go:254] Shutting down Kubelet Volume Manager
I0315 16:24:41.944478      15 volume_manager.go:248] Starting Kubelet Volume Manager
I0315 16:24:41.944567      15 desired_state_of_world_populator.go:130] Desired state populator starts to run
I0315 16:24:42.145802      15 reconciler.go:227] operationExecutor.AttachVolume started for volume "vol1" (UniqueName: "fake/fake-device") pod "foo" (UID: "12345678") 
I0315 16:24:42.145929      15 reconciler.go:154] Reconciler: start to sync state
I0315 16:24:42.146028      15 operation_generator.go:377] AttachVolume.Attach succeeded for volume "vol1" (UniqueName: "fake/fake-device") from node "127.0.0.1" 
I0315 16:24:42.247075      15 operation_generator.go:589] MountVolume.WaitForAttach entering for volume "vol1" (UniqueName: "fake/fake-device") pod "foo" (UID: "12345678") DevicePath "/dev/vdb-test"
I0315 16:24:42.247228      15 operation_generator.go:598] MountVolume.WaitForAttach succeeded for volume "vol1" (UniqueName: "fake/fake-device") pod "foo" (UID: "12345678") DevicePath "/dev/sdb"
I0315 16:24:42.247352      15 operation_generator.go:625] MountVolume.MountDevice succeeded for volume "vol1" (UniqueName: "fake/fake-device") pod "foo" (UID: "12345678") device mount path ""
I0315 16:24:42.545402      15 volume_manager.go:254] Shutting down Kubelet Volume Manager
I0315 16:24:42.548352      15 volume_manager.go:248] Starting Kubelet Volume Manager
I0315 16:24:42.548499      15 desired_state_of_world_populator.go:130] Desired state populator starts to run
I0315 16:24:42.749846      15 reconciler.go:227] operationExecutor.AttachVolume started for volume "vol1" (UniqueName: "fake/fake-device") pod "foo" (UID: "12345678") 
I0315 16:24:42.749970      15 reconciler.go:154] Reconciler: start to sync state
I0315 16:24:42.750591      15 operation_generator.go:377] AttachVolume.Attach succeeded for volume "vol1" (UniqueName: "fake/fake-device") from node "127.0.0.1" 
I0315 16:24:42.851001      15 operation_generator.go:589] MountVolume.WaitForAttach entering for volume "vol1" (UniqueName: "fake/fake-device") pod "foo" (UID: "12345678") DevicePath "/dev/vdb-test"
I0315 16:24:42.851149      15 operation_generator.go:598] MountVolume.WaitForAttach succeeded for volume "vol1" (UniqueName: "fake/fake-device") pod "foo" (UID: "12345678") DevicePath "/dev/sdb"
I0315 16:24:42.851290      15 operation_generator.go:625] MountVolume.MountDevice succeeded for volume "vol1" (UniqueName: "fake/fake-device") pod "foo" (UID: "12345678") device mount path ""
==================
WARNING: DATA RACE
Write at 0x00c0006c5dd0 by goroutine 10:
  runtime.mapassign()
      GOROOT/src/runtime/map.go:576 +0x0
  k8s.io/kubernetes/pkg/kubelet/volumemanager/cache.(*actualStateOfWorld).RecordVolumeUnmount()
      pkg/kubelet/volumemanager/cache/actual_state_of_world.go:444 +0x333
  k8s.io/kubernetes/pkg/kubelet/volumemanager/populator.(*desiredStateOfWorldPopulator).findAndRemoveDeletedPods()
      pkg/kubelet/volumemanager/populator/desired_state_of_world_populator.go:270 +0x75e
  k8s.io/kubernetes/pkg/kubelet/volumemanager/populator.(*desiredStateOfWorldPopulator).populatorLoop()
      pkg/kubelet/volumemanager/populator/desired_state_of_world_populator.go:170 +0x2b8
  k8s.io/kubernetes/pkg/kubelet/volumemanager/populator.(*desiredStateOfWorldPopulator).populatorLoop-fm()
      pkg/kubelet/volumemanager/populator/desired_state_of_world_populator.go:153 +0x41
  k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil.func1()
      staging/src/k8s.io/apimachinery/pkg/util/wait/wait.go:152 +0x61
  k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil()
      staging/src/k8s.io/apimachinery/pkg/util/wait/wait.go:153 +0x108
  k8s.io/kubernetes/pkg/kubelet/volumemanager/populator.(*desiredStateOfWorldPopulator).Run()
      staging/src/k8s.io/apimachinery/pkg/util/wait/wait.go:88 +0x1af

Previous read at 0x00c0006c5dd0 by goroutine 20:
  k8s.io/kubernetes/pkg/kubelet/volumemanager/cache.(*actualStateOfWorld).GetUnmountedVolumes()
      pkg/kubelet/volumemanager/cache/actual_state_of_world.go:761 +0x297
  k8s.io/kubernetes/pkg/kubelet/volumemanager/reconciler.(*reconciler).reconcile()
      pkg/kubelet/volumemanager/reconciler/reconciler.go:277 +0x44a
  k8s.io/kubernetes/pkg/kubelet/volumemanager/reconciler.(*reconciler).reconciliationLoopFunc.func1()
      pkg/kubelet/volumemanager/reconciler/reconciler.go:148 +0x4a
  k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil.func1()
      staging/src/k8s.io/apimachinery/pkg/util/wait/wait.go:152 +0x61
  k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil()
      staging/src/k8s.io/apimachinery/pkg/util/wait/wait.go:153 +0x108
  k8s.io/kubernetes/pkg/kubelet/volumemanager/reconciler.(*reconciler).Run()
      staging/src/k8s.io/apimachinery/pkg/util/wait/wait.go:88 +0x8f

Goroutine 10 (running) created at:
  k8s.io/kubernetes/pkg/kubelet/volumemanager.(*volumeManager).Run()
      pkg/kubelet/volumemanager/volume_manager.go:245 +0xbd

Goroutine 20 (running) created at:
  k8s.io/kubernetes/pkg/kubelet/volumemanager.(*volumeManager).Run()
      pkg/kubelet/volumemanager/volume_manager.go:249 +0x15c
==================
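The reports above all reduce to one pattern: a populator goroutine mutates bookkeeping owned by actualStateOfWorld (the write at actual_state_of_world.go:444 in RecordVolumeUnmount) while the reconciler goroutine reads the same data. The following is a minimal, hypothetical sketch of that pattern, not the real kubelet types or the PR's code; running it with `go test -race` yields the same kind of detector report.

// racesketch_test.go - illustrative only; all names are assumptions.
package racesketch

import (
	"sync"
	"testing"
	"time"
)

type actualState struct {
	sync.RWMutex
	unmountTimes map[string]time.Time // per-volume bookkeeping, like the WIP PR adds
}

// recordUnmount is the buggy writer: it updates the map without taking Lock().
func (a *actualState) recordUnmount(vol string) {
	a.unmountTimes[vol] = time.Now() // DATA RACE: unguarded map write
}

// unmountedVolumes is the reader, which does hold the lock (as the reconciler does).
func (a *actualState) unmountedVolumes() []string {
	a.RLock()
	defer a.RUnlock()
	vols := make([]string, 0, len(a.unmountTimes))
	for v := range a.unmountTimes {
		vols = append(vols, v)
	}
	return vols
}

func TestRaceSketch(t *testing.T) {
	a := &actualState{unmountTimes: map[string]time.Time{}}
	done := make(chan struct{})
	go func() { // populator-style goroutine
		for i := 0; i < 1000; i++ {
			a.recordUnmount("vol1")
		}
		close(done)
	}()
	for i := 0; i < 1000; i++ { // reconciler-style goroutine
		_ = a.unmountedVolumes()
	}
	<-done
}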
==================
WARNING: DATA RACE
Write at 0x00c00039cbd0 by goroutine 10:
  k8s.io/kubernetes/pkg/kubelet/volumemanager/cache.(*actualStateOfWorld).RecordVolumeUnmount()
      pkg/kubelet/volumemanager/cache/actual_state_of_world.go:444 +0x351
  k8s.io/kubernetes/pkg/kubelet/volumemanager/populator.(*desiredStateOfWorldPopulator).findAndRemoveDeletedPods()
      pkg/kubelet/volumemanager/populator/desired_state_of_world_populator.go:270 +0x75e
  k8s.io/kubernetes/pkg/kubelet/volumemanager/populator.(*desiredStateOfWorldPopulator).populatorLoop()
      pkg/kubelet/volumemanager/populator/desired_state_of_world_populator.go:170 +0x2b8
  k8s.io/kubernetes/pkg/kubelet/volumemanager/populator.(*desiredStateOfWorldPopulator).populatorLoop-fm()
      pkg/kubelet/volumemanager/populator/desired_state_of_world_populator.go:153 +0x41
  k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil.func1()
      staging/src/k8s.io/apimachinery/pkg/util/wait/wait.go:152 +0x61
  k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil()
      staging/src/k8s.io/apimachinery/pkg/util/wait/wait.go:153 +0x108
  k8s.io/kubernetes/pkg/kubelet/volumemanager/populator.(*desiredStateOfWorldPopulator).Run()
      staging/src/k8s.io/apimachinery/pkg/util/wait/wait.go:88 +0x1af

Previous read at 0x00c00039cbd0 by goroutine 20:
  k8s.io/kubernetes/pkg/kubelet/volumemanager/cache.(*actualStateOfWorld).PodExistsInVolume()
      pkg/kubelet/volumemanager/cache/actual_state_of_world.go:657 +0x240
  k8s.io/kubernetes/pkg/kubelet/volumemanager/reconciler.(*reconciler).reconcile()
      pkg/kubelet/volumemanager/reconciler/reconciler.go:188 +0x2f9
  k8s.io/kubernetes/pkg/kubelet/volumemanager/reconciler.(*reconciler).reconciliationLoopFunc.func1()
      pkg/kubelet/volumemanager/reconciler/reconciler.go:148 +0x4a
  k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil.func1()
      staging/src/k8s.io/apimachinery/pkg/util/wait/wait.go:152 +0x61
  k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil()
      staging/src/k8s.io/apimachinery/pkg/util/wait/wait.go:153 +0x108
  k8s.io/kubernetes/pkg/kubelet/volumemanager/reconciler.(*reconciler).Run()
      staging/src/k8s.io/apimachinery/pkg/util/wait/wait.go:88 +0x8f

Goroutine 10 (running) created at:
  k8s.io/kubernetes/pkg/kubelet/volumemanager.(*volumeManager).Run()
      pkg/kubelet/volumemanager/volume_manager.go:245 +0xbd

Goroutine 20 (running) created at:
  k8s.io/kubernetes/pkg/kubelet/volumemanager.(*volumeManager).Run()
      pkg/kubelet/volumemanager/volume_manager.go:249 +0x15c
==================
I0315 16:24:43.152853      15 reconciler.go:181] operationExecutor.UnmountVolume started for volume "vol1" (UniqueName: "fake/fake-device") pod "12345678" (UID: "12345678") 
I0315 16:24:43.153112      15 operation_generator.go:817] UnmountVolume.TearDown succeeded for volume "fake/fake-device" (OuterVolumeSpecName: "vol1") pod "12345678" (UID: "12345678"). InnerVolumeSpecName "vol1". PluginName "fake", VolumeGidValue ""
I0315 16:24:43.253719      15 reconciler.go:294] operationExecutor.UnmountDevice started for volume "vol1" (UniqueName: "fake/fake-device") on node "127.0.0.1" 
I0315 16:24:43.253960      15 operation_generator.go:917] UnmountDevice succeeded for volume "vol1" %!(EXTRA string=UnmountDevice succeeded for volume "vol1" (UniqueName: "fake/fake-device") on node "127.0.0.1" )
==================
WARNING: DATA RACE
Write at 0x00c0018a6d80 by goroutine 25:
  runtime.mapassign_faststr()
      GOROOT/src/runtime/map_faststr.go:202 +0x0
  k8s.io/kubernetes/pkg/kubelet/volumemanager/cache.(*actualStateOfWorld).SetVolumeGloballyMounted()
      pkg/kubelet/volumemanager/cache/actual_state_of_world.go:602 +0x24a
  k8s.io/kubernetes/pkg/kubelet/volumemanager/cache.(*actualStateOfWorld).MarkDeviceAsUnmounted()
      pkg/kubelet/volumemanager/cache/actual_state_of_world.go:373 +0x65
  k8s.io/kubernetes/pkg/volume/util/operationexecutor.(*operationGenerator).GenerateUnmountDeviceFunc.func1()
      pkg/volume/util/operationexecutor/operation_generator.go:920 +0x75d
  k8s.io/kubernetes/pkg/volume/util/nestedpendingoperations.(*nestedPendingOperations).Run.func1()
      pkg/volume/util/nestedpendingoperations/nestedpendingoperations.go:143 +0x15f

Previous read at 0x00c0018a6d80 by goroutine 10:
  runtime.mapaccess2_faststr()
      GOROOT/src/runtime/map_faststr.go:107 +0x0
  k8s.io/kubernetes/pkg/kubelet/volumemanager/cache.(*actualStateOfWorld).RecordVolumeUnmount()
      pkg/kubelet/volumemanager/cache/actual_state_of_world.go:435 +0xc7
  k8s.io/kubernetes/pkg/kubelet/volumemanager/populator.(*desiredStateOfWorldPopulator).findAndRemoveDeletedPods()
      pkg/kubelet/volumemanager/populator/desired_state_of_world_populator.go:270 +0x75e
  k8s.io/kubernetes/pkg/kubelet/volumemanager/populator.(*desiredStateOfWorldPopulator).populatorLoop()
      pkg/kubelet/volumemanager/populator/desired_state_of_world_populator.go:170 +0x2b8
  k8s.io/kubernetes/pkg/kubelet/volumemanager/populator.(*desiredStateOfWorldPopulator).populatorLoop-fm()
      pkg/kubelet/volumemanager/populator/desired_state_of_world_populator.go:153 +0x41
  k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil.func1()
      staging/src/k8s.io/apimachinery/pkg/util/wait/wait.go:152 +0x61
  k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil()
      staging/src/k8s.io/apimachinery/pkg/util/wait/wait.go:153 +0x108
  k8s.io/kubernetes/pkg/kubelet/volumemanager/populator.(*desiredStateOfWorldPopulator).Run()
      staging/src/k8s.io/apimachinery/pkg/util/wait/wait.go:88 +0x1af

Goroutine 25 (running) created at:
  k8s.io/kubernetes/pkg/volume/util/nestedpendingoperations.(*nestedPendingOperations).Run()
      pkg/volume/util/nestedpendingoperations/nestedpendingoperations.go:130 +0x412
  k8s.io/kubernetes/pkg/volume/util/operationexecutor.(*operationExecutor).UnmountDevice()
      pkg/volume/util/operationexecutor/operation_executor.go:838 +0x221
  k8s.io/kubernetes/pkg/kubelet/volumemanager/reconciler.(*reconciler).reconcile()
      pkg/kubelet/volumemanager/reconciler/reconciler.go:284 +0x789
  k8s.io/kubernetes/pkg/kubelet/volumemanager/reconciler.(*reconciler).reconciliationLoopFunc.func1()
      pkg/kubelet/volumemanager/reconciler/reconciler.go:148 +0x4a
  k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil.func1()
      staging/src/k8s.io/apimachinery/pkg/util/wait/wait.go:152 +0x61
  k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil()
      staging/src/k8s.io/apimachinery/pkg/util/wait/wait.go:153 +0x108
  k8s.io/kubernetes/pkg/kubelet/volumemanager/reconciler.(*reconciler).Run()
      staging/src/k8s.io/apimachinery/pkg/util/wait/wait.go:88 +0x8f

Goroutine 10 (running) created at:
  k8s.io/kubernetes/pkg/kubelet/volumemanager.(*volumeManager).Run()
      pkg/kubelet/volumemanager/volume_manager.go:245 +0xbd
==================
==================
WARNING: DATA RACE
Write at 0x00c0003a2408 by goroutine 25:
  k8s.io/kubernetes/pkg/kubelet/volumemanager/cache.(*actualStateOfWorld).SetVolumeGloballyMounted()
      pkg/kubelet/volumemanager/cache/actual_state_of_world.go:602 +0x268
  k8s.io/kubernetes/pkg/kubelet/volumemanager/cache.(*actualStateOfWorld).MarkDeviceAsUnmounted()
      pkg/kubelet/volumemanager/cache/actual_state_of_world.go:373 +0x65
  k8s.io/kubernetes/pkg/volume/util/operationexecutor.(*operationGenerator).GenerateUnmountDeviceFunc.func1()
      pkg/volume/util/operationexecutor/operation_generator.go:920 +0x75d
  k8s.io/kubernetes/pkg/volume/util/nestedpendingoperations.(*nestedPendingOperations).Run.func1()
      pkg/volume/util/nestedpendingoperations/nestedpendingoperations.go:143 +0x15f

Previous read at 0x00c0003a2408 by goroutine 10:
  k8s.io/kubernetes/pkg/kubelet/volumemanager/cache.(*actualStateOfWorld).RecordVolumeUnmount()
      pkg/kubelet/volumemanager/cache/actual_state_of_world.go:435 +0xef
  k8s.io/kubernetes/pkg/kubelet/volumemanager/populator.(*desiredStateOfWorldPopulator).findAndRemoveDeletedPods()
      pkg/kubelet/volumemanager/populator/desired_state_of_world_populator.go:270 +0x75e
  k8s.io/kubernetes/pkg/kubelet/volumemanager/populator.(*desiredStateOfWorldPopulator).populatorLoop()
      pkg/kubelet/volumemanager/populator/desired_state_of_world_populator.go:170 +0x2b8
  k8s.io/kubernetes/pkg/kubelet/volumemanager/populator.(*desiredStateOfWorldPopulator).populatorLoop-fm()
      pkg/kubelet/volumemanager/populator/desired_state_of_world_populator.go:153 +0x41
  k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil.func1()
      staging/src/k8s.io/apimachinery/pkg/util/wait/wait.go:152 +0x61
  k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil()
      staging/src/k8s.io/apimachinery/pkg/util/wait/wait.go:153 +0x108
  k8s.io/kubernetes/pkg/kubelet/volumemanager/populator.(*desiredStateOfWorldPopulator).Run()
      staging/src/k8s.io/apimachinery/pkg/util/wait/wait.go:88 +0x1af

Goroutine 25 (running) created at:
  k8s.io/kubernetes/pkg/volume/util/nestedpendingoperations.(*nestedPendingOperations).Run()
      pkg/volume/util/nestedpendingoperations/nestedpendingoperations.go:130 +0x412
  k8s.io/kubernetes/pkg/volume/util/operationexecutor.(*operationExecutor).UnmountDevice()
      pkg/volume/util/operationexecutor/operation_executor.go:838 +0x221
  k8s.io/kubernetes/pkg/kubelet/volumemanager/reconciler.(*reconciler).reconcile()
      pkg/kubelet/volumemanager/reconciler/reconciler.go:284 +0x789
  k8s.io/kubernetes/pkg/kubelet/volumemanager/reconciler.(*reconciler).reconciliationLoopFunc.func1()
      pkg/kubelet/volumemanager/reconciler/reconciler.go:148 +0x4a
  k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil.func1()
      staging/src/k8s.io/apimachinery/pkg/util/wait/wait.go:152 +0x61
  k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil()
      staging/src/k8s.io/apimachinery/pkg/util/wait/wait.go:153 +0x108
  k8s.io/kubernetes/pkg/kubelet/volumemanager/reconciler.(*reconciler).Run()
      staging/src/k8s.io/apimachinery/pkg/util/wait/wait.go:88 +0x8f

Goroutine 10 (running) created at:
  k8s.io/kubernetes/pkg/kubelet/volumemanager.(*volumeManager).Run()
      pkg/kubelet/volumemanager/volume_manager.go:245 +0xbd
==================
I0315 16:24:43.354750      15 reconciler.go:315] operationExecutor.DetachVolume started for volume "vol1" (UniqueName: "fake/fake-device") on node "127.0.0.1" 
I0315 16:24:43.354989      15 operation_generator.go:500] DetachVolume.Detach succeeded for volume "vol1" (UniqueName: "fake/fake-device") on node "127.0.0.1" 
I0315 16:24:43.401621      15 volume_manager.go:254] Shutting down Kubelet Volume Manager
--- FAIL: TestVolumeUnmountAndDetachControllerDisabled (0.86s)
    testing.go:809: race detected during execution of test
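Continuing the illustrative sketch after the first race report: one plausible repair is for the writer to take the same mutex the readers already hold before touching the shared map. This is only a hedged illustration of the fix pattern the detector points at, not the change actually made in this PR.

// racesketch_fix.go - illustrative only; names are assumptions, not kubelet code.
package racesketch

import (
	"sync"
	"time"
)

type actualState struct {
	sync.RWMutex
	unmountTimes map[string]time.Time
}

// recordUnmount now holds the write lock for the map update, so concurrent
// reads performed under RLock elsewhere no longer race with it.
func (a *actualState) recordUnmount(vol string) {
	a.Lock()
	defer a.Unlock()
	a.unmountTimes[vol] = time.Now()
}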
I0315 16:24:43.405929      15 volume_manager.go:248] Starting Kubelet Volume Manager
I0315 16:24:43.406085      15 desired_state_of_world_populator.go:130] Desired state populator starts to run
I0315 16:24:43.607320      15 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "vol1" (UniqueName: "fake/fake-device") pod "foo" (UID: "12345678") 
I0315 16:24:43.607489      15 reconciler.go:154] Reconciler: start to sync state
I0315 16:24:43.607599      15 operation_generator.go:1341] Controller attach succeeded for volume "vol1" (UniqueName: "fake/fake-device") pod "foo" (UID: "12345678") device path: "fake/path"
I0315 16:24:43.708731      15 operation_generator.go:589] MountVolume.WaitForAttach entering for volume "vol1" (UniqueName: "fake/fake-device") pod "foo" (UID: "12345678") DevicePath "fake/path"
I0315 16:24:44.216856      15 operation_generator.go:598] MountVolume.WaitForAttach succeeded for volume "vol1" (UniqueName: "fake/fake-device") pod "foo" (UID: "12345678") DevicePath "/dev/sdb"
I0315 16:24:44.216983      15 operation_generator.go:625] MountVolume.MountDevice succeeded for volume "vol1" (UniqueName: "fake/fake-device") pod "foo" (UID: "12345678") device mount path ""
I0315 16:24:44.306539      15 volume_manager.go:254] Shutting down Kubelet Volume Manager
I0315 16:24:44.309816      15 volume_manager.go:248] Starting Kubelet Volume Manager
I0315 16:24:44.309994      15 desired_state_of_world_populator.go:130] Desired state populator starts to run
I0315 16:24:44.411844      15 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "vol1" (UniqueName: "fake/fake-device") pod "foo" (UID: "12345678") 
I0315 16:24:44.411947      15 reconciler.go:154] Reconciler: start to sync state
E0315 16:24:44.412264      15 nestedpendingoperations.go:267] Operation for "\"fake/fake-device\"" failed. No retries permitted until 2019-03-15 16:24:44.912098897 +0000 UTC m=+10.196182240 (durationBeforeRetry 500ms). Error: "Volume has not been added to the list of VolumesInUse in the node's volume status for volume \"vol1\" (UniqueName: \"fake/fake-device\") pod \"foo\" (UID: \"12345678\") "
I0315 16:24:44.914531      15 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "vol1" (UniqueName: "fake/fake-device") pod "foo" (UID: "12345678") 
I0315 16:24:44.914946      15 operation_generator.go:1341] Controller attach succeeded for volume "vol1" (UniqueName: "fake/fake-device") pod "foo" (UID: "12345678") device path: "fake/path"
I0315 16:24:45.015935      15 operation_generator.go:589] MountVolume.WaitForAttach entering for volume "vol1" (UniqueName: "fake/fake-device") pod "foo" (UID: "12345678") DevicePath "fake/path"
I0315 16:24:45.016075      15 operation_generator.go:598] MountVolume.WaitForAttach succeeded for volume "vol1" (UniqueName: "fake/fake-device") pod "foo" (UID: "12345678") DevicePath "/dev/sdb"
I0315 16:24:45.016278      15 operation_generator.go:625] MountVolume.MountDevice succeeded for volume "vol1" (UniqueName: "fake/fake-device") pod "foo" (UID: "12345678") device mount path ""
I0315 16:24:45.022135      15 kubelet_node_status.go:286] Controller attach/detach is disabled for this node; Kubelet will attach and detach volumes
E0315 16:24:45.022630      15 eviction_manager.go:247] eviction manager: failed to get summary stats: failed to get root cgroup stats: failed to get cgroup stats for "/": unexpected number of containers: 0
I0315 16:24:45.035763      15 kubelet_node_status.go:286] Controller attach/detach is disabled for this node; Kubelet will attach and detach volumes
E0315 16:24:45.036236      15 eviction_manager.go:247] eviction manager: failed to get summary stats: failed to get root cgroup stats: failed to get cgroup stats for "/": unexpected number of containers: 0
I0315 16:24:45.046527      15 kubelet_node_status.go:286] Controller attach/detach is disabled for this node; Kubelet will attach and detach volumes
E0315 16:24:45.047004      15 eviction_manager.go:247] eviction manager: failed to get summary stats: failed to get root cgroup stats: failed to get cgroup stats for "/": unexpected number of containers: 0
I0315 16:24:45.217180      15 reconciler.go:181] operationExecutor.UnmountVolume started for volume "vol1" (UniqueName: "fake/fake-device") pod "12345678" (UID: "12345678") 
I0315 16:24:45.217328      15 operation_generator.go:817] UnmountVolume.TearDown succeeded for volume "fake/fake-device" (OuterVolumeSpecName: "vol1") pod "12345678" (UID: "12345678"). InnerVolumeSpecName "vol1". PluginName "fake", VolumeGidValue ""
I0315 16:24:45.318113      15 reconciler.go:294] operationExecutor.UnmountDevice started for volume "vol1" (UniqueName: "fake/fake-device") on node "127.0.0.1" 
I0315 16:24:45.318428      15 operation_generator.go:917] UnmountDevice succeeded for volume "vol1" %!(EXTRA string=UnmountDevice succeeded for volume "vol1" (UniqueName: "fake/fake-device") on node "127.0.0.1" )
I0315 16:24:45.418859      15 reconciler.go:301] Volume detached for volume "vol1" (UniqueName: "fake/fake-device") on node "127.0.0.1" DevicePath "/dev/sdb"
I0315 16:24:45.462132      15 volume_manager.go:254] Shutting down Kubelet Volume Manager
W0315 16:24:45.463998      15 pod_container_deletor.go:75] Container "abc" not found in pod's containers
I0315 16:24:45.591845      15 runonce.go:88] Waiting for 1 pods
I0315 16:24:45.592032      15 runonce.go:123] pod "foo_new(12345678)" containers running
I0315 16:24:45.592210      15 runonce.go:102] started pod "foo_new(12345678)"
I0315 16:24:45.592268      15 runonce.go:108] 1 pods started
FAIL

				from junit_bazel.xml


Error lines from build-log.txt

... skipping 154 lines ...
W0315 16:20:44.289] [7,370 / 9,130] 3 / 820 tests; GoLink cmd/kube-apiserver/app/options/linux_amd64_race_stripped/go_default_test; 14s linux-sandbox ... (8 actions, 7 running)
W0315 16:21:15.782] [7,557 / 9,155] 3 / 820 tests; GoLink cmd/kube-scheduler/app/options/linux_amd64_race_stripped/go_default_test; 14s linux-sandbox ... (8 actions running)
W0315 16:21:51.826] [7,701 / 9,179] 3 / 820 tests; GoLink cmd/kube-controller-manager/app/linux_amd64_race_stripped/go_default_test; 32s linux-sandbox ... (8 actions running)
W0315 16:22:32.954] [7,911 / 9,215] 7 / 820 tests; GoLink cmd/genkubedocs/linux_amd64_race_stripped/go_default_test; 11s linux-sandbox ... (6 actions, 5 running)
W0315 16:23:20.253] [9,742 / 9,759] 586 / 820 tests; GoLink pkg/kubelet/volumemanager/linux_amd64_race_stripped/go_default_test; 9s linux-sandbox ... (8 actions running)
W0315 16:24:18.067] [10,827 / 10,830] 817 / 820 tests; Testing //pkg/volume/util/operationexecutor:go_default_test; 12s linux-sandbox ... (3 actions, 2 running)
W0315 16:24:24.087] FAIL: //pkg/kubelet:go_default_test (see /bazel-scratch/.cache/bazel/_bazel_root/e9f728bbd90b3fba632eb31b20e1dacd/execroot/__main__/bazel-out/k8-fastbuild/testlogs/pkg/kubelet/go_default_test/test_attempts/attempt_1.log)
W0315 16:24:34.664] FAIL: //pkg/kubelet:go_default_test (see /bazel-scratch/.cache/bazel/_bazel_root/e9f728bbd90b3fba632eb31b20e1dacd/execroot/__main__/bazel-out/k8-fastbuild/testlogs/pkg/kubelet/go_default_test/test_attempts/attempt_2.log)
W0315 16:24:46.125] 
W0315 16:24:46.126] FAILED: //pkg/kubelet:go_default_test (Summary)
W0315 16:24:46.126]       /bazel-scratch/.cache/bazel/_bazel_root/e9f728bbd90b3fba632eb31b20e1dacd/execroot/__main__/bazel-out/k8-fastbuild/testlogs/pkg/kubelet/go_default_test/test_attempts/attempt_1.log
W0315 16:24:46.126]       /bazel-scratch/.cache/bazel/_bazel_root/e9f728bbd90b3fba632eb31b20e1dacd/execroot/__main__/bazel-out/k8-fastbuild/testlogs/pkg/kubelet/go_default_test/test_attempts/attempt_2.log
W0315 16:24:46.126]       /bazel-scratch/.cache/bazel/_bazel_root/e9f728bbd90b3fba632eb31b20e1dacd/execroot/__main__/bazel-out/k8-fastbuild/testlogs/pkg/kubelet/go_default_test/test.log
W0315 16:24:46.127] FAIL: //pkg/kubelet:go_default_test (see /bazel-scratch/.cache/bazel/_bazel_root/e9f728bbd90b3fba632eb31b20e1dacd/execroot/__main__/bazel-out/k8-fastbuild/testlogs/pkg/kubelet/go_default_test/test.log)
W0315 16:24:46.127] INFO: From Testing //pkg/kubelet:go_default_test:
I0315 16:24:46.228] ==================== Test output for //pkg/kubelet:go_default_test:
I0315 16:24:46.228] I0315 16:24:13.694695      15 kubelet_node_status.go:286] Controller attach/detach is disabled for this node; Kubelet will attach and detach volumes
I0315 16:24:46.229] I0315 16:24:13.695377      15 kubelet_node_status.go:286] Controller attach/detach is disabled for this node; Kubelet will attach and detach volumes
I0315 16:24:46.229] E0315 16:24:13.695726      15 eviction_manager.go:247] eviction manager: failed to get summary stats: failed to get root cgroup stats: failed to get cgroup stats for "/": unexpected number of containers: 0
I0315 16:24:46.229] I0315 16:24:13.708435      15 kubelet_node_status.go:286] Controller attach/detach is disabled for this node; Kubelet will attach and detach volumes
I0315 16:24:46.229] I0315 16:24:13.708969      15 kubelet_node_status.go:286] Controller attach/detach is disabled for this node; Kubelet will attach and detach volumes
I0315 16:24:46.230] E0315 16:24:13.709225      15 eviction_manager.go:247] eviction manager: failed to get summary stats: failed to get root cgroup stats: failed to get cgroup stats for "/": unexpected number of containers: 0
I0315 16:24:46.230] I0315 16:24:13.720217      15 kubelet_node_status.go:286] Controller attach/detach is disabled for this node; Kubelet will attach and detach volumes
I0315 16:24:46.230] I0315 16:24:13.720800      15 kubelet_node_status.go:286] Controller attach/detach is disabled for this node; Kubelet will attach and detach volumes
I0315 16:24:46.230] E0315 16:24:13.721046      15 eviction_manager.go:247] eviction manager: failed to get summary stats: failed to get root cgroup stats: failed to get cgroup stats for "/": unexpected number of containers: 0
I0315 16:24:46.231] E0315 16:24:14.734290      15 kubelet_node_status.go:385] Error updating node status, will retry: error getting node "127.0.0.1": Get http://127.0.0.1:38825/api/v1/nodes/127.0.0.1?resourceVersion=0&timeout=1s: net/http: request canceled (Client.Timeout exceeded while awaiting headers)
I0315 16:24:46.231] E0315 16:24:15.735316      15 kubelet_node_status.go:385] Error updating node status, will retry: error getting node "127.0.0.1": Get http://127.0.0.1:38825/api/v1/nodes/127.0.0.1?timeout=1s: context deadline exceeded (Client.Timeout exceeded while awaiting headers)
I0315 16:24:46.231] E0315 16:24:16.736300      15 kubelet_node_status.go:385] Error updating node status, will retry: error getting node "127.0.0.1": Get http://127.0.0.1:38825/api/v1/nodes/127.0.0.1?timeout=1s: context deadline exceeded (Client.Timeout exceeded while awaiting headers)
I0315 16:24:46.231] E0315 16:24:17.737468      15 kubelet_node_status.go:385] Error updating node status, will retry: error getting node "127.0.0.1": Get http://127.0.0.1:38825/api/v1/nodes/127.0.0.1?timeout=1s: net/http: request canceled (Client.Timeout exceeded while awaiting headers)
I0315 16:24:46.231] E0315 16:24:18.738590      15 kubelet_node_status.go:385] Error updating node status, will retry: error getting node "127.0.0.1": Get http://127.0.0.1:38825/api/v1/nodes/127.0.0.1?timeout=1s: context deadline exceeded (Client.Timeout exceeded while awaiting headers)
I0315 16:24:46.232] I0315 16:24:18.741956      15 kubelet_node_status.go:286] Controller attach/detach is disabled for this node; Kubelet will attach and detach volumes
I0315 16:24:46.232] I0315 16:24:18.742702      15 kubelet_node_status.go:286] Controller attach/detach is disabled for this node; Kubelet will attach and detach volumes
I0315 16:24:46.232] E0315 16:24:18.743230      15 eviction_manager.go:247] eviction manager: failed to get summary stats: failed to get root cgroup stats: failed to get cgroup stats for "/": unexpected number of containers: 0
I0315 16:24:46.232] I0315 16:24:18.752810      15 setters.go:521] Node became not ready: {Type:Ready Status:False LastHeartbeatTime:2019-03-15 16:23:48.752607045 +0000 UTC m=-24.647539555 LastTransitionTime:2019-03-15 16:23:48.752607045 +0000 UTC m=-24.647539555 Reason:KubeletNotReady Message:container runtime is down.}
I0315 16:24:46.232] E0315 16:24:18.758191      15 kubelet.go:2157] Container runtime sanity check failed: injected runtime status error
I0315 16:24:46.233] E0315 16:24:18.764317      15 kubelet.go:2161] Container runtime status is nil
I0315 16:24:46.233] E0315 16:24:18.770605      15 kubelet.go:2170] Container runtime network not ready: <nil>
I0315 16:24:46.233] E0315 16:24:18.770720      15 kubelet.go:2182] Container runtime not ready: <nil>
I0315 16:24:46.233] E0315 16:24:18.776335      15 kubelet.go:2182] Container runtime not ready: RuntimeReady=false reason: message:
I0315 16:24:46.233] E0315 16:24:18.787142      15 kubelet.go:2170] Container runtime network not ready: NetworkReady=false reason: message:
I0315 16:24:46.234] I0315 16:24:18.787504      15 setters.go:521] Node became not ready: {Type:Ready Status:False LastHeartbeatTime:2019-03-15 16:24:18.758178911 +0000 UTC m=+5.358032350 LastTransitionTime:2019-03-15 16:24:18.758178911 +0000 UTC m=+5.358032350 Reason:KubeletNotReady Message:runtime network not ready: NetworkReady=false reason: message:}
I0315 16:24:46.234] E0315 16:24:18.795393      15 kubelet_node_status.go:385] Error updating node status, will retry: error getting node "127.0.0.1": nodes "127.0.0.1" not found
I0315 16:24:46.234] E0315 16:24:18.795492      15 kubelet_node_status.go:385] Error updating node status, will retry: error getting node "127.0.0.1": nodes "127.0.0.1" not found
I0315 16:24:46.234] E0315 16:24:18.795554      15 kubelet_node_status.go:385] Error updating node status, will retry: error getting node "127.0.0.1": nodes "127.0.0.1" not found
I0315 16:24:46.234] E0315 16:24:18.795609      15 kubelet_node_status.go:385] Error updating node status, will retry: error getting node "127.0.0.1": nodes "127.0.0.1" not found
I0315 16:24:46.235] E0315 16:24:18.795673      15 kubelet_node_status.go:385] Error updating node status, will retry: error getting node "127.0.0.1": nodes "127.0.0.1" not found
I0315 16:24:46.235] I0315 16:24:18.798314      15 kubelet_node_status.go:286] Controller attach/detach is disabled for this node; Kubelet will attach and detach volumes
I0315 16:24:46.235] I0315 16:24:18.798796      15 kubelet_node_status.go:286] Controller attach/detach is disabled for this node; Kubelet will attach and detach volumes
I0315 16:24:46.235] E0315 16:24:18.799155      15 eviction_manager.go:247] eviction manager: failed to get summary stats: failed to get root cgroup stats: failed to get cgroup stats for "/": unexpected number of containers: 0
I0315 16:24:46.235] I0315 16:24:18.824679      15 kubelet_network.go:77] Setting Pod CIDR:  -> 10.0.0.0/24
I0315 16:24:46.236] I0315 16:24:18.970254      15 kubelet_node_status.go:286] Controller attach/detach is disabled for this node; Kubelet will attach and detach volumes
I0315 16:24:46.236] I0315 16:24:18.970619      15 kubelet_node_status.go:72] Attempting to register node 127.0.0.1
I0315 16:24:46.236] I0315 16:24:18.970835      15 kubelet_node_status.go:114] Node 127.0.0.1 was previously registered
I0315 16:24:46.236] I0315 16:24:18.970964      15 kubelet_node_status.go:75] Successfully registered node 127.0.0.1
I0315 16:24:46.236] I0315 16:24:18.975031      15 kubelet_node_status.go:114] Node 127.0.0.1 was previously registered
I0315 16:24:46.236] I0315 16:24:18.976043      15 kubelet_node_status.go:114] Node 127.0.0.1 was previously registered
I0315 16:24:46.237] I0315 16:24:18.976104      15 kubelet_node_status.go:199] Controller attach-detach setting changed to false; updating existing Node
I0315 16:24:46.237] I0315 16:24:18.981006      15 kubelet_node_status.go:114] Node 127.0.0.1 was previously registered
I0315 16:24:46.237] I0315 16:24:18.981067      15 kubelet_node_status.go:202] Controller attach-detach setting changed to true; updating existing Node
I0315 16:24:46.237] E0315 16:24:18.983948      15 kubelet_node_status.go:94] Unable to register node "127.0.0.1" with API server: 
I0315 16:24:46.237] E0315 16:24:18.984649      15 kubelet_node_status.go:100] Unable to register node "127.0.0.1" with API server: error getting existing node: 
I0315 16:24:46.237] I0315 16:24:18.985333      15 kubelet_node_status.go:114] Node 127.0.0.1 was previously registered
I0315 16:24:46.237] I0315 16:24:18.985383      15 kubelet_node_status.go:199] Controller attach-detach setting changed to false; updating existing Node
I0315 16:24:46.238] E0315 16:24:18.986184      15 kubelet_node_status.go:124] Unable to reconcile node "127.0.0.1" with API server: error updating node: failed to patch status "{\"metadata\":{\"annotations\":null}}" for node "127.0.0.1": 
I0315 16:24:46.238] I0315 16:24:18.990336      15 kubelet_node_status.go:286] Controller attach/detach is disabled for this node; Kubelet will attach and detach volumes
I0315 16:24:46.238] I0315 16:24:18.990879      15 kubelet_node_status.go:286] Controller attach/detach is disabled for this node; Kubelet will attach and detach volumes
I0315 16:24:46.238] E0315 16:24:18.991246      15 eviction_manager.go:247] eviction manager: failed to get summary stats: failed to get root cgroup stats: failed to get cgroup stats for "/": unexpected number of containers: 0
I0315 16:24:46.238] I0315 16:24:19.000492      15 kubelet_node_status.go:137] Zero out resource test.com/resource1 capacity in existing node.
I0315 16:24:46.238] I0315 16:24:19.000924      15 kubelet_node_status.go:137] Zero out resource test.com/resource2 capacity in existing node.
I0315 16:24:46.239] I0315 16:24:19.103607      15 kubelet_node_status.go:286] Controller attach/detach is disabled for this node; Kubelet will attach and detach volumes
I0315 16:24:46.239] I0315 16:24:19.103960      15 kubelet_node_status.go:72] Attempting to register node 127.0.0.1
I0315 16:24:46.239] I0315 16:24:19.104097      15 kubelet_node_status.go:75] Successfully registered node 127.0.0.1
I0315 16:24:46.239] I0315 16:24:19.205292      15 kubelet_node_status.go:286] Controller attach/detach is disabled for this node; Kubelet will attach and detach volumes
... skipping 14 lines ...
I0315 16:24:46.242] I0315 16:24:19.293609      15 kubelet_resources.go:45] allocatable: map[cpu:{{6 0} {<nil>} 6 DecimalSI} memory:{{4294967296 0} {<nil>} 4Gi BinarySI}]
I0315 16:24:46.242] I0315 16:24:19.293992      15 kubelet_resources.go:45] allocatable: map[cpu:{{6 0} {<nil>} 6 DecimalSI} memory:{{4294967296 0} {<nil>} 4Gi BinarySI}]
I0315 16:24:46.242] I0315 16:24:19.294250      15 kubelet_resources.go:45] allocatable: map[cpu:{{6 0} {<nil>} 6 DecimalSI} memory:{{4294967296 0} {<nil>} 4Gi BinarySI}]
I0315 16:24:46.242] E0315 16:24:19.295803      15 kubelet.go:1879] Update channel is closed. Exiting the sync loop.
I0315 16:24:46.242] I0315 16:24:19.295867      15 kubelet.go:1806] Starting kubelet main sync loop.
I0315 16:24:46.242] E0315 16:24:19.295951      15 kubelet.go:1879] Update channel is closed. Exiting the sync loop.
I0315 16:24:46.242] W0315 16:24:19.312459      15 predicate.go:74] Failed to admit pod failedpod_foo(4) - Update plugin resources failed due to Allocation failed, which is unexpected.
I0315 16:24:46.243] E0315 16:24:19.319336      15 runtime.go:195] invalid container ID: ""
I0315 16:24:46.243] E0315 16:24:19.319465      15 runtime.go:195] invalid container ID: ""
I0315 16:24:46.243] W0315 16:24:19.325492      15 kubelet.go:1624] Deleting mirror pod "foo_ns(11111111)" because it is outdated
I0315 16:24:46.243] W0315 16:24:19.381491      15 kubelet_getters.go:284] Path "/tmp/kubelet_test.758787494/pods/pod1uid/volumes" does not exist
I0315 16:24:46.243] W0315 16:24:19.381615      15 kubelet_getters.go:284] Path "/tmp/kubelet_test.758787494/pods/pod1uid/volumes" does not exist
I0315 16:24:46.243] E0315 16:24:19.385355      15 kubelet_volumes.go:154] Orphaned pod "pod1uid" found, but volume paths are still present on disk : There were a total of 1 errors similar to this. Turn up verbosity to see them.
... skipping 323 lines ...
I0315 16:24:46.305]   k8s.io/kubernetes/pkg/kubelet/volumemanager.(*volumeManager).Run()
I0315 16:24:46.305]       pkg/kubelet/volumemanager/volume_manager.go:245 +0xbd
I0315 16:24:46.305] ==================
I0315 16:24:46.306] I0315 16:24:22.013544      15 reconciler.go:315] operationExecutor.DetachVolume started for volume "vol1" (UniqueName: "fake/fake-device") on node "127.0.0.1" 
I0315 16:24:46.306] I0315 16:24:22.013745      15 operation_generator.go:500] DetachVolume.Detach succeeded for volume "vol1" (UniqueName: "fake/fake-device") on node "127.0.0.1" 
I0315 16:24:46.306] I0315 16:24:22.058902      15 volume_manager.go:254] Shutting down Kubelet Volume Manager
I0315 16:24:46.306] --- FAIL: TestVolumeUnmountAndDetachControllerDisabled (0.85s)
I0315 16:24:46.306]     testing.go:809: race detected during execution of test
I0315 16:24:46.306] I0315 16:24:22.061142      15 volume_manager.go:248] Starting Kubelet Volume Manager
I0315 16:24:46.307] I0315 16:24:22.061189      15 desired_state_of_world_populator.go:130] Desired state populator starts to run
I0315 16:24:46.307] I0315 16:24:22.162420      15 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "vol1" (UniqueName: "fake/fake-device") pod "foo" (UID: "12345678") 
I0315 16:24:46.307] I0315 16:24:22.162553      15 reconciler.go:154] Reconciler: start to sync state
I0315 16:24:46.307] E0315 16:24:22.162858      15 nestedpendingoperations.go:267] Operation for "\"fake/fake-device\"" failed. No retries permitted until 2019-03-15 16:24:22.662625176 +0000 UTC m=+9.262478621 (durationBeforeRetry 500ms). Error: "Volume has not been added to the list of VolumesInUse in the node's volume status for volume \"vol1\" (UniqueName: \"fake/fake-device\") pod \"foo\" (UID: \"12345678\") "
I0315 16:24:46.308] I0315 16:24:22.664926      15 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "vol1" (UniqueName: "fake/fake-device") pod "foo" (UID: "12345678") 
I0315 16:24:46.308] I0315 16:24:22.665252      15 operation_generator.go:1341] Controller attach succeeded for volume "vol1" (UniqueName: "fake/fake-device") pod "foo" (UID: "12345678") device path: "fake/path"
I0315 16:24:46.308] I0315 16:24:22.765764      15 operation_generator.go:589] MountVolume.WaitForAttach entering for volume "vol1" (UniqueName: "fake/fake-device") pod "foo" (UID: "12345678") DevicePath "fake/path"
I0315 16:24:46.309] I0315 16:24:22.765919      15 operation_generator.go:598] MountVolume.WaitForAttach succeeded for volume "vol1" (UniqueName: "fake/fake-device") pod "foo" (UID: "12345678") DevicePath "/dev/sdb"
I0315 16:24:46.309] I0315 16:24:22.765992      15 operation_generator.go:625] MountVolume.MountDevice succeeded for volume "vol1" (UniqueName: "fake/fake-device") pod "foo" (UID: "12345678") device mount path ""
I0315 16:24:46.309] I0315 16:24:22.962205      15 volume_manager.go:254] Shutting down Kubelet Volume Manager
... skipping 7 lines ...
I0315 16:24:46.311] I0315 16:24:23.267435      15 operation_generator.go:625] MountVolume.MountDevice succeeded for volume "vol1" (UniqueName: "fake/fake-device") pod "foo" (UID: "12345678") device mount path ""
I0315 16:24:46.311] I0315 16:24:23.568916      15 reconciler.go:181] operationExecutor.UnmountVolume started for volume "vol1" (UniqueName: "fake/fake-device") pod "12345678" (UID: "12345678") 
I0315 16:24:46.311] I0315 16:24:23.569203      15 operation_generator.go:817] UnmountVolume.TearDown succeeded for volume "fake/fake-device" (OuterVolumeSpecName: "vol1") pod "12345678" (UID: "12345678"). InnerVolumeSpecName "vol1". PluginName "fake", VolumeGidValue ""
I0315 16:24:46.311] I0315 16:24:23.669833      15 reconciler.go:294] operationExecutor.UnmountDevice started for volume "vol1" (UniqueName: "fake/fake-device") on node "127.0.0.1" 
I0315 16:24:46.312] I0315 16:24:23.670095      15 operation_generator.go:917] UnmountDevice succeeded for volume "vol1" %!(EXTRA string=UnmountDevice succeeded for volume "vol1" (UniqueName: "fake/fake-device") on node "127.0.0.1" )
I0315 16:24:46.312] I0315 16:24:23.696106      15 kubelet_node_status.go:286] Controller attach/detach is disabled for this node; Kubelet will attach and detach volumes
I0315 16:24:46.312] E0315 16:24:23.696526      15 eviction_manager.go:247] eviction manager: failed to get summary stats: failed to get root cgroup stats: failed to get cgroup stats for "/": unexpected number of containers: 0
I0315 16:24:46.312] I0315 16:24:23.709633      15 kubelet_node_status.go:286] Controller attach/detach is disabled for this node; Kubelet will attach and detach volumes
I0315 16:24:46.312] E0315 16:24:23.710084      15 eviction_manager.go:247] eviction manager: failed to get summary stats: failed to get root cgroup stats: failed to get cgroup stats for "/": unexpected number of containers: 0
I0315 16:24:46.312] I0315 16:24:23.721469      15 kubelet_node_status.go:286] Controller attach/detach is disabled for this node; Kubelet will attach and detach volumes
I0315 16:24:46.313] E0315 16:24:23.722184      15 eviction_manager.go:247] eviction manager: failed to get summary stats: failed to get root cgroup stats: failed to get cgroup stats for "/": unexpected number of containers: 0
I0315 16:24:46.313] I0315 16:24:23.770476      15 reconciler.go:301] Volume detached for volume "vol1" (UniqueName: "fake/fake-device") on node "127.0.0.1" DevicePath "/dev/sdb"
I0315 16:24:46.313] I0315 16:24:23.816771      15 volume_manager.go:254] Shutting down Kubelet Volume Manager
I0315 16:24:46.313] W0315 16:24:23.819586      15 pod_container_deletor.go:75] Container "abc" not found in pod's containers
I0315 16:24:46.313] I0315 16:24:23.996875      15 runonce.go:88] Waiting for 1 pods
I0315 16:24:46.313] I0315 16:24:23.996983      15 runonce.go:123] pod "foo_new(12345678)" containers running
I0315 16:24:46.313] I0315 16:24:23.997173      15 runonce.go:102] started pod "foo_new(12345678)"
I0315 16:24:46.313] I0315 16:24:23.997266      15 runonce.go:108] 1 pods started
I0315 16:24:46.313] FAIL
I0315 16:24:46.314] ================================================================================
I0315 16:24:46.314] ==================== Test output for //pkg/kubelet:go_default_test:
I0315 16:24:46.314] I0315 16:24:24.431601      15 kubelet_node_status.go:286] Controller attach/detach is disabled for this node; Kubelet will attach and detach volumes
I0315 16:24:46.314] I0315 16:24:24.432499      15 kubelet_node_status.go:286] Controller attach/detach is disabled for this node; Kubelet will attach and detach volumes
I0315 16:24:46.314] E0315 16:24:24.432879      15 eviction_manager.go:247] eviction manager: failed to get summary stats: failed to get root cgroup stats: failed to get cgroup stats for "/": unexpected number of containers: 0
I0315 16:24:46.314] I0315 16:24:24.443804      15 kubelet_node_status.go:286] Controller attach/detach is disabled for this node; Kubelet will attach and detach volumes
I0315 16:24:46.314] I0315 16:24:24.444475      15 kubelet_node_status.go:286] Controller attach/detach is disabled for this node; Kubelet will attach and detach volumes
I0315 16:24:46.315] E0315 16:24:24.444847      15 eviction_manager.go:247] eviction manager: failed to get summary stats: failed to get root cgroup stats: failed to get cgroup stats for "/": unexpected number of containers: 0
I0315 16:24:46.315] I0315 16:24:24.453790      15 kubelet_node_status.go:286] Controller attach/detach is disabled for this node; Kubelet will attach and detach volumes
I0315 16:24:46.315] I0315 16:24:24.454645      15 kubelet_node_status.go:286] Controller attach/detach is disabled for this node; Kubelet will attach and detach volumes
I0315 16:24:46.315] E0315 16:24:24.455025      15 eviction_manager.go:247] eviction manager: failed to get summary stats: failed to get root cgroup stats: failed to get cgroup stats for "/": unexpected number of containers: 0
I0315 16:24:46.316] E0315 16:24:25.469247      15 kubelet_node_status.go:385] Error updating node status, will retry: error getting node "127.0.0.1": Get http://127.0.0.1:44109/api/v1/nodes/127.0.0.1?resourceVersion=0&timeout=1s: context deadline exceeded (Client.Timeout exceeded while awaiting headers)
I0315 16:24:46.316] E0315 16:24:26.470528      15 kubelet_node_status.go:385] Error updating node status, will retry: error getting node "127.0.0.1": Get http://127.0.0.1:44109/api/v1/nodes/127.0.0.1?timeout=1s: context deadline exceeded (Client.Timeout exceeded while awaiting headers)
I0315 16:24:46.316] E0315 16:24:27.471868      15 kubelet_node_status.go:385] Error updating node status, will retry: error getting node "127.0.0.1": Get http://127.0.0.1:44109/api/v1/nodes/127.0.0.1?timeout=1s: context deadline exceeded (Client.Timeout exceeded while awaiting headers)
I0315 16:24:46.316] E0315 16:24:28.473039      15 kubelet_node_status.go:385] Error updating node status, will retry: error getting node "127.0.0.1": Get http://127.0.0.1:44109/api/v1/nodes/127.0.0.1?timeout=1s: context deadline exceeded (Client.Timeout exceeded while awaiting headers)
I0315 16:24:46.317] E0315 16:24:29.474210      15 kubelet_node_status.go:385] Error updating node status, will retry: error getting node "127.0.0.1": Get http://127.0.0.1:44109/api/v1/nodes/127.0.0.1?timeout=1s: context deadline exceeded (Client.Timeout exceeded while awaiting headers)
I0315 16:24:46.317] I0315 16:24:29.476472      15 kubelet_node_status.go:286] Controller attach/detach is disabled for this node; Kubelet will attach and detach volumes
I0315 16:24:46.317] I0315 16:24:29.477060      15 kubelet_node_status.go:286] Controller attach/detach is disabled for this node; Kubelet will attach and detach volumes
I0315 16:24:46.317] E0315 16:24:29.477345      15 eviction_manager.go:247] eviction manager: failed to get summary stats: failed to get root cgroup stats: failed to get cgroup stats for "/": unexpected number of containers: 0
I0315 16:24:46.317] I0315 16:24:29.487718      15 setters.go:521] Node became not ready: {Type:Ready Status:False LastHeartbeatTime:2019-03-15 16:23:59.487483773 +0000 UTC m=-24.648908324 LastTransitionTime:2019-03-15 16:23:59.487483773 +0000 UTC m=-24.648908324 Reason:KubeletNotReady Message:container runtime is down.}
I0315 16:24:46.318] E0315 16:24:29.493122      15 kubelet.go:2157] Container runtime sanity check failed: injected runtime status error
I0315 16:24:46.318] E0315 16:24:29.498352      15 kubelet.go:2161] Container runtime status is nil
I0315 16:24:46.318] E0315 16:24:29.504254      15 kubelet.go:2170] Container runtime network not ready: <nil>
I0315 16:24:46.318] E0315 16:24:29.504367      15 kubelet.go:2182] Container runtime not ready: <nil>
I0315 16:24:46.318] E0315 16:24:29.509533      15 kubelet.go:2182] Container runtime not ready: RuntimeReady=false reason: message:
I0315 16:24:46.318] E0315 16:24:29.520486      15 kubelet.go:2170] Container runtime network not ready: NetworkReady=false reason: message:
I0315 16:24:46.318] I0315 16:24:29.520801      15 setters.go:521] Node became not ready: {Type:Ready Status:False LastHeartbeatTime:2019-03-15 16:24:29.493105926 +0000 UTC m=+5.356713807 LastTransitionTime:2019-03-15 16:24:29.493105926 +0000 UTC m=+5.356713807 Reason:KubeletNotReady Message:runtime network not ready: NetworkReady=false reason: message:}
I0315 16:24:46.319] E0315 16:24:29.528698      15 kubelet_node_status.go:385] Error updating node status, will retry: error getting node "127.0.0.1": nodes "127.0.0.1" not found
I0315 16:24:46.319] E0315 16:24:29.528808      15 kubelet_node_status.go:385] Error updating node status, will retry: error getting node "127.0.0.1": nodes "127.0.0.1" not found
I0315 16:24:46.319] E0315 16:24:29.528858      15 kubelet_node_status.go:385] Error updating node status, will retry: error getting node "127.0.0.1": nodes "127.0.0.1" not found
I0315 16:24:46.319] E0315 16:24:29.528934      15 kubelet_node_status.go:385] Error updating node status, will retry: error getting node "127.0.0.1": nodes "127.0.0.1" not found
I0315 16:24:46.319] E0315 16:24:29.528998      15 kubelet_node_status.go:385] Error updating node status, will retry: error getting node "127.0.0.1": nodes "127.0.0.1" not found
I0315 16:24:46.319] I0315 16:24:29.531456      15 kubelet_node_status.go:286] Controller attach/detach is disabled for this node; Kubelet will attach and detach volumes
I0315 16:24:46.320] I0315 16:24:29.531959      15 kubelet_node_status.go:286] Controller attach/detach is disabled for this node; Kubelet will attach and detach volumes
I0315 16:24:46.320] E0315 16:24:29.532256      15 eviction_manager.go:247] eviction manager: failed to get summary stats: failed to get root cgroup stats: failed to get cgroup stats for "/": unexpected number of containers: 0
I0315 16:24:46.320] I0315 16:24:29.560982      15 kubelet_network.go:77] Setting Pod CIDR:  -> 10.0.0.0/24
I0315 16:24:46.320] I0315 16:24:29.697800      15 kubelet_node_status.go:286] Controller attach/detach is disabled for this node; Kubelet will attach and detach volumes
I0315 16:24:46.320] I0315 16:24:29.698256      15 kubelet_node_status.go:72] Attempting to register node 127.0.0.1
I0315 16:24:46.320] I0315 16:24:29.698434      15 kubelet_node_status.go:114] Node 127.0.0.1 was previously registered
I0315 16:24:46.320] I0315 16:24:29.698546      15 kubelet_node_status.go:75] Successfully registered node 127.0.0.1
I0315 16:24:46.321] I0315 16:24:29.701053      15 kubelet_node_status.go:114] Node 127.0.0.1 was previously registered
I0315 16:24:46.321] I0315 16:24:29.702351      15 kubelet_node_status.go:114] Node 127.0.0.1 was previously registered
I0315 16:24:46.321] I0315 16:24:29.702408      15 kubelet_node_status.go:199] Controller attach-detach setting changed to false; updating existing Node
I0315 16:24:46.321] I0315 16:24:29.706117      15 kubelet_node_status.go:114] Node 127.0.0.1 was previously registered
I0315 16:24:46.321] I0315 16:24:29.706187      15 kubelet_node_status.go:202] Controller attach-detach setting changed to true; updating existing Node
I0315 16:24:46.321] E0315 16:24:29.709322      15 kubelet_node_status.go:94] Unable to register node "127.0.0.1" with API server: 
I0315 16:24:46.321] E0315 16:24:29.710602      15 kubelet_node_status.go:100] Unable to register node "127.0.0.1" with API server: error getting existing node: 
I0315 16:24:46.322] I0315 16:24:29.711686      15 kubelet_node_status.go:114] Node 127.0.0.1 was previously registered
I0315 16:24:46.322] I0315 16:24:29.711751      15 kubelet_node_status.go:199] Controller attach-detach setting changed to false; updating existing Node
I0315 16:24:46.322] E0315 16:24:29.712482      15 kubelet_node_status.go:124] Unable to reconcile node "127.0.0.1" with API server: error updating node: failed to patch status "{\"metadata\":{\"annotations\":null}}" for node "127.0.0.1": 
I0315 16:24:46.322] I0315 16:24:29.717282      15 kubelet_node_status.go:286] Controller attach/detach is disabled for this node; Kubelet will attach and detach volumes
I0315 16:24:46.322] I0315 16:24:29.717982      15 kubelet_node_status.go:286] Controller attach/detach is disabled for this node; Kubelet will attach and detach volumes
I0315 16:24:46.322] E0315 16:24:29.718415      15 eviction_manager.go:247] eviction manager: failed to get summary stats: failed to get root cgroup stats: failed to get cgroup stats for "/": unexpected number of containers: 0
I0315 16:24:46.323] I0315 16:24:29.730062      15 kubelet_node_status.go:137] Zero out resource test.com/resource1 capacity in existing node.
I0315 16:24:46.323] I0315 16:24:29.730239      15 kubelet_node_status.go:137] Zero out resource test.com/resource2 capacity in existing node.
I0315 16:24:46.323] I0315 16:24:29.833707      15 kubelet_node_status.go:286] Controller attach/detach is disabled for this node; Kubelet will attach and detach volumes
I0315 16:24:46.323] I0315 16:24:29.834081      15 kubelet_node_status.go:72] Attempting to register node 127.0.0.1
I0315 16:24:46.323] I0315 16:24:29.834239      15 kubelet_node_status.go:75] Successfully registered node 127.0.0.1
I0315 16:24:46.323] I0315 16:24:29.935132      15 kubelet_node_status.go:286] Controller attach/detach is disabled for this node; Kubelet will attach and detach volumes
... skipping 14 lines ...
I0315 16:24:46.326] I0315 16:24:30.015191      15 kubelet_resources.go:45] allocatable: map[cpu:{{6 0} {<nil>} 6 DecimalSI} memory:{{4294967296 0} {<nil>} 4Gi BinarySI}]
I0315 16:24:46.326] I0315 16:24:30.015477      15 kubelet_resources.go:45] allocatable: map[cpu:{{6 0} {<nil>} 6 DecimalSI} memory:{{4294967296 0} {<nil>} 4Gi BinarySI}]
I0315 16:24:46.326] I0315 16:24:30.015761      15 kubelet_resources.go:45] allocatable: map[cpu:{{6 0} {<nil>} 6 DecimalSI} memory:{{4294967296 0} {<nil>} 4Gi BinarySI}]
I0315 16:24:46.327] E0315 16:24:30.018616      15 kubelet.go:1879] Update channel is closed. Exiting the sync loop.
I0315 16:24:46.327] I0315 16:24:30.018706      15 kubelet.go:1806] Starting kubelet main sync loop.
I0315 16:24:46.327] E0315 16:24:30.018779      15 kubelet.go:1879] Update channel is closed. Exiting the sync loop.
I0315 16:24:46.327] W0315 16:24:30.040563      15 predicate.go:74] Failed to admit pod failedpod_foo(4) - Update plugin resources failed due to Allocation failed, which is unexpected.
I0315 16:24:46.327] E0315 16:24:30.045252      15 runtime.go:195] invalid container ID: ""
I0315 16:24:46.328] E0315 16:24:30.045505      15 runtime.go:195] invalid container ID: ""
I0315 16:24:46.328] W0315 16:24:30.052278      15 kubelet.go:1624] Deleting mirror pod "foo_ns(11111111)" because it is outdated
I0315 16:24:46.328] W0315 16:24:30.112527      15 kubelet_getters.go:284] Path "/tmp/kubelet_test.943948229/pods/pod1uid/volumes" does not exist
I0315 16:24:46.328] W0315 16:24:30.112687      15 kubelet_getters.go:284] Path "/tmp/kubelet_test.943948229/pods/pod1uid/volumes" does not exist
I0315 16:24:46.328] E0315 16:24:30.116731      15 kubelet_volumes.go:154] Orphaned pod "pod1uid" found, but volume paths are still present on disk : There were a total of 1 errors similar to this. Turn up verbosity to see them.
... skipping 239 lines ...
I0315 16:24:46.363]   k8s.io/kubernetes/pkg/kubelet/volumemanager.(*volumeManager).Run()
I0315 16:24:46.363]       pkg/kubelet/volumemanager/volume_manager.go:245 +0xbd
I0315 16:24:46.363] ==================
I0315 16:24:46.363] I0315 16:24:32.743226      15 reconciler.go:315] operationExecutor.DetachVolume started for volume "vol1" (UniqueName: "fake/fake-device") on node "127.0.0.1" 
I0315 16:24:46.363] I0315 16:24:32.743391      15 operation_generator.go:500] DetachVolume.Detach succeeded for volume "vol1" (UniqueName: "fake/fake-device") on node "127.0.0.1" 
I0315 16:24:46.363] I0315 16:24:32.789804      15 volume_manager.go:254] Shutting down Kubelet Volume Manager
I0315 16:24:46.363] --- FAIL: TestVolumeUnmountAndDetachControllerDisabled (0.85s)
I0315 16:24:46.364]     testing.go:809: race detected during execution of test
I0315 16:24:46.364] I0315 16:24:32.792641      15 volume_manager.go:248] Starting Kubelet Volume Manager
I0315 16:24:46.364] I0315 16:24:32.792714      15 desired_state_of_world_populator.go:130] Desired state populator starts to run
I0315 16:24:46.364] I0315 16:24:32.993823      15 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "vol1" (UniqueName: "fake/fake-device") pod "foo" (UID: "12345678") 
I0315 16:24:46.364] I0315 16:24:32.993940      15 reconciler.go:154] Reconciler: start to sync state
I0315 16:24:46.364] I0315 16:24:32.994136      15 operation_generator.go:1341] Controller attach succeeded for volume "vol1" (UniqueName: "fake/fake-device") pod "foo" (UID: "12345678") device path: "fake/path"
... skipping 14 lines ...
I0315 16:24:46.367] I0315 16:24:34.201217      15 reconciler.go:294] operationExecutor.UnmountDevice started for volume "vol1" (UniqueName: "fake/fake-device") on node "127.0.0.1" 
I0315 16:24:46.367] I0315 16:24:34.201571      15 operation_generator.go:917] UnmountDevice succeeded for volume "vol1" %!(EXTRA string=UnmountDevice succeeded for volume "vol1" (UniqueName: "fake/fake-device") on node "127.0.0.1" )
I0315 16:24:46.367] I0315 16:24:34.301738      15 reconciler.go:301] Volume detached for volume "vol1" (UniqueName: "fake/fake-device") on node "127.0.0.1" DevicePath "/dev/sdb"
I0315 16:24:46.368] I0315 16:24:34.399149      15 volume_manager.go:254] Shutting down Kubelet Volume Manager
I0315 16:24:46.368] W0315 16:24:34.401079      15 pod_container_deletor.go:75] Container "abc" not found in pod's containers
I0315 16:24:46.368] I0315 16:24:34.433392      15 kubelet_node_status.go:286] Controller attach/detach is disabled for this node; Kubelet will attach and detach volumes
I0315 16:24:46.368] E0315 16:24:34.433899      15 eviction_manager.go:247] eviction manager: failed to get summary stats: failed to get root cgroup stats: failed to get cgroup stats for "/": unexpected number of containers: 0
I0315 16:24:46.368] I0315 16:24:34.445289      15 kubelet_node_status.go:286] Controller attach/detach is disabled for this node; Kubelet will attach and detach volumes
I0315 16:24:46.368] E0315 16:24:34.445752      15 eviction_manager.go:247] eviction manager: failed to get summary stats: failed to get root cgroup stats: failed to get cgroup stats for "/": unexpected number of containers: 0
I0315 16:24:46.369] I0315 16:24:34.455627      15 kubelet_node_status.go:286] Controller attach/detach is disabled for this node; Kubelet will attach and detach volumes
I0315 16:24:46.369] E0315 16:24:34.456120      15 eviction_manager.go:247] eviction manager: failed to get summary stats: failed to get root cgroup stats: failed to get cgroup stats for "/": unexpected number of containers: 0
I0315 16:24:46.369] I0315 16:24:34.579811      15 runonce.go:88] Waiting for 1 pods
I0315 16:24:46.369] I0315 16:24:34.579964      15 runonce.go:123] pod "foo_new(12345678)" containers running
I0315 16:24:46.369] I0315 16:24:34.580200      15 runonce.go:102] started pod "foo_new(12345678)"
I0315 16:24:46.369] I0315 16:24:34.580270      15 runonce.go:108] 1 pods started
I0315 16:24:46.369] FAIL
I0315 16:24:46.369] ================================================================================
I0315 16:24:46.370] ==================== Test output for //pkg/kubelet:go_default_test:
I0315 16:24:46.370] I0315 16:24:35.018962      15 kubelet_node_status.go:286] Controller attach/detach is disabled for this node; Kubelet will attach and detach volumes
I0315 16:24:46.370] I0315 16:24:35.020040      15 kubelet_node_status.go:286] Controller attach/detach is disabled for this node; Kubelet will attach and detach volumes
I0315 16:24:46.370] E0315 16:24:35.021655      15 eviction_manager.go:247] eviction manager: failed to get summary stats: failed to get root cgroup stats: failed to get cgroup stats for "/": unexpected number of containers: 0
I0315 16:24:46.371] I0315 16:24:35.034050      15 kubelet_node_status.go:286] Controller attach/detach is disabled for this node; Kubelet will attach and detach volumes
I0315 16:24:46.371] I0315 16:24:35.034985      15 kubelet_node_status.go:286] Controller attach/detach is disabled for this node; Kubelet will attach and detach volumes
I0315 16:24:46.371] E0315 16:24:35.035310      15 eviction_manager.go:247] eviction manager: failed to get summary stats: failed to get root cgroup stats: failed to get cgroup stats for "/": unexpected number of containers: 0
I0315 16:24:46.371] I0315 16:24:35.044981      15 kubelet_node_status.go:286] Controller attach/detach is disabled for this node; Kubelet will attach and detach volumes
I0315 16:24:46.371] I0315 16:24:35.045722      15 kubelet_node_status.go:286] Controller attach/detach is disabled for this node; Kubelet will attach and detach volumes
I0315 16:24:46.372] E0315 16:24:35.046003      15 eviction_manager.go:247] eviction manager: failed to get summary stats: failed to get root cgroup stats: failed to get cgroup stats for "/": unexpected number of containers: 0
I0315 16:24:46.372] E0315 16:24:36.058550      15 kubelet_node_status.go:385] Error updating node status, will retry: error getting node "127.0.0.1": Get http://127.0.0.1:40623/api/v1/nodes/127.0.0.1?resourceVersion=0&timeout=1s: context deadline exceeded (Client.Timeout exceeded while awaiting headers)
I0315 16:24:46.372] E0315 16:24:37.060198      15 kubelet_node_status.go:385] Error updating node status, will retry: error getting node "127.0.0.1": Get http://127.0.0.1:40623/api/v1/nodes/127.0.0.1?timeout=1s: context deadline exceeded (Client.Timeout exceeded while awaiting headers)
I0315 16:24:46.372] E0315 16:24:38.061760      15 kubelet_node_status.go:385] Error updating node status, will retry: error getting node "127.0.0.1": Get http://127.0.0.1:40623/api/v1/nodes/127.0.0.1?timeout=1s: context deadline exceeded (Client.Timeout exceeded while awaiting headers)
I0315 16:24:46.373] E0315 16:24:39.063024      15 kubelet_node_status.go:385] Error updating node status, will retry: error getting node "127.0.0.1": Get http://127.0.0.1:40623/api/v1/nodes/127.0.0.1?timeout=1s: context deadline exceeded (Client.Timeout exceeded while awaiting headers)
I0315 16:24:46.373] E0315 16:24:40.064290      15 kubelet_node_status.go:385] Error updating node status, will retry: error getting node "127.0.0.1": Get http://127.0.0.1:40623/api/v1/nodes/127.0.0.1?timeout=1s: context deadline exceeded (Client.Timeout exceeded while awaiting headers)
I0315 16:24:46.373] I0315 16:24:40.067937      15 kubelet_node_status.go:286] Controller attach/detach is disabled for this node; Kubelet will attach and detach volumes
I0315 16:24:46.374] I0315 16:24:40.068532      15 kubelet_node_status.go:286] Controller attach/detach is disabled for this node; Kubelet will attach and detach volumes
I0315 16:24:46.374] E0315 16:24:40.069036      15 eviction_manager.go:247] eviction manager: failed to get summary stats: failed to get root cgroup stats: failed to get cgroup stats for "/": unexpected number of containers: 0
I0315 16:24:46.374] I0315 16:24:40.080113      15 setters.go:521] Node became not ready: {Type:Ready Status:False LastHeartbeatTime:2019-03-15 16:24:10.079881863 +0000 UTC m=-24.636034802 LastTransitionTime:2019-03-15 16:24:10.079881863 +0000 UTC m=-24.636034802 Reason:KubeletNotReady Message:container runtime is down.}
I0315 16:24:46.374] E0315 16:24:40.088063      15 kubelet.go:2157] Container runtime sanity check failed: injected runtime status error
I0315 16:24:46.374] E0315 16:24:40.093610      15 kubelet.go:2161] Container runtime status is nil
I0315 16:24:46.375] E0315 16:24:40.099129      15 kubelet.go:2170] Container runtime network not ready: <nil>
I0315 16:24:46.375] E0315 16:24:40.099275      15 kubelet.go:2182] Container runtime not ready: <nil>
I0315 16:24:46.375] E0315 16:24:40.104569      15 kubelet.go:2182] Container runtime not ready: RuntimeReady=false reason: message:
I0315 16:24:46.375] E0315 16:24:40.114988      15 kubelet.go:2170] Container runtime network not ready: NetworkReady=false reason: message:
I0315 16:24:46.376] I0315 16:24:40.115305      15 setters.go:521] Node became not ready: {Type:Ready Status:False LastHeartbeatTime:2019-03-15 16:24:40.088044678 +0000 UTC m=+5.372128018 LastTransitionTime:2019-03-15 16:24:40.088044678 +0000 UTC m=+5.372128018 Reason:KubeletNotReady Message:runtime network not ready: NetworkReady=false reason: message:}
I0315 16:24:46.376] E0315 16:24:40.123113      15 kubelet_node_status.go:385] Error updating node status, will retry: error getting node "127.0.0.1": nodes "127.0.0.1" not found
I0315 16:24:46.376] E0315 16:24:40.123240      15 kubelet_node_status.go:385] Error updating node status, will retry: error getting node "127.0.0.1": nodes "127.0.0.1" not found
I0315 16:24:46.376] E0315 16:24:40.123298      15 kubelet_node_status.go:385] Error updating node status, will retry: error getting node "127.0.0.1": nodes "127.0.0.1" not found
I0315 16:24:46.377] E0315 16:24:40.123370      15 kubelet_node_status.go:385] Error updating node status, will retry: error getting node "127.0.0.1": nodes "127.0.0.1" not found
I0315 16:24:46.377] E0315 16:24:40.123419      15 kubelet_node_status.go:385] Error updating node status, will retry: error getting node "127.0.0.1": nodes "127.0.0.1" not found
I0315 16:24:46.377] I0315 16:24:40.125389      15 kubelet_node_status.go:286] Controller attach/detach is disabled for this node; Kubelet will attach and detach volumes
I0315 16:24:46.377] I0315 16:24:40.126224      15 kubelet_node_status.go:286] Controller attach/detach is disabled for this node; Kubelet will attach and detach volumes
I0315 16:24:46.377] E0315 16:24:40.126718      15 eviction_manager.go:247] eviction manager: failed to get summary stats: failed to get root cgroup stats: failed to get cgroup stats for "/": unexpected number of containers: 0
I0315 16:24:46.377] I0315 16:24:40.149275      15 kubelet_network.go:77] Setting Pod CIDR:  -> 10.0.0.0/24
I0315 16:24:46.378] I0315 16:24:40.290083      15 kubelet_node_status.go:286] Controller attach/detach is disabled for this node; Kubelet will attach and detach volumes
I0315 16:24:46.378] I0315 16:24:40.290592      15 kubelet_node_status.go:72] Attempting to register node 127.0.0.1
I0315 16:24:46.378] I0315 16:24:40.290854      15 kubelet_node_status.go:114] Node 127.0.0.1 was previously registered
I0315 16:24:46.378] I0315 16:24:40.290935      15 kubelet_node_status.go:75] Successfully registered node 127.0.0.1
I0315 16:24:46.378] I0315 16:24:40.293732      15 kubelet_node_status.go:114] Node 127.0.0.1 was previously registered
I0315 16:24:46.379] I0315 16:24:40.294428      15 kubelet_node_status.go:114] Node 127.0.0.1 was previously registered
I0315 16:24:46.379] I0315 16:24:40.294488      15 kubelet_node_status.go:199] Controller attach-detach setting changed to false; updating existing Node
I0315 16:24:46.379] I0315 16:24:40.298020      15 kubelet_node_status.go:114] Node 127.0.0.1 was previously registered
I0315 16:24:46.379] I0315 16:24:40.298079      15 kubelet_node_status.go:202] Controller attach-detach setting changed to true; updating existing Node
I0315 16:24:46.379] E0315 16:24:40.300731      15 kubelet_node_status.go:94] Unable to register node "127.0.0.1" with API server: 
I0315 16:24:46.379] E0315 16:24:40.301372      15 kubelet_node_status.go:100] Unable to register node "127.0.0.1" with API server: error getting existing node: 
I0315 16:24:46.379] I0315 16:24:40.302950      15 kubelet_node_status.go:114] Node 127.0.0.1 was previously registered
I0315 16:24:46.380] I0315 16:24:40.303052      15 kubelet_node_status.go:199] Controller attach-detach setting changed to false; updating existing Node
I0315 16:24:46.380] E0315 16:24:40.305037      15 kubelet_node_status.go:124] Unable to reconcile node "127.0.0.1" with API server: error updating node: failed to patch status "{\"metadata\":{\"annotations\":null}}" for node "127.0.0.1": 
I0315 16:24:46.380] I0315 16:24:40.310153      15 kubelet_node_status.go:286] Controller attach/detach is disabled for this node; Kubelet will attach and detach volumes
I0315 16:24:46.380] I0315 16:24:40.310719      15 kubelet_node_status.go:286] Controller attach/detach is disabled for this node; Kubelet will attach and detach volumes
I0315 16:24:46.380] E0315 16:24:40.311015      15 eviction_manager.go:247] eviction manager: failed to get summary stats: failed to get root cgroup stats: failed to get cgroup stats for "/": unexpected number of containers: 0
I0315 16:24:46.380] I0315 16:24:40.321287      15 kubelet_node_status.go:137] Zero out resource test.com/resource1 capacity in existing node.
I0315 16:24:46.381] I0315 16:24:40.321518      15 kubelet_node_status.go:137] Zero out resource test.com/resource2 capacity in existing node.
I0315 16:24:46.381] I0315 16:24:40.424943      15 kubelet_node_status.go:286] Controller attach/detach is disabled for this node; Kubelet will attach and detach volumes
I0315 16:24:46.381] I0315 16:24:40.425304      15 kubelet_node_status.go:72] Attempting to register node 127.0.0.1
I0315 16:24:46.381] I0315 16:24:40.425447      15 kubelet_node_status.go:75] Successfully registered node 127.0.0.1
I0315 16:24:46.381] I0315 16:24:40.526309      15 kubelet_node_status.go:286] Controller attach/detach is disabled for this node; Kubelet will attach and detach volumes
... skipping 14 lines ...
I0315 16:24:46.384] I0315 16:24:40.619654      15 kubelet_resources.go:45] allocatable: map[cpu:{{6 0} {<nil>} 6 DecimalSI} memory:{{4294967296 0} {<nil>} 4Gi BinarySI}]
I0315 16:24:46.384] I0315 16:24:40.619940      15 kubelet_resources.go:45] allocatable: map[cpu:{{6 0} {<nil>} 6 DecimalSI} memory:{{4294967296 0} {<nil>} 4Gi BinarySI}]
I0315 16:24:46.384] I0315 16:24:40.620214      15 kubelet_resources.go:45] allocatable: map[cpu:{{6 0} {<nil>} 6 DecimalSI} memory:{{4294967296 0} {<nil>} 4Gi BinarySI}]
I0315 16:24:46.384] E0315 16:24:40.622623      15 kubelet.go:1879] Update channel is closed. Exiting the sync loop.
I0315 16:24:46.384] I0315 16:24:40.622717      15 kubelet.go:1806] Starting kubelet main sync loop.
I0315 16:24:46.384] E0315 16:24:40.622821      15 kubelet.go:1879] Update channel is closed. Exiting the sync loop.
I0315 16:24:46.384] W0315 16:24:40.649438      15 predicate.go:74] Failed to admit pod failedpod_foo(4) - Update plugin resources failed due to Allocation failed, which is unexpected.
I0315 16:24:46.385] E0315 16:24:40.653578      15 runtime.go:195] invalid container ID: ""
I0315 16:24:46.385] E0315 16:24:40.653724      15 runtime.go:195] invalid container ID: ""
I0315 16:24:46.385] W0315 16:24:40.660919      15 kubelet.go:1624] Deleting mirror pod "foo_ns(11111111)" because it is outdated
I0315 16:24:46.385] W0315 16:24:40.723298      15 kubelet_getters.go:284] Path "/tmp/kubelet_test.911631435/pods/pod1uid/volumes" does not exist
I0315 16:24:46.385] W0315 16:24:40.723455      15 kubelet_getters.go:284] Path "/tmp/kubelet_test.911631435/pods/pod1uid/volumes" does not exist
I0315 16:24:46.385] E0315 16:24:40.723528      15 kubelet_volumes.go:154] Orphaned pod "pod1uid" found, but volume subpaths are still present on disk : There were a total of 1 errors similar to this. Turn up verbosity to see them.
... skipping 239 lines ...
I0315 16:24:46.419]   k8s.io/kubernetes/pkg/kubelet/volumemanager.(*volumeManager).Run()
I0315 16:24:46.419]       pkg/kubelet/volumemanager/volume_manager.go:245 +0xbd
I0315 16:24:46.419] ==================
I0315 16:24:46.420] I0315 16:24:43.354750      15 reconciler.go:315] operationExecutor.DetachVolume started for volume "vol1" (UniqueName: "fake/fake-device") on node "127.0.0.1" 
I0315 16:24:46.420] I0315 16:24:43.354989      15 operation_generator.go:500] DetachVolume.Detach succeeded for volume "vol1" (UniqueName: "fake/fake-device") on node "127.0.0.1" 
I0315 16:24:46.420] I0315 16:24:43.401621      15 volume_manager.go:254] Shutting down Kubelet Volume Manager
I0315 16:24:46.420] --- FAIL: TestVolumeUnmountAndDetachControllerDisabled (0.86s)
I0315 16:24:46.420]     testing.go:809: race detected during execution of test
I0315 16:24:46.420] I0315 16:24:43.405929      15 volume_manager.go:248] Starting Kubelet Volume Manager
I0315 16:24:46.421] I0315 16:24:43.406085      15 desired_state_of_world_populator.go:130] Desired state populator starts to run
I0315 16:24:46.421] I0315 16:24:43.607320      15 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "vol1" (UniqueName: "fake/fake-device") pod "foo" (UID: "12345678") 
I0315 16:24:46.421] I0315 16:24:43.607489      15 reconciler.go:154] Reconciler: start to sync state
I0315 16:24:46.421] I0315 16:24:43.607599      15 operation_generator.go:1341] Controller attach succeeded for volume "vol1" (UniqueName: "fake/fake-device") pod "foo" (UID: "12345678") device path: "fake/path"
... skipping 2 lines ...
I0315 16:24:46.422] I0315 16:24:44.216983      15 operation_generator.go:625] MountVolume.MountDevice succeeded for volume "vol1" (UniqueName: "fake/fake-device") pod "foo" (UID: "12345678") device mount path ""
I0315 16:24:46.422] I0315 16:24:44.306539      15 volume_manager.go:254] Shutting down Kubelet Volume Manager
I0315 16:24:46.422] I0315 16:24:44.309816      15 volume_manager.go:248] Starting Kubelet Volume Manager
I0315 16:24:46.422] I0315 16:24:44.309994      15 desired_state_of_world_populator.go:130] Desired state populator starts to run
I0315 16:24:46.423] I0315 16:24:44.411844      15 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "vol1" (UniqueName: "fake/fake-device") pod "foo" (UID: "12345678") 
I0315 16:24:46.423] I0315 16:24:44.411947      15 reconciler.go:154] Reconciler: start to sync state
I0315 16:24:46.423] E0315 16:24:44.412264      15 nestedpendingoperations.go:267] Operation for "\"fake/fake-device\"" failed. No retries permitted until 2019-03-15 16:24:44.912098897 +0000 UTC m=+10.196182240 (durationBeforeRetry 500ms). Error: "Volume has not been added to the list of VolumesInUse in the node's volume status for volume \"vol1\" (UniqueName: \"fake/fake-device\") pod \"foo\" (UID: \"12345678\") "
I0315 16:24:46.424] I0315 16:24:44.914531      15 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "vol1" (UniqueName: "fake/fake-device") pod "foo" (UID: "12345678") 
I0315 16:24:46.424] I0315 16:24:44.914946      15 operation_generator.go:1341] Controller attach succeeded for volume "vol1" (UniqueName: "fake/fake-device") pod "foo" (UID: "12345678") device path: "fake/path"
I0315 16:24:46.424] I0315 16:24:45.015935      15 operation_generator.go:589] MountVolume.WaitForAttach entering for volume "vol1" (UniqueName: "fake/fake-device") pod "foo" (UID: "12345678") DevicePath "fake/path"
I0315 16:24:46.424] I0315 16:24:45.016075      15 operation_generator.go:598] MountVolume.WaitForAttach succeeded for volume "vol1" (UniqueName: "fake/fake-device") pod "foo" (UID: "12345678") DevicePath "/dev/sdb"
I0315 16:24:46.425] I0315 16:24:45.016278      15 operation_generator.go:625] MountVolume.MountDevice succeeded for volume "vol1" (UniqueName: "fake/fake-device") pod "foo" (UID: "12345678") device mount path ""
I0315 16:24:46.425] I0315 16:24:45.022135      15 kubelet_node_status.go:286] Controller attach/detach is disabled for this node; Kubelet will attach and detach volumes
I0315 16:24:46.425] E0315 16:24:45.022630      15 eviction_manager.go:247] eviction manager: failed to get summary stats: failed to get root cgroup stats: failed to get cgroup stats for "/": unexpected number of containers: 0
I0315 16:24:46.425] I0315 16:24:45.035763      15 kubelet_node_status.go:286] Controller attach/detach is disabled for this node; Kubelet will attach and detach volumes
I0315 16:24:46.426] E0315 16:24:45.036236      15 eviction_manager.go:247] eviction manager: failed to get summary stats: failed to get root cgroup stats: failed to get cgroup stats for "/": unexpected number of containers: 0
I0315 16:24:46.426] I0315 16:24:45.046527      15 kubelet_node_status.go:286] Controller attach/detach is disabled for this node; Kubelet will attach and detach volumes
I0315 16:24:46.426] E0315 16:24:45.047004      15 eviction_manager.go:247] eviction manager: failed to get summary stats: failed to get root cgroup stats: failed to get cgroup stats for "/": unexpected number of containers: 0
I0315 16:24:46.426] I0315 16:24:45.217180      15 reconciler.go:181] operationExecutor.UnmountVolume started for volume "vol1" (UniqueName: "fake/fake-device") pod "12345678" (UID: "12345678") 
I0315 16:24:46.427] I0315 16:24:45.217328      15 operation_generator.go:817] UnmountVolume.TearDown succeeded for volume "fake/fake-device" (OuterVolumeSpecName: "vol1") pod "12345678" (UID: "12345678"). InnerVolumeSpecName "vol1". PluginName "fake", VolumeGidValue ""
I0315 16:24:46.427] I0315 16:24:45.318113      15 reconciler.go:294] operationExecutor.UnmountDevice started for volume "vol1" (UniqueName: "fake/fake-device") on node "127.0.0.1" 
I0315 16:24:46.427] I0315 16:24:45.318428      15 operation_generator.go:917] UnmountDevice succeeded for volume "vol1" %!(EXTRA string=UnmountDevice succeeded for volume "vol1" (UniqueName: "fake/fake-device") on node "127.0.0.1" )
I0315 16:24:46.428] I0315 16:24:45.418859      15 reconciler.go:301] Volume detached for volume "vol1" (UniqueName: "fake/fake-device") on node "127.0.0.1" DevicePath "/dev/sdb"
I0315 16:24:46.428] I0315 16:24:45.462132      15 volume_manager.go:254] Shutting down Kubelet Volume Manager
I0315 16:24:46.428] W0315 16:24:45.463998      15 pod_container_deletor.go:75] Container "abc" not found in pod's containers
I0315 16:24:46.428] I0315 16:24:45.591845      15 runonce.go:88] Waiting for 1 pods
I0315 16:24:46.428] I0315 16:24:45.592032      15 runonce.go:123] pod "foo_new(12345678)" containers running
I0315 16:24:46.429] I0315 16:24:45.592210      15 runonce.go:102] started pod "foo_new(12345678)"
I0315 16:24:46.429] I0315 16:24:45.592268      15 runonce.go:108] 1 pods started
I0315 16:24:46.429] FAIL
I0315 16:24:46.429] ================================================================================
W0315 16:24:52.257] INFO: Elapsed time: 454.621s, Critical Path: 95.82s
W0315 16:24:52.258] INFO: 9134 processes: 8830 remote cache hit, 304 linux-sandbox.
W0315 16:24:52.258] INFO: Build completed, 1 test FAILED, 10830 total actions
I0315 16:24:52.358] //cluster:clientbin_test                                        (cached) PASSED in 0.0s
I0315 16:24:52.359] //cluster:common_test                                           (cached) PASSED in 0.0s
I0315 16:24:52.359] //cluster:kube-util_test                                        (cached) PASSED in 0.0s
I0315 16:24:52.359] //cluster/gce/cos:go_default_test                               (cached) PASSED in 0.0s
I0315 16:24:52.359] //cluster/gce/custom:go_default_test                            (cached) PASSED in 0.0s
I0315 16:24:52.359] //cluster/gce/gci:go_default_test                               (cached) PASSED in 0.0s
... skipping 807 lines ...
I0315 16:24:52.457] //pkg/kubelet/volumemanager:go_default_test                              PASSED in 6.3s
I0315 16:24:52.458] //pkg/kubelet/volumemanager/cache:go_default_test                        PASSED in 1.7s
I0315 16:24:52.458] //pkg/kubelet/volumemanager/metrics:go_default_test                      PASSED in 2.7s
I0315 16:24:52.458] //pkg/kubelet/volumemanager/populator:go_default_test                    PASSED in 2.3s
I0315 16:24:52.458] //pkg/kubelet/volumemanager/reconciler:go_default_test                   PASSED in 11.4s
I0315 16:24:52.458] //pkg/volume/util/operationexecutor:go_default_test                      PASSED in 46.8s
I0315 16:24:52.458] //pkg/kubelet:go_default_test                                            FAILED in 3 out of 3 in 11.6s
I0315 16:24:52.458]   Stats over 3 runs: max = 11.6s, min = 10.6s, avg = 11.2s, dev = 0.4s
I0315 16:24:52.459]   /bazel-scratch/.cache/bazel/_bazel_root/e9f728bbd90b3fba632eb31b20e1dacd/execroot/__main__/bazel-out/k8-fastbuild/testlogs/pkg/kubelet/go_default_test/test_attempts/attempt_1.log
I0315 16:24:52.459]   /bazel-scratch/.cache/bazel/_bazel_root/e9f728bbd90b3fba632eb31b20e1dacd/execroot/__main__/bazel-out/k8-fastbuild/testlogs/pkg/kubelet/go_default_test/test_attempts/attempt_2.log
I0315 16:24:52.459]   /bazel-scratch/.cache/bazel/_bazel_root/e9f728bbd90b3fba632eb31b20e1dacd/execroot/__main__/bazel-out/k8-fastbuild/testlogs/pkg/kubelet/go_default_test/test.log
I0315 16:24:52.460] 
I0315 16:24:52.460] Executed 14 out of 820 tests: 819 tests pass and 1 fails locally.
I0315 16:24:52.460] There were tests whose specified size is too big. Use the --test_verbose_timeout_warnings command line option to see which ones these are.
W0315 16:24:52.561] INFO: Build completed, 1 test FAILED, 10830 total actions
W0315 16:24:54.586] Run: ('/workspace/./test-infra/jenkins/../scenarios/../hack/coalesce.py',)
I0315 16:25:09.612] Build passed, tests failed or timed out
E0315 16:25:09.618] Command failed
I0315 16:25:09.619] process 523 exited with code 3 after 8.7m
E0315 16:25:09.619] FAIL: pull-kubernetes-bazel-test
I0315 16:25:09.619] Call:  gcloud auth activate-service-account --key-file=/etc/service-account/service-account.json
W0315 16:25:11.637] Activated service account credentials for: [pr-kubekins@kubernetes-jenkins-pull.iam.gserviceaccount.com]
I0315 16:25:11.692] process 8712 exited with code 0 after 0.0m
I0315 16:25:11.693] Call:  gcloud config get-value account
I0315 16:25:12.003] process 8727 exited with code 0 after 0.0m
I0315 16:25:12.004] Will upload results to gs://kubernetes-jenkins/pr-logs using pr-kubekins@kubernetes-jenkins-pull.iam.gserviceaccount.com
I0315 16:25:12.004] Upload result and artifacts...
I0315 16:25:12.004] Gubernator results at https://gubernator.k8s.io/build/kubernetes-jenkins/pr-logs/pull/75384/pull-kubernetes-bazel-test/79627
I0315 16:25:12.004] Call:  gsutil ls gs://kubernetes-jenkins/pr-logs/pull/75384/pull-kubernetes-bazel-test/79627/artifacts
W0315 16:25:13.437] CommandException: One or more URLs matched no objects.
E0315 16:25:13.572] Command failed
I0315 16:25:13.573] process 8739 exited with code 1 after 0.0m
W0315 16:25:13.573] Remote dir gs://kubernetes-jenkins/pr-logs/pull/75384/pull-kubernetes-bazel-test/79627/artifacts not exist yet
I0315 16:25:13.573] Call:  gsutil -m -q -o GSUtil:use_magicfile=True cp -r -c -z log,txt,xml /workspace/_artifacts gs://kubernetes-jenkins/pr-logs/pull/75384/pull-kubernetes-bazel-test/79627/artifacts
I0315 16:25:16.888] process 8881 exited with code 0 after 0.1m
W0315 16:25:16.888] metadata path /workspace/_artifacts/metadata.json does not exist
W0315 16:25:16.888] metadata not found or invalid, init with empty metadata
... skipping 23 lines ...