PR tedyu: Add tedyu to reviewers for pkg/controller
Result: FAILURE
Tests: 1 failed / 940 succeeded
Started: 2020-05-22 15:42
Elapsed: 10m26s
Revision: e63aae58f64e25af4a941a978a7755a2e48e2964
Refs: 81057
resultstore: https://source.cloud.google.com/results/invocations/90914876-cd41-4588-ac2b-a9bddc9993d2/targets/test

Test Failures


//pkg/kubelet:go_default_test 0.00s

bazel test //pkg/kubelet:go_default_test
exec ${PAGER:-/usr/bin/less} "$0" || exit 1
Executing tests from //pkg/kubelet:go_default_test
-----------------------------------------------------------------------------
=== RUN   TestActiveDeadlineHandler
--- PASS: TestActiveDeadlineHandler (0.00s)
=== RUN   TestKubeletDirs
--- PASS: TestKubeletDirs (0.00s)
=== RUN   TestGetIPTablesMark
--- PASS: TestGetIPTablesMark (0.00s)
=== RUN   TestUpdateNewNodeStatus
=== RUN   TestUpdateNewNodeStatus/5_image_limit
E0522 15:51:22.582170      25 eviction_manager.go:255] eviction manager: failed to get summary stats: failed to get root cgroup stats: failed to get cgroup stats for "/": unexpected number of containers: 0
I0522 15:51:22.583108      25 plugin_manager.go:114] Starting Kubelet Plugin Manager
=== RUN   TestUpdateNewNodeStatus/no_image_limit
I0522 15:51:22.602666      25 plugin_manager.go:114] Starting Kubelet Plugin Manager
E0522 15:51:22.603310      25 eviction_manager.go:255] eviction manager: failed to get summary stats: failed to get root cgroup stats: failed to get cgroup stats for "/": unexpected number of containers: 0
--- PASS: TestUpdateNewNodeStatus (0.03s)
    --- PASS: TestUpdateNewNodeStatus/5_image_limit (0.01s)
    --- PASS: TestUpdateNewNodeStatus/no_image_limit (0.01s)
=== RUN   TestUpdateExistingNodeStatus
E0522 15:51:22.610989      25 eviction_manager.go:255] eviction manager: failed to get summary stats: failed to get root cgroup stats: failed to get cgroup stats for "/": unexpected number of containers: 0
I0522 15:51:22.612058      25 plugin_manager.go:114] Starting Kubelet Plugin Manager
--- PASS: TestUpdateExistingNodeStatus (0.01s)
=== RUN   TestUpdateExistingNodeStatusTimeout
E0522 15:51:23.630205      25 kubelet_node_status.go:397] Error updating node status, will retry: error getting node "127.0.0.1": Get http://127.0.0.1:46647/api/v1/nodes/127.0.0.1?resourceVersion=0&timeout=1s: context deadline exceeded (Client.Timeout exceeded while awaiting headers)
E0522 15:51:24.631378      25 kubelet_node_status.go:397] Error updating node status, will retry: error getting node "127.0.0.1": Get http://127.0.0.1:46647/api/v1/nodes/127.0.0.1?timeout=1s: context deadline exceeded (Client.Timeout exceeded while awaiting headers)
E0522 15:51:25.632534      25 kubelet_node_status.go:397] Error updating node status, will retry: error getting node "127.0.0.1": Get http://127.0.0.1:46647/api/v1/nodes/127.0.0.1?timeout=1s: context deadline exceeded (Client.Timeout exceeded while awaiting headers)
E0522 15:51:26.633789      25 kubelet_node_status.go:397] Error updating node status, will retry: error getting node "127.0.0.1": Get http://127.0.0.1:46647/api/v1/nodes/127.0.0.1?timeout=1s: context deadline exceeded (Client.Timeout exceeded while awaiting headers)
E0522 15:51:27.634749      25 kubelet_node_status.go:397] Error updating node status, will retry: error getting node "127.0.0.1": Get http://127.0.0.1:46647/api/v1/nodes/127.0.0.1?timeout=1s: context deadline exceeded (Client.Timeout exceeded while awaiting headers)
--- PASS: TestUpdateExistingNodeStatusTimeout (5.01s)
    kubelet_node_status_test.go:519: accepted connection
    kubelet_node_status_test.go:519: accepted connection
    kubelet_node_status_test.go:519: accepted connection
    kubelet_node_status_test.go:519: accepted connection
    kubelet_node_status_test.go:519: accepted connection
    kubelet_node_status_test.go:516: accept tcp 127.0.0.1:46647: use of closed network connection
=== RUN   TestUpdateNodeStatusWithRuntimeStateError
E0522 15:51:27.638172      25 eviction_manager.go:255] eviction manager: failed to get summary stats: failed to get root cgroup stats: failed to get cgroup stats for "/": unexpected number of containers: 0
I0522 15:51:27.639099      25 plugin_manager.go:114] Starting Kubelet Plugin Manager
I0522 15:51:27.654063      25 setters.go:555] Node became not ready: {Type:Ready Status:False LastHeartbeatTime:2020-05-22 15:50:57.653892989 +0000 UTC m=-24.604555141 LastTransitionTime:2020-05-22 15:50:57.653892989 +0000 UTC m=-24.604555141 Reason:KubeletNotReady Message:container runtime is down}
E0522 15:51:27.667033      25 kubelet.go:2097] Container runtime sanity check failed: injected runtime status error
E0522 15:51:27.673601      25 kubelet.go:2101] Container runtime status is nil
E0522 15:51:27.679748      25 kubelet.go:2110] Container runtime network not ready: <nil>
E0522 15:51:27.679862      25 kubelet.go:2121] Container runtime not ready: <nil>
E0522 15:51:27.686192      25 kubelet.go:2121] Container runtime not ready: RuntimeReady=false reason: message:
E0522 15:51:27.698727      25 kubelet.go:2110] Container runtime network not ready: NetworkReady=false reason: message:
I0522 15:51:27.699092      25 setters.go:555] Node became not ready: {Type:Ready Status:False LastHeartbeatTime:2020-05-22 15:51:27.667016766 +0000 UTC m=+5.408568647 LastTransitionTime:2020-05-22 15:51:27.667016766 +0000 UTC m=+5.408568647 Reason:KubeletNotReady Message:runtime network not ready: NetworkReady=false reason: message:}
--- PASS: TestUpdateNodeStatusWithRuntimeStateError (0.07s)
=== RUN   TestUpdateNodeStatusError
E0522 15:51:27.707180      25 kubelet_node_status.go:397] Error updating node status, will retry: error getting node "127.0.0.1": nodes "127.0.0.1" not found
E0522 15:51:27.707266      25 kubelet_node_status.go:397] Error updating node status, will retry: error getting node "127.0.0.1": nodes "127.0.0.1" not found
E0522 15:51:27.707403      25 kubelet_node_status.go:397] Error updating node status, will retry: error getting node "127.0.0.1": nodes "127.0.0.1" not found
E0522 15:51:27.707512      25 kubelet_node_status.go:397] Error updating node status, will retry: error getting node "127.0.0.1": nodes "127.0.0.1" not found
E0522 15:51:27.707637      25 kubelet_node_status.go:397] Error updating node status, will retry: error getting node "127.0.0.1": nodes "127.0.0.1" not found
--- PASS: TestUpdateNodeStatusError (0.00s)
=== RUN   TestUpdateNodeStatusWithLease
E0522 15:51:27.715436      25 eviction_manager.go:255] eviction manager: failed to get summary stats: failed to get root cgroup stats: failed to get cgroup stats for "/": unexpected number of containers: 0
I0522 15:51:27.716033      25 plugin_manager.go:114] Starting Kubelet Plugin Manager
I0522 15:51:27.736732      25 kubelet_network.go:77] Setting Pod CIDR:  -> 10.0.0.0/24,2000::/10
--- PASS: TestUpdateNodeStatusWithLease (0.03s)
=== RUN   TestUpdateNodeStatusAndVolumesInUseWithNodeLease
=== RUN   TestUpdateNodeStatusAndVolumesInUseWithNodeLease/no_volumes_and_no_update
=== RUN   TestUpdateNodeStatusAndVolumesInUseWithNodeLease/volumes_inuse_on_node_and_volumeManager
=== RUN   TestUpdateNodeStatusAndVolumesInUseWithNodeLease/volumes_inuse_on_node_but_not_in_volumeManager
=== RUN   TestUpdateNodeStatusAndVolumesInUseWithNodeLease/volumes_inuse_in_volumeManager_but_not_on_node
--- PASS: TestUpdateNodeStatusAndVolumesInUseWithNodeLease (0.01s)
    --- PASS: TestUpdateNodeStatusAndVolumesInUseWithNodeLease/no_volumes_and_no_update (0.00s)
    --- PASS: TestUpdateNodeStatusAndVolumesInUseWithNodeLease/volumes_inuse_on_node_and_volumeManager (0.00s)
    --- PASS: TestUpdateNodeStatusAndVolumesInUseWithNodeLease/volumes_inuse_on_node_but_not_in_volumeManager (0.00s)
    --- PASS: TestUpdateNodeStatusAndVolumesInUseWithNodeLease/volumes_inuse_in_volumeManager_but_not_on_node (0.00s)
=== RUN   TestRegisterWithApiServer
I0522 15:51:27.858119      25 kubelet_node_status.go:69] Attempting to register node 127.0.0.1
I0522 15:51:27.858661      25 kubelet_node_status.go:111] Node 127.0.0.1 was previously registered
I0522 15:51:27.858735      25 kubelet_node_status.go:72] Successfully registered node 127.0.0.1
--- PASS: TestRegisterWithApiServer (0.10s)
=== RUN   TestTryRegisterWithApiServer
I0522 15:51:27.865799      25 kubelet_node_status.go:111] Node 127.0.0.1 was previously registered
I0522 15:51:27.867440      25 kubelet_node_status.go:111] Node 127.0.0.1 was previously registered
I0522 15:51:27.867648      25 kubelet_node_status.go:201] Controller attach-detach setting changed to false; updating existing Node
I0522 15:51:27.873031      25 kubelet_node_status.go:111] Node 127.0.0.1 was previously registered
I0522 15:51:27.873265      25 kubelet_node_status.go:204] Controller attach-detach setting changed to true; updating existing Node
E0522 15:51:27.876690      25 kubelet_node_status.go:91] Unable to register node "127.0.0.1" with API server: 
E0522 15:51:27.877906      25 kubelet_node_status.go:97] Unable to register node "127.0.0.1" with API server: error getting existing node: 
I0522 15:51:27.878928      25 kubelet_node_status.go:111] Node 127.0.0.1 was previously registered
I0522 15:51:27.878972      25 kubelet_node_status.go:201] Controller attach-detach setting changed to false; updating existing Node
E0522 15:51:27.879801      25 kubelet_node_status.go:121] Unable to reconcile node "127.0.0.1" with API server: error updating node: failed to patch status "{\"metadata\":{\"annotations\":null}}" for node "127.0.0.1": 
--- PASS: TestTryRegisterWithApiServer (0.02s)
    kubelet_node_status_test.go:1302: actions: 3: [{ActionImpl:{Namespace: Verb:create Resource:/v1, Resource=nodes Subresource:} Name: Object:&Node{ObjectMeta:{      0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[kubernetes.io/arch:amd64 kubernetes.io/hostname:127.0.0.1 kubernetes.io/os:linux] map[] [] []  []},Spec:NodeSpec{PodCIDR:,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[],},Status:NodeStatus{Capacity:ResourceList{},Allocatable:ResourceList{},Phase:,Conditions:[]NodeCondition{},Addresses:[]NodeAddress{},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:0,},},NodeInfo:NodeSystemInfo{MachineID:,SystemUUID:,BootID:,KernelVersion:,OSImage:,ContainerRuntimeVersion:,KubeletVersion:,KubeProxyVersion:,OperatingSystem:,Architecture:,},Images:[]ContainerImage{},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}} {ActionImpl:{Namespace: Verb:get Resource:/v1, Resource=nodes Subresource:} Name:127.0.0.1} {ActionImpl:{Namespace: Verb:patch Resource:/v1, Resource=nodes Subresource:status} Name:127.0.0.1 PatchType:application/strategic-merge-patch+json Patch:[123 34 109 101 116 97 100 97 116 97 34 58 123 34 97 110 110 111 116 97 116 105 111 110 115 34 58 110 117 108 108 125 125]}]
    kubelet_node_status_test.go:1302: actions: 3: [{ActionImpl:{Namespace: Verb:create Resource:/v1, Resource=nodes Subresource:} Name: Object:&Node{ObjectMeta:{      0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[kubernetes.io/arch:amd64 kubernetes.io/hostname:127.0.0.1 kubernetes.io/os:linux] map[volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  []},Spec:NodeSpec{PodCIDR:,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[],},Status:NodeStatus{Capacity:ResourceList{},Allocatable:ResourceList{},Phase:,Conditions:[]NodeCondition{},Addresses:[]NodeAddress{},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:0,},},NodeInfo:NodeSystemInfo{MachineID:,SystemUUID:,BootID:,KernelVersion:,OSImage:,ContainerRuntimeVersion:,KubeletVersion:,KubeProxyVersion:,OperatingSystem:,Architecture:,},Images:[]ContainerImage{},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}} {ActionImpl:{Namespace: Verb:get Resource:/v1, Resource=nodes Subresource:} Name:127.0.0.1} {ActionImpl:{Namespace: Verb:patch Resource:/v1, Resource=nodes Subresource:status} Name:127.0.0.1 PatchType:application/strategic-merge-patch+json Patch:[123 34 109 101 116 97 100 97 116 97 34 58 123 34 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 118 111 108 117 109 101 115 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 99 111 110 116 114 111 108 108 101 114 45 109 97 110 97 103 101 100 45 97 116 116 97 99 104 45 100 101 116 97 99 104 34 58 34 116 114 117 101 34 125 125 125]}]
=== RUN   TestUpdateNewNodeStatusTooLargeReservation
E0522 15:51:27.884884      25 eviction_manager.go:255] eviction manager: failed to get summary stats: failed to get root cgroup stats: failed to get cgroup stats for "/": unexpected number of containers: 0
I0522 15:51:27.885807      25 plugin_manager.go:114] Starting Kubelet Plugin Manager
--- PASS: TestUpdateNewNodeStatusTooLargeReservation (0.01s)
=== RUN   TestUpdateDefaultLabels
--- PASS: TestUpdateDefaultLabels (0.00s)
=== RUN   TestReconcileExtendedResource
I0522 15:51:27.896806      25 kubelet_node_status.go:136] Zero out resource test.com/resource1 capacity in existing node.
I0522 15:51:27.896987      25 kubelet_node_status.go:136] Zero out resource test.com/resource2 capacity in existing node.
--- PASS: TestReconcileExtendedResource (0.00s)
=== RUN   TestValidateNodeIPParam
--- PASS: TestValidateNodeIPParam (0.00s)
=== RUN   TestRegisterWithApiServerWithTaint
I0522 15:51:28.000706      25 kubelet_node_status.go:69] Attempting to register node 127.0.0.1
I0522 15:51:28.000855      25 kubelet_node_status.go:72] Successfully registered node 127.0.0.1
--- PASS: TestRegisterWithApiServerWithTaint (0.10s)
=== RUN   TestNodeStatusHasChanged
=== RUN   TestNodeStatusHasChanged/Node_status_does_not_change_with_nil_status.
=== RUN   TestNodeStatusHasChanged/Node_status_does_not_change_with_default_status.
=== RUN   TestNodeStatusHasChanged/Node_status_changes_with_nil_and_default_status.
=== RUN   TestNodeStatusHasChanged/Node_status_changes_with_nil_and_status.
=== RUN   TestNodeStatusHasChanged/Node_status_does_not_change_with_empty_conditions.
=== RUN   TestNodeStatusHasChanged/Node_status_does_not_change
=== RUN   TestNodeStatusHasChanged/Node_status_does_not_change_even_if_heartbeat_time_changes.
=== RUN   TestNodeStatusHasChanged/Node_status_does_not_change_even_if_the_orders_of_conditions_are_different.
=== RUN   TestNodeStatusHasChanged/Node_status_changes_if_condition_status_differs.
=== RUN   TestNodeStatusHasChanged/Node_status_changes_if_transition_time_changes.
=== RUN   TestNodeStatusHasChanged/Node_status_changes_with_different_number_of_conditions.
=== RUN   TestNodeStatusHasChanged/Node_status_changes_with_different_phase.
--- PASS: TestNodeStatusHasChanged (0.02s)
    --- PASS: TestNodeStatusHasChanged/Node_status_does_not_change_with_nil_status. (0.00s)
    --- PASS: TestNodeStatusHasChanged/Node_status_does_not_change_with_default_status. (0.00s)
    --- PASS: TestNodeStatusHasChanged/Node_status_changes_with_nil_and_default_status. (0.00s)
    --- PASS: TestNodeStatusHasChanged/Node_status_changes_with_nil_and_status. (0.00s)
    --- PASS: TestNodeStatusHasChanged/Node_status_does_not_change_with_empty_conditions. (0.00s)
    --- PASS: TestNodeStatusHasChanged/Node_status_does_not_change (0.00s)
    --- PASS: TestNodeStatusHasChanged/Node_status_does_not_change_even_if_heartbeat_time_changes. (0.00s)
    --- PASS: TestNodeStatusHasChanged/Node_status_does_not_change_even_if_the_orders_of_conditions_are_different. (0.00s)
    --- PASS: TestNodeStatusHasChanged/Node_status_changes_if_condition_status_differs. (0.00s)
    --- PASS: TestNodeStatusHasChanged/Node_status_changes_if_transition_time_changes. (0.00s)
    --- PASS: TestNodeStatusHasChanged/Node_status_changes_with_different_number_of_conditions. (0.00s)
    --- PASS: TestNodeStatusHasChanged/Node_status_changes_with_different_phase. (0.00s)
=== RUN   TestUpdateNodeAddresses
=== RUN   TestUpdateNodeAddresses/nil_to_populated
=== RUN   TestUpdateNodeAddresses/empty_to_populated
=== RUN   TestUpdateNodeAddresses/populated_to_nil
=== RUN   TestUpdateNodeAddresses/populated_to_empty
=== RUN   TestUpdateNodeAddresses/multiple_addresses_of_same_type,_no_change
=== RUN   TestUpdateNodeAddresses/1_InternalIP_to_2_InternalIP
=== RUN   TestUpdateNodeAddresses/2_InternalIP_to_1_InternalIP
=== RUN   TestUpdateNodeAddresses/2_InternalIP_to_2_different_InternalIP
=== RUN   TestUpdateNodeAddresses/2_InternalIP_to_reversed_order
--- PASS: TestUpdateNodeAddresses (0.04s)
    --- PASS: TestUpdateNodeAddresses/nil_to_populated (0.00s)
    --- PASS: TestUpdateNodeAddresses/empty_to_populated (0.00s)
    --- PASS: TestUpdateNodeAddresses/populated_to_nil (0.00s)
    --- PASS: TestUpdateNodeAddresses/populated_to_empty (0.00s)
    --- PASS: TestUpdateNodeAddresses/multiple_addresses_of_same_type,_no_change (0.00s)
    --- PASS: TestUpdateNodeAddresses/1_InternalIP_to_2_InternalIP (0.00s)
    --- PASS: TestUpdateNodeAddresses/2_InternalIP_to_1_InternalIP (0.00s)
    --- PASS: TestUpdateNodeAddresses/2_InternalIP_to_2_different_InternalIP (0.00s)
    --- PASS: TestUpdateNodeAddresses/2_InternalIP_to_reversed_order (0.00s)
=== RUN   TestMakeMounts
=== RUN   TestMakeMounts/valid_mounts_in_unprivileged_container
=== RUN   TestMakeMounts/valid_mounts_in_privileged_container
=== RUN   TestMakeMounts/invalid_absolute_SubPath
=== RUN   TestMakeMounts/invalid_SubPath_with_backsteps
=== RUN   TestMakeMounts/volume_doesn't_exist
E0522 15:51:28.067008      25 kubelet_pods.go:148] Mount cannot be satisfied for container "", because the volume is missing or the volume mounter is nil: {Name:disk ReadOnly:true MountPath:/mnt/path3 SubPath: MountPropagation:<nil> SubPathExpr:}
=== RUN   TestMakeMounts/volume_mounter_is_nil
E0522 15:51:28.067635      25 kubelet_pods.go:148] Mount cannot be satisfied for container "", because the volume is missing or the volume mounter is nil: {Name:disk ReadOnly:true MountPath:/mnt/path3 SubPath: MountPropagation:<nil> SubPathExpr:}
--- PASS: TestMakeMounts (0.00s)
    --- PASS: TestMakeMounts/valid_mounts_in_unprivileged_container (0.00s)
    --- PASS: TestMakeMounts/valid_mounts_in_privileged_container (0.00s)
    --- PASS: TestMakeMounts/invalid_absolute_SubPath (0.00s)
    --- PASS: TestMakeMounts/invalid_SubPath_with_backsteps (0.00s)
    --- PASS: TestMakeMounts/volume_doesn't_exist (0.00s)
    --- PASS: TestMakeMounts/volume_mounter_is_nil (0.00s)
=== RUN   TestMakeBlockVolumes
=== RUN   TestMakeBlockVolumes/invalid_absolute_Path
=== RUN   TestMakeBlockVolumes/volume_doesn't_exist
E0522 15:51:28.070830      25 kubelet_pods.go:109] Block volume cannot be satisfied for container "", because the volume is missing or the volume mapper is nil: {Name:disk DevicePath:/dev/sdaa}
=== RUN   TestMakeBlockVolumes/volume_BlockVolumeMapper_is_nil
E0522 15:51:28.071451      25 kubelet_pods.go:109] Block volume cannot be satisfied for container "", because the volume is missing or the volume mapper is nil: {Name:disk DevicePath:/dev/sdzz}
=== RUN   TestMakeBlockVolumes/valid_volumeDevices_in_container
--- PASS: TestMakeBlockVolumes (0.00s)
    --- PASS: TestMakeBlockVolumes/invalid_absolute_Path (0.00s)
    --- PASS: TestMakeBlockVolumes/volume_doesn't_exist (0.00s)
    --- PASS: TestMakeBlockVolumes/volume_BlockVolumeMapper_is_nil (0.00s)
    --- PASS: TestMakeBlockVolumes/valid_volumeDevices_in_container (0.00s)
W0522 15:51:28.073799      25 feature_gate.go:235] Setting GA feature gate VolumeSubpath=false. It will be removed in a future release.
=== RUN   TestDisabledSubpath
W0522 15:51:28.074566      25 feature_gate.go:235] Setting GA feature gate VolumeSubpath=true. It will be removed in a future release.
--- PASS: TestDisabledSubpath (0.00s)
=== RUN   TestNodeHostsFileContent
--- PASS: TestNodeHostsFileContent (0.00s)
=== RUN   TestManagedHostsFileContent
--- PASS: TestManagedHostsFileContent (0.00s)
=== RUN   TestRunInContainerNoSuchPod
--- PASS: TestRunInContainerNoSuchPod (0.01s)
=== RUN   TestRunInContainer
--- PASS: TestRunInContainer (0.00s)
=== RUN   TestMakeEnvironmentVariables
--- PASS: TestMakeEnvironmentVariables (0.04s)
=== RUN   TestPodPhaseWithRestartAlways
--- PASS: TestPodPhaseWithRestartAlways (0.00s)
=== RUN   TestPodPhaseWithRestartNever
--- PASS: TestPodPhaseWithRestartNever (0.00s)
=== RUN   TestPodPhaseWithRestartOnFailure
--- PASS: TestPodPhaseWithRestartOnFailure (0.00s)
=== RUN   TestGetExec
--- PASS: TestGetExec (0.00s)
=== RUN   TestGetPortForward
--- PASS: TestGetPortForward (0.00s)
=== RUN   TestHasHostMountPVC
W0522 15:51:28.148238      25 kubelet_pods.go:1817] unable to retrieve pvc : - foo
W0522 15:51:28.149046      25 kubelet_pods.go:1823] unable to retrieve pv foo - foo
--- PASS: TestHasHostMountPVC (0.01s)
=== RUN   TestHasNonNamespacedCapability
--- PASS: TestHasNonNamespacedCapability (0.00s)
=== RUN   TestHasHostVolume
E0522 15:51:28.154654      25 kubelet_pods.go:393] hostname for pod:"test-pod" was longer than 63. Truncated hostname to :"1234567.1234567.1234567.1234567.1234567.1234567.1234567.1234567"
E0522 15:51:28.154871      25 kubelet_pods.go:393] hostname for pod:"test-pod" was longer than 63. Truncated hostname to :"1234567.1234567.1234567.1234567.1234567.1234567.1234567.123456."
E0522 15:51:28.155139      25 kubelet_pods.go:393] hostname for pod:"test-pod" was longer than 63. Truncated hostname to :"1234567.1234567.1234567.1234567.1234567.1234567.1234567.123456-"
--- PASS: TestHasHostVolume (0.00s)
=== RUN   TestHasHostNamespace
--- PASS: TestHasHostNamespace (0.00s)
=== RUN   TestTruncatePodHostname
--- PASS: TestTruncatePodHostname (0.00s)
    kubelet_pods_test.go:2382: TestCase: "valid hostname"
    kubelet_pods_test.go:2382: TestCase: "too long hostname"
    kubelet_pods_test.go:2382: TestCase: "hostname end with ."
    kubelet_pods_test.go:2382: TestCase: "hostname end with -"
=== RUN   TestPodResourceLimitsDefaulting
I0522 15:51:28.162079      25 kubelet_resources.go:45] allocatable: map[cpu:{{6 0} {<nil>} 6 DecimalSI} memory:{{4294967296 0} {<nil>} 4Gi BinarySI}]
I0522 15:51:28.162666      25 kubelet_resources.go:45] allocatable: map[cpu:{{6 0} {<nil>} 6 DecimalSI} memory:{{4294967296 0} {<nil>} 4Gi BinarySI}]
I0522 15:51:28.163158      25 kubelet_resources.go:45] allocatable: map[cpu:{{6 0} {<nil>} 6 DecimalSI} memory:{{4294967296 0} {<nil>} 4Gi BinarySI}]
I0522 15:51:28.163608      25 kubelet_resources.go:45] allocatable: map[cpu:{{6 0} {<nil>} 6 DecimalSI} memory:{{4294967296 0} {<nil>} 4Gi BinarySI}]
--- PASS: TestPodResourceLimitsDefaulting (0.01s)
=== RUN   TestSyncLoopAbort
E0522 15:51:28.166193      25 kubelet.go:1824] Update channel is closed. Exiting the sync loop.
I0522 15:51:28.166250      25 kubelet.go:1744] Starting kubelet main sync loop.
E0522 15:51:28.166313      25 kubelet.go:1824] Update channel is closed. Exiting the sync loop.
--- PASS: TestSyncLoopAbort (0.00s)
=== RUN   TestSyncPodsStartPod
--- PASS: TestSyncPodsStartPod (0.00s)
=== RUN   TestSyncPodsDeletesWhenSourcesAreReady
--- PASS: TestSyncPodsDeletesWhenSourcesAreReady (0.00s)
=== RUN   TestHandlePortConflicts
--- PASS: TestHandlePortConflicts (0.00s)
=== RUN   TestHandleHostNameConflicts
--- PASS: TestHandleHostNameConflicts (0.00s)
=== RUN   TestHandleNodeSelector
--- PASS: TestHandleNodeSelector (0.00s)
=== RUN   TestHandleMemExceeded
--- PASS: TestHandleMemExceeded (0.00s)
=== RUN   TestHandlePluginResources
W0522 15:51:28.188745      25 predicate.go:77] Failed to admit pod failedpod_foo(4) - Update plugin resources failed due to Allocation failed, which is unexpected.
--- PASS: TestHandlePluginResources (0.01s)
=== RUN   TestPurgingObsoleteStatusMapEntries
--- PASS: TestPurgingObsoleteStatusMapEntries (0.00s)
=== RUN   TestValidateContainerLogStatus
E0522 15:51:28.201758      25 runtime.go:202] invalid container ID: ""
E0522 15:51:28.201976      25 runtime.go:202] invalid container ID: ""
--- PASS: TestValidateContainerLogStatus (0.00s)
=== RUN   TestCreateMirrorPod
--- PASS: TestCreateMirrorPod (0.00s)
=== RUN   TestDeleteOutdatedMirrorPod
I0522 15:51:28.207988      25 kubelet.go:1569] Trying to delete pod foo_ns 11111111
W0522 15:51:28.208098      25 kubelet.go:1573] Deleted mirror pod "foo_ns(11111111)" because it is outdated
--- PASS: TestDeleteOutdatedMirrorPod (0.00s)
=== RUN   TestDeleteOrphanedMirrorPods
--- FAIL: TestDeleteOrphanedMirrorPods (0.00s)
    kubelet_test.go:1042: expected 0 creation and one deletion of "pod1_ns", got 0, 0
    kubelet_test.go:1042: expected 0 creation and one deletion of "pod2_ns", got 0, 0
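The failing case above can be rerun in isolation before digging into the log. A minimal sketch, assuming a checkout of kubernetes/kubernetes at the revision listed in the header; both invocations are illustrative standard Bazel / `go test` usage, not commands taken from this job's configuration:

```shell
# Re-run only the failing test through Bazel, streaming its full output.
bazel test //pkg/kubelet:go_default_test \
  --test_filter=TestDeleteOrphanedMirrorPods \
  --test_output=all

# Equivalent without Bazel, run from the repository root.
go test ./pkg/kubelet/ -run 'TestDeleteOrphanedMirrorPods$' -v
```

The anchored regex in `-run` keeps `go test` from also matching longer test names that share the prefix.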
=== RUN   TestGetContainerInfoForMirrorPods
--- PASS: TestGetContainerInfoForMirrorPods (0.00s)
=== RUN   TestNetworkErrorsWithoutHostNetwork
--- PASS: TestNetworkErrorsWithoutHostNetwork (0.00s)
=== RUN   TestFilterOutTerminatedPods
--- PASS: TestFilterOutTerminatedPods (0.00s)
=== RUN   TestSyncPodsSetStatusToFailedForPodsThatRunTooLong
--- PASS: TestSyncPodsSetStatusToFailedForPodsThatRunTooLong (0.00s)
=== RUN   TestSyncPodsDoesNotSetPodsThatDidNotRunTooLongToFailed
--- PASS: TestSyncPodsDoesNotSetPodsThatDidNotRunTooLongToFailed (0.00s)
=== RUN   TestDeletePodDirsForDeletedPods
--- PASS: TestDeletePodDirsForDeletedPods (0.00s)
=== RUN   TestDoesNotDeletePodDirsForTerminatedPods
--- PASS: TestDoesNotDeletePodDirsForTerminatedPods (0.00s)
=== RUN   TestDoesNotDeletePodDirsIfContainerIsRunning
--- PASS: TestDoesNotDeletePodDirsIfContainerIsRunning (0.00s)
=== RUN   TestGetPodsToSync
--- PASS: TestGetPodsToSync (0.01s)
=== RUN   TestGenerateAPIPodStatusWithSortedContainers
--- PASS: TestGenerateAPIPodStatusWithSortedContainers (0.00s)
=== RUN   TestGenerateAPIPodStatusWithReasonCache
--- PASS: TestGenerateAPIPodStatusWithReasonCache (0.00s)
=== RUN   TestGenerateAPIPodStatusWithDifferentRestartPolicies
--- PASS: TestGenerateAPIPodStatusWithDifferentRestartPolicies (0.00s)
=== RUN   TestHandlePodAdditionsInvokesPodAdmitHandlers
--- PASS: TestHandlePodAdditionsInvokesPodAdmitHandlers (0.00s)
=== RUN   TestGetPodsToSyncInvokesPodSyncLoopHandlers
--- PASS: TestGetPodsToSyncInvokesPodSyncLoopHandlers (0.00s)
=== RUN   TestGenerateAPIPodStatusInvokesPodSyncHandlers
--- PASS: TestGenerateAPIPodStatusInvokesPodSyncHandlers (0.00s)
=== RUN   TestSyncPodKillPod
--- PASS: TestSyncPodKillPod (0.00s)
=== RUN   TestCleanupOrphanedPodDirs
=== RUN   TestCleanupOrphanedPodDirs/pod-doesnot-exist-with-subpath-top
W0522 15:51:28.256440      25 kubelet_getters.go:297] Path "/tmp/kubelet_test.645877132/pods/pod1uid/volumes" does not exist
W0522 15:51:28.256579      25 kubelet_getters.go:297] Path "/tmp/kubelet_test.645877132/pods/pod1uid/volumes" does not exist
E0522 15:51:28.256782      25 kubelet_volumes.go:154] orphaned pod "pod1uid" found, but volume subpaths are still present on disk : There were a total of 1 errors similar to this. Turn up verbosity to see them.
=== RUN   TestCleanupOrphanedPodDirs/nothing-to-do
=== RUN   TestCleanupOrphanedPodDirs/pods-dir-not-found
=== RUN   TestCleanupOrphanedPodDirs/pod-doesnot-exist-novolume
W0522 15:51:28.262207      25 kubelet_getters.go:297] Path "/tmp/kubelet_test.147539813/pods/pod1uid/volumes" does not exist
W0522 15:51:28.262307      25 kubelet_getters.go:297] Path "/tmp/kubelet_test.147539813/pods/pod1uid/volumes" does not exist
=== RUN   TestCleanupOrphanedPodDirs/pod-exists-with-volume
=== RUN   TestCleanupOrphanedPodDirs/pod-doesnot-exist-with-volume
E0522 15:51:28.267011      25 kubelet_volumes.go:154] orphaned pod "pod1uid" found, but volume paths are still present on disk : There were a total of 1 errors similar to this. Turn up verbosity to see them.
=== RUN   TestCleanupOrphanedPodDirs/pod-doesnot-exist-with-subpath
W0522 15:51:28.269348      25 kubelet_getters.go:297] Path "/tmp/kubelet_test.203009202/pods/pod1uid/volumes" does not exist
W0522 15:51:28.269524      25 kubelet_getters.go:297] Path "/tmp/kubelet_test.203009202/pods/pod1uid/volumes" does not exist
E0522 15:51:28.269776      25 kubelet_volumes.go:154] orphaned pod "pod1uid" found, but volume subpaths are still present on disk : There were a total of 1 errors similar to this. Turn up verbosity to see them.
--- PASS: TestCleanupOrphanedPodDirs (0.02s)
    --- PASS: TestCleanupOrphanedPodDirs/pod-doesnot-exist-with-subpath-top (0.00s)
    --- PASS: TestCleanupOrphanedPodDirs/nothing-to-do (0.00s)
    --- PASS: TestCleanupOrphanedPodDirs/pods-dir-not-found (0.00s)
    --- PASS: TestCleanupOrphanedPodDirs/pod-doesnot-exist-novolume (0.00s)
    --- PASS: TestCleanupOrphanedPodDirs/pod-exists-with-volume (0.00s)
    --- PASS: TestCleanupOrphanedPodDirs/pod-doesnot-exist-with-volume (0.00s)
    --- PASS: TestCleanupOrphanedPodDirs/pod-doesnot-exist-with-subpath (0.00s)
=== RUN   TestListVolumesForPod
I0522 15:51:28.276986      25 volume_manager.go:265] Starting Kubelet Volume Manager
I0522 15:51:28.277717      25 desired_state_of_world_populator.go:139] Desired state populator starts to run
E0522 15:51:28.282597      25 reflector.go:127] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.CSIDriver: unhandled watch: testing.WatchActionImpl{ActionImpl:testing.ActionImpl{Namespace:"", Verb:"watch", Resource:schema.GroupVersionResource{Group:"storage.k8s.io", Version:"v1", Resource:"csidrivers"}, Subresource:""}, WatchRestrictions:testing.WatchRestrictions{Labels:labels.internalSelector(nil), Fields:fields.andTerm{}, ResourceVersion:""}}
I0522 15:51:28.478623      25 reconciler.go:244] operationExecutor.AttachVolume started for volume "vol1" (UniqueName: "fake/fake-device1") pod "foo" (UID: "12345678") 
I0522 15:51:28.478883      25 operation_generator.go:361] AttachVolume.Attach succeeded for volume "vol1" (UniqueName: "fake/fake-device1") from node "127.0.0.1" 
I0522 15:51:28.479836      25 reconciler.go:244] operationExecutor.AttachVolume started for volume "vol2" (UniqueName: "fake/fake-device2") pod "foo" (UID: "12345678") 
I0522 15:51:28.480090      25 reconciler.go:157] Reconciler: start to sync state
I0522 15:51:28.480489      25 operation_generator.go:361] AttachVolume.Attach succeeded for volume "vol2" (UniqueName: "fake/fake-device2") from node "127.0.0.1" 
I0522 15:51:28.581245      25 operation_generator.go:558] MountVolume.WaitForAttach entering for volume "vol1" (UniqueName: "fake/fake-device1") pod "foo" (UID: "12345678") DevicePath "/dev/vdb-test"
I0522 15:51:28.581434      25 operation_generator.go:558] MountVolume.WaitForAttach entering for volume "vol2" (UniqueName: "fake/fake-device2") pod "foo" (UID: "12345678") DevicePath "/dev/vdb-test"
I0522 15:51:28.581961      25 operation_generator.go:567] MountVolume.WaitForAttach succeeded for volume "vol1" (UniqueName: "fake/fake-device1") pod "foo" (UID: "12345678") DevicePath "/dev/sdb"
I0522 15:51:28.582255      25 operation_generator.go:596] MountVolume.MountDevice succeeded for volume "vol1" (UniqueName: "fake/fake-device1") pod "foo" (UID: "12345678") device mount path ""
I0522 15:51:28.582033      25 operation_generator.go:567] MountVolume.WaitForAttach succeeded for volume "vol2" (UniqueName: "fake/fake-device2") pod "foo" (UID: "12345678") DevicePath "/dev/sdb"
I0522 15:51:28.582799      25 operation_generator.go:596] MountVolume.MountDevice succeeded for volume "vol2" (UniqueName: "fake/fake-device2") pod "foo" (UID: "12345678") device mount path ""
I0522 15:51:28.878419      25 volume_manager.go:276] Shutting down Kubelet Volume Manager
--- PASS: TestListVolumesForPod (0.61s)
=== RUN   TestPodVolumesExist
I0522 15:51:28.881471      25 volume_manager.go:265] Starting Kubelet Volume Manager
I0522 15:51:28.881559      25 desired_state_of_world_populator.go:139] Desired state populator starts to run
E0522 15:51:28.884788      25 reflector.go:127] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.CSIDriver: unhandled watch: testing.WatchActionImpl{ActionImpl:testing.ActionImpl{Namespace:"", Verb:"watch", Resource:schema.GroupVersionResource{Group:"storage.k8s.io", Version:"v1", Resource:"csidrivers"}, Subresource:""}, WatchRestrictions:testing.WatchRestrictions{Labels:labels.internalSelector(nil), Fields:fields.andTerm{}, ResourceVersion:""}}
I0522 15:51:29.083047      25 reconciler.go:244] operationExecutor.AttachVolume started for volume "vol1" (UniqueName: "fake/fake-device1") pod "pod1" (UID: "pod1uid") 
I0522 15:51:29.083821      25 reconciler.go:244] operationExecutor.AttachVolume started for volume "vol2" (UniqueName: "fake/fake-device2") pod "pod2" (UID: "pod2uid") 
I0522 15:51:29.083293      25 operation_generator.go:361] AttachVolume.Attach succeeded for volume "vol1" (UniqueName: "fake/fake-device1") from node "127.0.0.1" 
I0522 15:51:29.084585      25 operation_generator.go:361] AttachVolume.Attach succeeded for volume "vol2" (UniqueName: "fake/fake-device2") from node "127.0.0.1" 
I0522 15:51:29.084597      25 reconciler.go:244] operationExecutor.AttachVolume started for volume "vol3" (UniqueName: "fake/fake-device3") pod "pod3" (UID: "pod3uid") 
I0522 15:51:29.085179      25 operation_generator.go:361] AttachVolume.Attach succeeded for volume "vol3" (UniqueName: "fake/fake-device3") from node "127.0.0.1" 
I0522 15:51:29.085247      25 reconciler.go:157] Reconciler: start to sync state
I0522 15:51:29.186636      25 operation_generator.go:558] MountVolume.WaitForAttach entering for volume "vol1" (UniqueName: "fake/fake-device1") pod "pod1" (UID: "pod1uid") DevicePath "/dev/vdb-test"
I0522 15:51:29.186797      25 operation_generator.go:567] MountVolume.WaitForAttach succeeded for volume "vol1" (UniqueName: "fake/fake-device1") pod "pod1" (UID: "pod1uid") DevicePath "/dev/sdb"
I0522 15:51:29.186899      25 operation_generator.go:596] MountVolume.MountDevice succeeded for volume "vol1" (UniqueName: "fake/fake-device1") pod "pod1" (UID: "pod1uid") device mount path ""
I0522 15:51:29.187916      25 operation_generator.go:558] MountVolume.WaitForAttach entering for volume "vol3" (UniqueName: "fake/fake-device3") pod "pod3" (UID: "pod3uid") DevicePath "/dev/vdb-test"
I0522 15:51:29.188040      25 operation_generator.go:567] MountVolume.WaitForAttach succeeded for volume "vol3" (UniqueName: "fake/fake-device3") pod "pod3" (UID: "pod3uid") DevicePath "/dev/sdb"
I0522 15:51:29.188276      25 operation_generator.go:596] MountVolume.MountDevice succeeded for volume "vol3" (UniqueName: "fake/fake-device3") pod "pod3" (UID: "pod3uid") device mount path ""
I0522 15:51:29.188732      25 operation_generator.go:558] MountVolume.WaitForAttach entering for volume "vol2" (UniqueName: "fake/fake-device2") pod "pod2" (UID: "pod2uid") DevicePath "/dev/vdb-test"
I0522 15:51:29.189030      25 operation_generator.go:567] MountVolume.WaitForAttach succeeded for volume "vol2" (UniqueName: "fake/fake-device2") pod "pod2" (UID: "pod2uid") DevicePath "/dev/sdb"
I0522 15:51:29.189281      25 operation_generator.go:596] MountVolume.MountDevice succeeded for volume "vol2" (UniqueName: "fake/fake-device2") pod "pod2" (UID: "pod2uid") device mount path ""
I0522 15:51:29.482110      25 volume_manager.go:276] Shutting down Kubelet Volume Manager
--- PASS: TestPodVolumesExist (0.60s)
=== RUN   TestVolumeAttachAndMountControllerDisabled
I0522 15:51:29.486067      25 volume_manager.go:265] Starting Kubelet Volume Manager
I0522 15:51:29.486393      25 desired_state_of_world_populator.go:139] Desired state populator starts to run
E0522 15:51:29.488870      25 reflector.go:127] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.CSIDriver: unhandled watch: testing.WatchActionImpl{ActionImpl:testing.ActionImpl{Namespace:"", Verb:"watch", Resource:schema.GroupVersionResource{Group:"storage.k8s.io", Version:"v1", Resource:"csidrivers"}, Subresource:""}, WatchRestrictions:testing.WatchRestrictions{Labels:labels.internalSelector(nil), Fields:fields.andTerm{}, ResourceVersion:""}}
I0522 15:51:29.687771      25 reconciler.go:244] operationExecutor.AttachVolume started for volume "vol1" (UniqueName: "fake/fake-device") pod "foo" (UID: "12345678") 
I0522 15:51:29.688235      25 reconciler.go:157] Reconciler: start to sync state
I0522 15:51:29.688235      25 operation_generator.go:361] AttachVolume.Attach succeeded for volume "vol1" (UniqueName: "fake/fake-device") from node "127.0.0.1" 
I0522 15:51:29.789808      25 operation_generator.go:558] MountVolume.WaitForAttach entering for volume "vol1" (UniqueName: "fake/fake-device") pod "foo" (UID: "12345678") DevicePath "/dev/vdb-test"
I0522 15:51:29.790029      25 operation_generator.go:567] MountVolume.WaitForAttach succeeded for volume "vol1" (UniqueName: "fake/fake-device") pod "foo" (UID: "12345678") DevicePath "/dev/sdb"
I0522 15:51:29.790348      25 operation_generator.go:596] MountVolume.MountDevice succeeded for volume "vol1" (UniqueName: "fake/fake-device") pod "foo" (UID: "12345678") device mount path ""
I0522 15:51:30.086970      25 volume_manager.go:276] Shutting down Kubelet Volume Manager
--- PASS: TestVolumeAttachAndMountControllerDisabled (0.60s)
=== RUN   TestVolumeUnmountAndDetachControllerDisabled
I0522 15:51:30.089970      25 volume_manager.go:265] Starting Kubelet Volume Manager
I0522 15:51:30.090057      25 desired_state_of_world_populator.go:139] Desired state populator starts to run
E0522 15:51:30.093679      25 reflector.go:127] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.CSIDriver: unhandled watch: testing.WatchActionImpl{ActionImpl:testing.ActionImpl{Namespace:"", Verb:"watch", Resource:schema.GroupVersionResource{Group:"storage.k8s.io", Version:"v1", Resource:"csidrivers"}, Subresource:""}, WatchRestrictions:testing.WatchRestrictions{Labels:labels.internalSelector(nil), Fields:fields.andTerm{}, ResourceVersion:""}}
I0522 15:51:30.291720      25 reconciler.go:244] operationExecutor.AttachVolume started for volume "vol1" (UniqueName: "fake/fake-device") pod "foo" (UID: "12345678") 
I0522 15:51:30.291801      25 operation_generator.go:361] AttachVolume.Attach succeeded for volume "vol1" (UniqueName: "fake/fake-device") from node "127.0.0.1" 
I0522 15:51:30.291928      25 reconciler.go:157] Reconciler: start to sync state
I0522 15:51:30.393197      25 operation_generator.go:558] MountVolume.WaitForAttach entering for volume "vol1" (UniqueName: "fake/fake-device") pod "foo" (UID: "12345678") DevicePath "/dev/vdb-test"
I0522 15:51:30.393594      25 operation_generator.go:567] MountVolume.WaitForAttach succeeded for volume "vol1" (UniqueName: "fake/fake-device") pod "foo" (UID: "12345678") DevicePath "/dev/sdb"
I0522 15:51:30.393884      25 operation_generator.go:596] MountVolume.MountDevice succeeded for volume "vol1" (UniqueName: "fake/fake-device") pod "foo" (UID: "12345678") device mount path ""
I0522 15:51:30.693941      25 reconciler.go:196] operationExecutor.UnmountVolume started for volume "vol1" (UniqueName: "fake/fake-device") pod "12345678" (UID: "12345678") 
I0522 15:51:30.694450      25 operation_generator.go:783] UnmountVolume.TearDown succeeded for volume "fake/fake-device" (OuterVolumeSpecName: "vol1") pod "12345678" (UID: "12345678"). InnerVolumeSpecName "vol1". PluginName "fake", VolumeGidValue ""
I0522 15:51:30.794733      25 reconciler.go:312] operationExecutor.UnmountDevice started for volume "vol1" (UniqueName: "fake/fake-device") on node "127.0.0.1" 
I0522 15:51:30.794805      25 operation_generator.go:870] UnmountDevice succeeded for volume "vol1" %!(EXTRA string=UnmountDevice succeeded for volume "vol1" (UniqueName: "fake/fake-device") on node "127.0.0.1" )
I0522 15:51:30.895507      25 reconciler.go:333] operationExecutor.DetachVolume started for volume "vol1" (UniqueName: "fake/fake-device") on node "127.0.0.1" 
I0522 15:51:30.895549      25 operation_generator.go:472] DetachVolume.Detach succeeded for volume "vol1" (UniqueName: "fake/fake-device") on node "127.0.0.1" 
I0522 15:51:30.941860      25 volume_manager.go:276] Shutting down Kubelet Volume Manager
--- PASS: TestVolumeUnmountAndDetachControllerDisabled (0.85s)
=== RUN   TestVolumeAttachAndMountControllerEnabled
I0522 15:51:30.945114      25 volume_manager.go:265] Starting Kubelet Volume Manager
I0522 15:51:30.945696      25 desired_state_of_world_populator.go:139] Desired state populator starts to run
E0522 15:51:30.948082      25 reflector.go:127] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: no reaction implemented for {{ list storage.k8s.io/v1, Resource=csidrivers } storage.k8s.io/v1, Kind=CSIDriver  { }}
I0522 15:51:31.146598      25 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "vol1" (UniqueName: "fake/fake-device") pod "foo" (UID: "12345678") 
I0522 15:51:31.147124      25 reconciler.go:157] Reconciler: start to sync state
I0522 15:51:31.147145      25 operation_generator.go:1333] Controller attach succeeded for volume "vol1" (UniqueName: "fake/fake-device") pod "foo" (UID: "12345678") device path: "fake/path"
I0522 15:51:31.248192      25 operation_generator.go:558] MountVolume.WaitForAttach entering for volume "vol1" (UniqueName: "fake/fake-device") pod "foo" (UID: "12345678") DevicePath "fake/path"
I0522 15:51:31.248315      25 operation_generator.go:567] MountVolume.WaitForAttach succeeded for volume "vol1" (UniqueName: "fake/fake-device") pod "foo" (UID: "12345678") DevicePath "/dev/sdb"
I0522 15:51:31.248389      25 operation_generator.go:596] MountVolume.MountDevice succeeded for volume "vol1" (UniqueName: "fake/fake-device") pod "foo" (UID: "12345678") device mount path ""
I0522 15:51:31.546630      25 volume_manager.go:276] Shutting down Kubelet Volume Manager
--- PASS: TestVolumeAttachAndMountControllerEnabled (0.60s)
=== RUN   TestVolumeUnmountAndDetachControllerEnabled
I0522 15:51:31.549502      25 volume_manager.go:265] Starting Kubelet Volume Manager
I0522 15:51:31.549758      25 desired_state_of_world_populator.go:139] Desired state populator starts to run
E0522 15:51:31.552294      25 reflector.go:127] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: no reaction implemented for {{ list storage.k8s.io/v1, Resource=csidrivers } storage.k8s.io/v1, Kind=CSIDriver  { }}
I0522 15:51:31.751307      25 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "vol1" (UniqueName: "fake/fake-device") pod "foo" (UID: "12345678") 
I0522 15:51:31.751844      25 reconciler.go:157] Reconciler: start to sync state
I0522 15:51:31.752184      25 operation_generator.go:1333] Controller attach succeeded for volume "vol1" (UniqueName: "fake/fake-device") pod "foo" (UID: "12345678") device path: "fake/path"
I0522 15:51:31.855344      25 operation_generator.go:558] MountVolume.WaitForAttach entering for volume "vol1" (UniqueName: "fake/fake-device") pod "foo" (UID: "12345678") DevicePath "fake/path"
I0522 15:51:31.855487      25 operation_generator.go:567] MountVolume.WaitForAttach succeeded for volume "vol1" (UniqueName: "fake/fake-device") pod "foo" (UID: "12345678") DevicePath "/dev/sdb"
I0522 15:51:31.855558      25 operation_generator.go:596] MountVolume.MountDevice succeeded for volume "vol1" (UniqueName: "fake/fake-device") pod "foo" (UID: "12345678") device mount path ""
I0522 15:51:32.156112      25 reconciler.go:196] operationExecutor.UnmountVolume started for volume "vol1" (UniqueName: "fake/fake-device") pod "12345678" (UID: "12345678") 
I0522 15:51:32.156297      25 operation_generator.go:783] UnmountVolume.TearDown succeeded for volume "fake/fake-device" (OuterVolumeSpecName: "vol1") pod "12345678" (UID: "12345678"). InnerVolumeSpecName "vol1". PluginName "fake", VolumeGidValue ""
I0522 15:51:32.257116      25 reconciler.go:312] operationExecutor.UnmountDevice started for volume "vol1" (UniqueName: "fake/fake-device") on node "127.0.0.1" 
I0522 15:51:32.257254      25 operation_generator.go:870] UnmountDevice succeeded for volume "vol1" %!(EXTRA string=UnmountDevice succeeded for volume "vol1" (UniqueName: "fake/fake-device") on node "127.0.0.1" )
I0522 15:51:32.357517      25 reconciler.go:319] Volume detached for volume "vol1" (UniqueName: "fake/fake-device") on node "127.0.0.1" DevicePath "/dev/sdb"
I0522 15:51:32.401284      25 volume_manager.go:276] Shutting down Kubelet Volume Manager
--- PASS: TestVolumeUnmountAndDetachControllerEnabled (0.85s)
=== RUN   TestGetContainersToDeleteInPodWithFilter
--- PASS: TestGetContainersToDeleteInPodWithFilter (0.00s)
=== RUN   TestGetContainersToDeleteInPod
W0522 15:51:32.403109      25 pod_container_deletor.go:79] Container "abc" not found in pod's containers
--- PASS: TestGetContainersToDeleteInPod (0.00s)
=== RUN   TestGetContainersToDeleteInPodWithNoMatch
--- PASS: TestGetContainersToDeleteInPodWithNoMatch (0.00s)
=== RUN   TestUpdatePod
--- PASS: TestUpdatePod (0.06s)
=== RUN   TestUpdatePodDoesNotForgetSyncPodKill
--- PASS: TestUpdatePodDoesNotForgetSyncPodKill (0.06s)
=== RUN   TestForgetNonExistingPodWorkers
--- PASS: TestForgetNonExistingPodWorkers (0.05s)
=== RUN   TestFakePodWorkers
--- PASS: TestFakePodWorkers (0.00s)
=== RUN   TestKillPodNowFunc
--- PASS: TestKillPodNowFunc (0.00s)
=== RUN   TestReasonCache
--- PASS: TestReasonCache (0.00s)
=== RUN   TestRunOnce
I0522 15:51:32.577652      25 runonce.go:88] Waiting for 1 pods
I0522 15:51:32.577801      25 runonce.go:123] pod "foo_new(12345678)" containers running
I0522 15:51:32.578125      25 runonce.go:102] started pod "foo_new(12345678)"
I0522 15:51:32.578219      25 runonce.go:108] 1 pods started
--- PASS: TestRunOnce (0.00s)
FAIL

				from junit_bazel.xml




Error lines from build-log.txt

... skipping 21 lines ...
Analyzing: 941 targets (4467 packages loaded, 0 targets configured)
DEBUG: /bazel-scratch/.cache/bazel/_bazel_root/7989b31489f31aee54f32688da2f0120/external/bazel_toolchains/rules/rbe_repo.bzl:491:5: Bazel 2.2.0 is used in rbe_default.
Analyzing: 941 targets (4486 packages loaded, 4611 targets configured)
Analyzing: 941 targets (4486 packages loaded, 13352 targets configured)
Analyzing: 941 targets (4486 packages loaded, 28513 targets configured)
Analyzing: 941 targets (4505 packages loaded, 34468 targets configured)
WARNING: /bazel-scratch/.cache/bazel/_bazel_root/7989b31489f31aee54f32688da2f0120/external/remote_java_tools_linux/BUILD:671:1: in hdrs attribute of cc_library rule @remote_java_tools_linux//:combiners: Artifact 'external/remote_java_tools_linux/java_tools/src/tools/singlejar/zip_headers.h' is duplicated (through '@remote_java_tools_linux//:transient_bytes' and '@remote_java_tools_linux//:zip_headers'). Since this rule was created by the macro 'cc_library', the error might have been caused by the macro implementation
Analyzing: 941 targets (4507 packages loaded, 35425 targets configured)
Analyzing: 941 targets (4507 packages loaded, 35425 targets configured)
Analyzing: 941 targets (4507 packages loaded, 35425 targets configured)
Analyzing: 941 targets (4507 packages loaded, 35425 targets configured)
Analyzing: 941 targets (4507 packages loaded, 35425 targets configured)
DEBUG: /bazel-scratch/.cache/bazel/_bazel_root/7989b31489f31aee54f32688da2f0120/external/bazel_gazelle/internal/go_repository.bzl:184:13: org_golang_x_tools: gazelle: /bazel-scratch/.cache/bazel/_bazel_root/7989b31489f31aee54f32688da2f0120/external/org_golang_x_tools/cmd/fiximports/testdata/src/old.com/bad/bad.go: error reading go file: /bazel-scratch/.cache/bazel/_bazel_root/7989b31489f31aee54f32688da2f0120/external/org_golang_x_tools/cmd/fiximports/testdata/src/old.com/bad/bad.go:2:43: expected 'package', found 'EOF'
gazelle: found packages server (issue29198.go) and aliases (aliases.go) in /bazel-scratch/.cache/bazel/_bazel_root/7989b31489f31aee54f32688da2f0120/external/org_golang_x_tools/go/internal/gccgoimporter/testdata
gazelle: found packages a (a.go) and b (b.go) in /bazel-scratch/.cache/bazel/_bazel_root/7989b31489f31aee54f32688da2f0120/external/org_golang_x_tools/go/internal/gcimporter/testdata
gazelle: /bazel-scratch/.cache/bazel/_bazel_root/7989b31489f31aee54f32688da2f0120/external/org_golang_x_tools/go/loader/testdata/badpkgdecl.go: error reading go file: /bazel-scratch/.cache/bazel/_bazel_root/7989b31489f31aee54f32688da2f0120/external/org_golang_x_tools/go/loader/testdata/badpkgdecl.go:1:34: expected 'package', found 'EOF'
gazelle: /bazel-scratch/.cache/bazel/_bazel_root/7989b31489f31aee54f32688da2f0120/external/org_golang_x_tools/go/packages/packagestest/testdata/groups/two/modules/example.com/extra/geez/help.go: error reading go file: /bazel-scratch/.cache/bazel/_bazel_root/7989b31489f31aee54f32688da2f0120/external/org_golang_x_tools/go/packages/packagestest/testdata/groups/two/modules/example.com/extra/geez/help.go:1:16: expected ';', found '.'
gazelle: /bazel-scratch/.cache/bazel/_bazel_root/7989b31489f31aee54f32688da2f0120/external/org_golang_x_tools/go/packages/packagestest/testdata/groups/two/modules/example.com/extra/v2/geez/help.go: error reading go file: /bazel-scratch/.cache/bazel/_bazel_root/7989b31489f31aee54f32688da2f0120/external/org_golang_x_tools/go/packages/packagestest/testdata/groups/two/modules/example.com/extra/v2/geez/help.go:1:16: expected ';', found '.'
gazelle: /bazel-scratch/.cache/bazel/_bazel_root/7989b31489f31aee54f32688da2f0120/external/org_golang_x_tools/go/packages/packagestest/testdata/groups/two/modules/example.com/extra/v2/me.go: error reading go file: /bazel-scratch/.cache/bazel/_bazel_root/7989b31489f31aee54f32688da2f0120/external/org_golang_x_tools/go/packages/packagestest/testdata/groups/two/modules/example.com/extra/v2/me.go:1:16: expected ';', found '.'
gazelle: /bazel-scratch/.cache/bazel/_bazel_root/7989b31489f31aee54f32688da2f0120/external/org_golang_x_tools/go/packages/packagestest/testdata/groups/two/modules/example.com/extra/yo.go: error reading go file: /bazel-scratch/.cache/bazel/_bazel_root/7989b31489f31aee54f32688da2f0120/external/org_golang_x_tools/go/packages/packagestest/testdata/groups/two/modules/example.com/extra/yo.go:1:16: expected ';', found '.'
gazelle: /bazel-scratch/.cache/bazel/_bazel_root/7989b31489f31aee54f32688da2f0120/external/org_golang_x_tools/go/packages/packagestest/testdata/groups/two/modules/example.com/tempmod/main.go: error reading go file: /bazel-scratch/.cache/bazel/_bazel_root/7989b31489f31aee54f32688da2f0120/external/org_golang_x_tools/go/packages/packagestest/testdata/groups/two/modules/example.com/tempmod/main.go:1:16: expected ';', found '.'
gazelle: /bazel-scratch/.cache/bazel/_bazel_root/7989b31489f31aee54f32688da2f0120/external/org_golang_x_tools/go/packages/packagestest/testdata/groups/two/modules/example.com/what@v1.0.0/main.go: error reading go file: /bazel-scratch/.cache/bazel/_bazel_root/7989b31489f31aee54f32688da2f0120/external/org_golang_x_tools/go/packages/packagestest/testdata/groups/two/modules/example.com/what@v1.0.0/main.go:1:16: expected ';', found '.'
gazelle: /bazel-scratch/.cache/bazel/_bazel_root/7989b31489f31aee54f32688da2f0120/external/org_golang_x_tools/go/packages/packagestest/testdata/groups/two/modules/example.com/what@v1.1.0/main.go: error reading go file: /bazel-scratch/.cache/bazel/_bazel_root/7989b31489f31aee54f32688da2f0120/external/org_golang_x_tools/go/packages/packagestest/testdata/groups/two/modules/example.com/what@v1.1.0/main.go:1:16: expected ';', found '.'
gazelle: finding module path for import domain.name/importdecl: exit status 1: can't load package: package domain.name/importdecl: cannot find module providing package domain.name/importdecl
gazelle: finding module path for import old.com/one: exit status 1: can't load package: package old.com/one: cannot find module providing package old.com/one
gazelle: finding module path for import titanic.biz/bar: exit status 1: can't load package: package titanic.biz/bar: cannot find module providing package titanic.biz/bar
gazelle: finding module path for import titanic.biz/foo: exit status 1: can't load package: package titanic.biz/foo: cannot find module providing package titanic.biz/foo
gazelle: finding module path for import fruit.io/pear: exit status 1: can't load package: package fruit.io/pear: cannot find module providing package fruit.io/pear
gazelle: finding module path for import fruit.io/banana: exit status 1: can't load package: package fruit.io/banana: cannot find module providing package fruit.io/banana
... skipping 79 lines ...
[12,426 / 14,467] 445 / 941 tests; GoLink staging/src/k8s.io/apiserver/pkg/endpoints/handlers/fieldmanager/linux_amd64_race_stripped/go_default_test; 63s remote ... (279 actions, 278 running)
[12,995 / 14,535] 518 / 941 tests; GoLink staging/src/k8s.io/apiserver/pkg/endpoints/handlers/fieldmanager/linux_amd64_race_stripped/go_default_test; 100s remote ... (320 actions running)
[13,516 / 14,618] 574 / 941 tests; GoLink staging/src/k8s.io/apiserver/pkg/endpoints/handlers/fieldmanager/linux_amd64_race_stripped/go_default_test; 141s remote ... (289 actions, 288 running)
[13,956 / 14,719] 678 / 941 tests; GoLink staging/src/k8s.io/apiserver/pkg/endpoints/handlers/fieldmanager/linux_amd64_race_stripped/go_default_test; 189s remote ... (214 actions running)
[14,695 / 14,861] 862 / 941 tests; GoLink pkg/kubelet/cm/cpumanager/linux_amd64_race_stripped/go_default_test; 36s remote ... (94 actions running)
[14,928 / 14,935] 937 / 941 tests; GoLink cmd/kubelet/app/linux_amd64_race_stripped/go_default_test; 28s remote ... (4 actions running)
FAIL: //pkg/kubelet:go_default_test (see /bazel-scratch/.cache/bazel/_bazel_root/7989b31489f31aee54f32688da2f0120/execroot/io_k8s_kubernetes/bazel-out/k8-fastbuild/testlogs/pkg/kubelet/go_default_test/test_attempts/attempt_1.log)
FAIL: //pkg/kubelet:go_default_test (see /bazel-scratch/.cache/bazel/_bazel_root/7989b31489f31aee54f32688da2f0120/execroot/io_k8s_kubernetes/bazel-out/k8-fastbuild/testlogs/pkg/kubelet/go_default_test/test_attempts/attempt_2.log)
FAIL: //pkg/kubelet:go_default_test (see /bazel-scratch/.cache/bazel/_bazel_root/7989b31489f31aee54f32688da2f0120/execroot/io_k8s_kubernetes/bazel-out/k8-fastbuild/testlogs/pkg/kubelet/go_default_test/test.log)

FAILED: //pkg/kubelet:go_default_test (Summary)
      /bazel-scratch/.cache/bazel/_bazel_root/7989b31489f31aee54f32688da2f0120/execroot/io_k8s_kubernetes/bazel-out/k8-fastbuild/testlogs/pkg/kubelet/go_default_test/test.log
      /bazel-scratch/.cache/bazel/_bazel_root/7989b31489f31aee54f32688da2f0120/execroot/io_k8s_kubernetes/bazel-out/k8-fastbuild/testlogs/pkg/kubelet/go_default_test/test_attempts/attempt_1.log
      /bazel-scratch/.cache/bazel/_bazel_root/7989b31489f31aee54f32688da2f0120/execroot/io_k8s_kubernetes/bazel-out/k8-fastbuild/testlogs/pkg/kubelet/go_default_test/test_attempts/attempt_2.log
INFO: From Testing //pkg/kubelet:go_default_test:
==================== Test output for //pkg/kubelet:go_default_test:
=== RUN   TestActiveDeadlineHandler
--- PASS: TestActiveDeadlineHandler (0.00s)
=== RUN   TestKubeletDirs
--- PASS: TestKubeletDirs (0.00s)
=== RUN   TestGetIPTablesMark
--- PASS: TestGetIPTablesMark (0.00s)
=== RUN   TestUpdateNewNodeStatus
=== RUN   TestUpdateNewNodeStatus/5_image_limit
E0522 15:50:51.188675      25 eviction_manager.go:255] eviction manager: failed to get summary stats: failed to get root cgroup stats: failed to get cgroup stats for "/": unexpected number of containers: 0
I0522 15:50:51.189808      25 plugin_manager.go:114] Starting Kubelet Plugin Manager
=== RUN   TestUpdateNewNodeStatus/no_image_limit
E0522 15:50:51.216811      25 eviction_manager.go:255] eviction manager: failed to get summary stats: failed to get root cgroup stats: failed to get cgroup stats for "/": unexpected number of containers: 0
I0522 15:50:51.217496      25 plugin_manager.go:114] Starting Kubelet Plugin Manager
--- PASS: TestUpdateNewNodeStatus (0.04s)
    --- PASS: TestUpdateNewNodeStatus/5_image_limit (0.02s)
    --- PASS: TestUpdateNewNodeStatus/no_image_limit (0.02s)
=== RUN   TestUpdateExistingNodeStatus
E0522 15:50:51.237830      25 eviction_manager.go:255] eviction manager: failed to get summary stats: failed to get root cgroup stats: failed to get cgroup stats for "/": unexpected number of containers: 0
I0522 15:50:51.240469      25 plugin_manager.go:114] Starting Kubelet Plugin Manager
--- PASS: TestUpdateExistingNodeStatus (0.01s)
=== RUN   TestUpdateExistingNodeStatusTimeout
E0522 15:50:52.251949      25 kubelet_node_status.go:397] Error updating node status, will retry: error getting node "127.0.0.1": Get http://127.0.0.1:40941/api/v1/nodes/127.0.0.1?resourceVersion=0&timeout=1s: context deadline exceeded (Client.Timeout exceeded while awaiting headers)
E0522 15:50:53.253358      25 kubelet_node_status.go:397] Error updating node status, will retry: error getting node "127.0.0.1": Get http://127.0.0.1:40941/api/v1/nodes/127.0.0.1?timeout=1s: net/http: request canceled (Client.Timeout exceeded while awaiting headers)
E0522 15:50:54.254598      25 kubelet_node_status.go:397] Error updating node status, will retry: error getting node "127.0.0.1": Get http://127.0.0.1:40941/api/v1/nodes/127.0.0.1?timeout=1s: net/http: request canceled (Client.Timeout exceeded while awaiting headers)
E0522 15:50:55.255596      25 kubelet_node_status.go:397] Error updating node status, will retry: error getting node "127.0.0.1": Get http://127.0.0.1:40941/api/v1/nodes/127.0.0.1?timeout=1s: net/http: request canceled (Client.Timeout exceeded while awaiting headers)
E0522 15:50:56.256706      25 kubelet_node_status.go:397] Error updating node status, will retry: error getting node "127.0.0.1": Get http://127.0.0.1:40941/api/v1/nodes/127.0.0.1?timeout=1s: net/http: request canceled (Client.Timeout exceeded while awaiting headers)
--- PASS: TestUpdateExistingNodeStatusTimeout (5.01s)
    kubelet_node_status_test.go:519: accepted connection
    kubelet_node_status_test.go:519: accepted connection
    kubelet_node_status_test.go:519: accepted connection
    kubelet_node_status_test.go:519: accepted connection
    kubelet_node_status_test.go:519: accepted connection
    kubelet_node_status_test.go:516: accept tcp 127.0.0.1:40941: use of closed network connection
=== RUN   TestUpdateNodeStatusWithRuntimeStateError
E0522 15:50:56.261420      25 eviction_manager.go:255] eviction manager: failed to get summary stats: failed to get root cgroup stats: failed to get cgroup stats for "/": unexpected number of containers: 0
I0522 15:50:56.262628      25 plugin_manager.go:114] Starting Kubelet Plugin Manager
I0522 15:50:56.277779      25 setters.go:555] Node became not ready: {Type:Ready Status:False LastHeartbeatTime:2020-05-22 15:50:26.27755459 +0000 UTC m=-24.569285281 LastTransitionTime:2020-05-22 15:50:26.27755459 +0000 UTC m=-24.569285281 Reason:KubeletNotReady Message:container runtime is down}
E0522 15:50:56.285267      25 kubelet.go:2097] Container runtime sanity check failed: injected runtime status error
E0522 15:50:56.292441      25 kubelet.go:2101] Container runtime status is nil
E0522 15:50:56.299486      25 kubelet.go:2110] Container runtime network not ready: <nil>
E0522 15:50:56.299597      25 kubelet.go:2121] Container runtime not ready: <nil>
E0522 15:50:56.306917      25 kubelet.go:2121] Container runtime not ready: RuntimeReady=false reason: message:
E0522 15:50:56.321415      25 kubelet.go:2110] Container runtime network not ready: NetworkReady=false reason: message:
I0522 15:50:56.321773      25 setters.go:555] Node became not ready: {Type:Ready Status:False LastHeartbeatTime:2020-05-22 15:50:56.285250476 +0000 UTC m=+5.438410584 LastTransitionTime:2020-05-22 15:50:56.285250476 +0000 UTC m=+5.438410584 Reason:KubeletNotReady Message:runtime network not ready: NetworkReady=false reason: message:}
--- PASS: TestUpdateNodeStatusWithRuntimeStateError (0.07s)
=== RUN   TestUpdateNodeStatusError
E0522 15:50:56.334967      25 kubelet_node_status.go:397] Error updating node status, will retry: error getting node "127.0.0.1": nodes "127.0.0.1" not found
E0522 15:50:56.335502      25 kubelet_node_status.go:397] Error updating node status, will retry: error getting node "127.0.0.1": nodes "127.0.0.1" not found
E0522 15:50:56.335873      25 kubelet_node_status.go:397] Error updating node status, will retry: error getting node "127.0.0.1": nodes "127.0.0.1" not found
E0522 15:50:56.336237      25 kubelet_node_status.go:397] Error updating node status, will retry: error getting node "127.0.0.1": nodes "127.0.0.1" not found
E0522 15:50:56.336628      25 kubelet_node_status.go:397] Error updating node status, will retry: error getting node "127.0.0.1": nodes "127.0.0.1" not found
--- PASS: TestUpdateNodeStatusError (0.00s)
=== RUN   TestUpdateNodeStatusWithLease
E0522 15:50:56.343171      25 eviction_manager.go:255] eviction manager: failed to get summary stats: failed to get root cgroup stats: failed to get cgroup stats for "/": unexpected number of containers: 0
I0522 15:50:56.343974      25 plugin_manager.go:114] Starting Kubelet Plugin Manager
I0522 15:50:56.368097      25 kubelet_network.go:77] Setting Pod CIDR:  -> 10.0.0.0/24,2000::/10
--- PASS: TestUpdateNodeStatusWithLease (0.03s)
=== RUN   TestUpdateNodeStatusAndVolumesInUseWithNodeLease
=== RUN   TestUpdateNodeStatusAndVolumesInUseWithNodeLease/no_volumes_and_no_update
=== RUN   TestUpdateNodeStatusAndVolumesInUseWithNodeLease/volumes_inuse_on_node_and_volumeManager
... skipping 13 lines ...
I0522 15:50:56.496780      25 kubelet_node_status.go:111] Node 127.0.0.1 was previously registered
I0522 15:50:56.498067      25 kubelet_node_status.go:111] Node 127.0.0.1 was previously registered
I0522 15:50:56.498122      25 kubelet_node_status.go:201] Controller attach-detach setting changed to false; updating existing Node
I0522 15:50:56.501545      25 kubelet_node_status.go:111] Node 127.0.0.1 was previously registered
I0522 15:50:56.501600      25 kubelet_node_status.go:204] Controller attach-detach setting changed to true; updating existing Node
E0522 15:50:56.504635      25 kubelet_node_status.go:91] Unable to register node "127.0.0.1" with API server: 
E0522 15:50:56.505646      25 kubelet_node_status.go:97] Unable to register node "127.0.0.1" with API server: error getting existing node: 
I0522 15:50:56.506602      25 kubelet_node_status.go:111] Node 127.0.0.1 was previously registered
I0522 15:50:56.506650      25 kubelet_node_status.go:201] Controller attach-detach setting changed to false; updating existing Node
E0522 15:50:56.507481      25 kubelet_node_status.go:121] Unable to reconcile node "127.0.0.1" with API server: error updating node: failed to patch status "{\"metadata\":{\"annotations\":null}}" for node "127.0.0.1": 
--- PASS: TestTryRegisterWithApiServer (0.02s)
    kubelet_node_status_test.go:1302: actions: 3: [{ActionImpl:{Namespace: Verb:create Resource:/v1, Resource=nodes Subresource:} Name: Object:&Node{ObjectMeta:{      0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[kubernetes.io/arch:amd64 kubernetes.io/hostname:127.0.0.1 kubernetes.io/os:linux] map[] [] []  []},Spec:NodeSpec{PodCIDR:,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[],},Status:NodeStatus{Capacity:ResourceList{},Allocatable:ResourceList{},Phase:,Conditions:[]NodeCondition{},Addresses:[]NodeAddress{},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:0,},},NodeInfo:NodeSystemInfo{MachineID:,SystemUUID:,BootID:,KernelVersion:,OSImage:,ContainerRuntimeVersion:,KubeletVersion:,KubeProxyVersion:,OperatingSystem:,Architecture:,},Images:[]ContainerImage{},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}} {ActionImpl:{Namespace: Verb:get Resource:/v1, Resource=nodes Subresource:} Name:127.0.0.1} {ActionImpl:{Namespace: Verb:patch Resource:/v1, Resource=nodes Subresource:status} Name:127.0.0.1 PatchType:application/strategic-merge-patch+json Patch:[123 34 109 101 116 97 100 97 116 97 34 58 123 34 97 110 110 111 116 97 116 105 111 110 115 34 58 110 117 108 108 125 125]}]
    kubelet_node_status_test.go:1302: actions: 3: [{ActionImpl:{Namespace: Verb:create Resource:/v1, Resource=nodes Subresource:} Name: Object:&Node{ObjectMeta:{      0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[kubernetes.io/arch:amd64 kubernetes.io/hostname:127.0.0.1 kubernetes.io/os:linux] map[volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  []},Spec:NodeSpec{PodCIDR:,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[],},Status:NodeStatus{Capacity:ResourceList{},Allocatable:ResourceList{},Phase:,Conditions:[]NodeCondition{},Addresses:[]NodeAddress{},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:0,},},NodeInfo:NodeSystemInfo{MachineID:,SystemUUID:,BootID:,KernelVersion:,OSImage:,ContainerRuntimeVersion:,KubeletVersion:,KubeProxyVersion:,OperatingSystem:,Architecture:,},Images:[]ContainerImage{},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}} {ActionImpl:{Namespace: Verb:get Resource:/v1, Resource=nodes Subresource:} Name:127.0.0.1} {ActionImpl:{Namespace: Verb:patch Resource:/v1, Resource=nodes Subresource:status} Name:127.0.0.1 PatchType:application/strategic-merge-patch+json Patch:[123 34 109 101 116 97 100 97 116 97 34 58 123 34 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 118 111 108 117 109 101 115 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 99 111 110 116 114 111 108 108 101 114 45 109 97 110 97 103 101 100 45 97 116 116 97 99 104 45 100 101 116 97 99 104 34 58 34 116 114 117 101 34 125 125 125]}]
=== RUN   TestUpdateNewNodeStatusTooLargeReservation
E0522 15:50:56.524372      25 eviction_manager.go:255] eviction manager: failed to get summary stats: failed to get root cgroup stats: failed to get cgroup stats for "/": unexpected number of containers: 0
I0522 15:50:56.525343      25 plugin_manager.go:114] Starting Kubelet Plugin Manager
--- PASS: TestUpdateNewNodeStatusTooLargeReservation (0.01s)
=== RUN   TestUpdateDefaultLabels
--- PASS: TestUpdateDefaultLabels (0.00s)
=== RUN   TestReconcileExtendedResource
I0522 15:50:56.538244      25 kubelet_node_status.go:136] Zero out resource test.com/resource1 capacity in existing node.
... skipping 143 lines ...
--- PASS: TestHandleHostNameConflicts (0.00s)
=== RUN   TestHandleNodeSelector
--- PASS: TestHandleNodeSelector (0.00s)
=== RUN   TestHandleMemExceeded
--- PASS: TestHandleMemExceeded (0.00s)
=== RUN   TestHandlePluginResources
W0522 15:50:56.832502      25 predicate.go:77] Failed to admit pod failedpod_foo(4) - Update plugin resources failed due to Allocation failed, which is unexpected.
--- PASS: TestHandlePluginResources (0.01s)
=== RUN   TestPurgingObsoleteStatusMapEntries
--- PASS: TestPurgingObsoleteStatusMapEntries (0.00s)
=== RUN   TestValidateContainerLogStatus
E0522 15:50:56.842632      25 runtime.go:202] invalid container ID: ""
E0522 15:50:56.842986      25 runtime.go:202] invalid container ID: ""
... skipping 2 lines ...
--- PASS: TestCreateMirrorPod (0.00s)
=== RUN   TestDeleteOutdatedMirrorPod
I0522 15:50:56.850148      25 kubelet.go:1569] Trying to delete pod foo_ns 11111111
W0522 15:50:56.850233      25 kubelet.go:1573] Deleted mirror pod "foo_ns(11111111)" because it is outdated
--- PASS: TestDeleteOutdatedMirrorPod (0.00s)
=== RUN   TestDeleteOrphanedMirrorPods
--- FAIL: TestDeleteOrphanedMirrorPods (0.00s)
    kubelet_test.go:1042: expected 0 creation and one deletion of "pod1_ns", got 0, 0
    kubelet_test.go:1042: expected 0 creation and one deletion of "pod2_ns", got 0, 0
=== RUN   TestGetContainerInfoForMirrorPods
--- PASS: TestGetContainerInfoForMirrorPods (0.00s)
=== RUN   TestNetworkErrorsWithoutHostNetwork
--- PASS: TestNetworkErrorsWithoutHostNetwork (0.00s)
... skipping 50 lines ...
    --- PASS: TestCleanupOrphanedPodDirs/pod-doesnot-exist-with-subpath-top (0.00s)
    --- PASS: TestCleanupOrphanedPodDirs/nothing-to-do (0.00s)
    --- PASS: TestCleanupOrphanedPodDirs/pods-dir-not-found (0.00s)
=== RUN   TestListVolumesForPod
I0522 15:50:56.924201      25 volume_manager.go:265] Starting Kubelet Volume Manager
I0522 15:50:56.924778      25 desired_state_of_world_populator.go:139] Desired state populator starts to run
E0522 15:50:56.928436      25 reflector.go:127] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.CSIDriver: unhandled watch: testing.WatchActionImpl{ActionImpl:testing.ActionImpl{Namespace:"", Verb:"watch", Resource:schema.GroupVersionResource{Group:"storage.k8s.io", Version:"v1", Resource:"csidrivers"}, Subresource:""}, WatchRestrictions:testing.WatchRestrictions{Labels:labels.internalSelector(nil), Fields:fields.andTerm{}, ResourceVersion:""}}
I0522 15:50:57.126274      25 reconciler.go:244] operationExecutor.AttachVolume started for volume "vol1" (UniqueName: "fake/fake-device1") pod "foo" (UID: "12345678") 
I0522 15:50:57.127213      25 reconciler.go:244] operationExecutor.AttachVolume started for volume "vol2" (UniqueName: "fake/fake-device2") pod "foo" (UID: "12345678") 
I0522 15:50:57.127276      25 reconciler.go:157] Reconciler: start to sync state
I0522 15:50:57.126478      25 operation_generator.go:361] AttachVolume.Attach succeeded for volume "vol1" (UniqueName: "fake/fake-device1") from node "127.0.0.1" 
I0522 15:50:57.128074      25 operation_generator.go:361] AttachVolume.Attach succeeded for volume "vol2" (UniqueName: "fake/fake-device2") from node "127.0.0.1" 
I0522 15:50:57.228743      25 operation_generator.go:558] MountVolume.WaitForAttach entering for volume "vol1" (UniqueName: "fake/fake-device1") pod "foo" (UID: "12345678") DevicePath "/dev/vdb-test"
... skipping 4 lines ...
I0522 15:50:57.230365      25 operation_generator.go:596] MountVolume.MountDevice succeeded for volume "vol1" (UniqueName: "fake/fake-device1") pod "foo" (UID: "12345678") device mount path ""
I0522 15:50:57.525356      25 volume_manager.go:276] Shutting down Kubelet Volume Manager
--- PASS: TestListVolumesForPod (0.60s)
=== RUN   TestPodVolumesExist
I0522 15:50:57.529152      25 volume_manager.go:265] Starting Kubelet Volume Manager
I0522 15:50:57.529494      25 desired_state_of_world_populator.go:139] Desired state populator starts to run
E0522 15:50:57.532414      25 reflector.go:127] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.CSIDriver: unhandled watch: testing.WatchActionImpl{ActionImpl:testing.ActionImpl{Namespace:"", Verb:"watch", Resource:schema.GroupVersionResource{Group:"storage.k8s.io", Version:"v1", Resource:"csidrivers"}, Subresource:""}, WatchRestrictions:testing.WatchRestrictions{Labels:labels.internalSelector(nil), Fields:fields.andTerm{}, ResourceVersion:""}}
I0522 15:50:57.731281      25 reconciler.go:244] operationExecutor.AttachVolume started for volume "vol3" (UniqueName: "fake/fake-device3") pod "pod3" (UID: "pod3uid") 
I0522 15:50:57.731409      25 operation_generator.go:361] AttachVolume.Attach succeeded for volume "vol3" (UniqueName: "fake/fake-device3") from node "127.0.0.1" 
I0522 15:50:57.731864      25 operation_generator.go:361] AttachVolume.Attach succeeded for volume "vol1" (UniqueName: "fake/fake-device1") from node "127.0.0.1" 
I0522 15:50:57.732102      25 reconciler.go:244] operationExecutor.AttachVolume started for volume "vol1" (UniqueName: "fake/fake-device1") pod "pod1" (UID: "pod1uid") 
I0522 15:50:57.732651      25 reconciler.go:244] operationExecutor.AttachVolume started for volume "vol2" (UniqueName: "fake/fake-device2") pod "pod2" (UID: "pod2uid") 
I0522 15:50:57.732715      25 reconciler.go:157] Reconciler: start to sync state
... skipping 9 lines ...
I0522 15:50:57.836586      25 operation_generator.go:596] MountVolume.MountDevice succeeded for volume "vol1" (UniqueName: "fake/fake-device1") pod "pod1" (UID: "pod1uid") device mount path ""
I0522 15:50:58.130319      25 volume_manager.go:276] Shutting down Kubelet Volume Manager
--- PASS: TestPodVolumesExist (0.60s)
=== RUN   TestVolumeAttachAndMountControllerDisabled
I0522 15:50:58.134290      25 volume_manager.go:265] Starting Kubelet Volume Manager
I0522 15:50:58.134439      25 desired_state_of_world_populator.go:139] Desired state populator starts to run
E0522 15:50:58.138544      25 reflector.go:127] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.CSIDriver: unhandled watch: testing.WatchActionImpl{ActionImpl:testing.ActionImpl{Namespace:"", Verb:"watch", Resource:schema.GroupVersionResource{Group:"storage.k8s.io", Version:"v1", Resource:"csidrivers"}, Subresource:""}, WatchRestrictions:testing.WatchRestrictions{Labels:labels.internalSelector(nil), Fields:fields.andTerm{}, ResourceVersion:""}}
I0522 15:50:58.236232      25 reconciler.go:244] operationExecutor.AttachVolume started for volume "vol1" (UniqueName: "fake/fake-device") pod "foo" (UID: "12345678") 
I0522 15:50:58.236377      25 reconciler.go:157] Reconciler: start to sync state
I0522 15:50:58.237063      25 operation_generator.go:361] AttachVolume.Attach succeeded for volume "vol1" (UniqueName: "fake/fake-device") from node "127.0.0.1" 
I0522 15:50:58.337350      25 operation_generator.go:558] MountVolume.WaitForAttach entering for volume "vol1" (UniqueName: "fake/fake-device") pod "foo" (UID: "12345678") DevicePath "/dev/vdb-test"
I0522 15:50:58.337490      25 operation_generator.go:567] MountVolume.WaitForAttach succeeded for volume "vol1" (UniqueName: "fake/fake-device") pod "foo" (UID: "12345678") DevicePath "/dev/sdb"
I0522 15:50:58.337584      25 operation_generator.go:596] MountVolume.MountDevice succeeded for volume "vol1" (UniqueName: "fake/fake-device") pod "foo" (UID: "12345678") device mount path ""
I0522 15:50:58.435072      25 volume_manager.go:276] Shutting down Kubelet Volume Manager
--- PASS: TestVolumeAttachAndMountControllerDisabled (0.30s)
=== RUN   TestVolumeUnmountAndDetachControllerDisabled
I0522 15:50:58.438927      25 volume_manager.go:265] Starting Kubelet Volume Manager
I0522 15:50:58.441289      25 desired_state_of_world_populator.go:139] Desired state populator starts to run
E0522 15:50:58.442549      25 reflector.go:127] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.CSIDriver: unhandled watch: testing.WatchActionImpl{ActionImpl:testing.ActionImpl{Namespace:"", Verb:"watch", Resource:schema.GroupVersionResource{Group:"storage.k8s.io", Version:"v1", Resource:"csidrivers"}, Subresource:""}, WatchRestrictions:testing.WatchRestrictions{Labels:labels.internalSelector(nil), Fields:fields.andTerm{}, ResourceVersion:""}}
I0522 15:50:58.642115      25 reconciler.go:244] operationExecutor.AttachVolume started for volume "vol1" (UniqueName: "fake/fake-device") pod "foo" (UID: "12345678") 
I0522 15:50:58.642257      25 reconciler.go:157] Reconciler: start to sync state
I0522 15:50:58.642933      25 operation_generator.go:361] AttachVolume.Attach succeeded for volume "vol1" (UniqueName: "fake/fake-device") from node "127.0.0.1" 
I0522 15:50:58.743565      25 operation_generator.go:558] MountVolume.WaitForAttach entering for volume "vol1" (UniqueName: "fake/fake-device") pod "foo" (UID: "12345678") DevicePath "/dev/vdb-test"
I0522 15:50:58.743700      25 operation_generator.go:567] MountVolume.WaitForAttach succeeded for volume "vol1" (UniqueName: "fake/fake-device") pod "foo" (UID: "12345678") DevicePath "/dev/sdb"
I0522 15:50:58.743785      25 operation_generator.go:596] MountVolume.MountDevice succeeded for volume "vol1" (UniqueName: "fake/fake-device") pod "foo" (UID: "12345678") device mount path ""
I0522 15:50:59.144791      25 reconciler.go:196] operationExecutor.UnmountVolume started for volume "vol1" (UniqueName: "fake/fake-device") pod "12345678" (UID: "12345678") 
I0522 15:50:59.145127      25 operation_generator.go:783] UnmountVolume.TearDown succeeded for volume "fake/fake-device" (OuterVolumeSpecName: "vol1") pod "12345678" (UID: "12345678"). InnerVolumeSpecName "vol1". PluginName "fake", VolumeGidValue ""
I0522 15:50:59.245511      25 reconciler.go:312] operationExecutor.UnmountDevice started for volume "vol1" (UniqueName: "fake/fake-device") on node "127.0.0.1" 
I0522 15:50:59.245628      25 operation_generator.go:870] UnmountDevice succeeded for volume "vol1" %!(EXTRA string=UnmountDevice succeeded for volume "vol1" (UniqueName: "fake/fake-device") on node "127.0.0.1" )
I0522 15:50:59.346158      25 reconciler.go:333] operationExecutor.DetachVolume started for volume "vol1" (UniqueName: "fake/fake-device") on node "127.0.0.1" 
I0522 15:50:59.346186      25 operation_generator.go:472] DetachVolume.Detach succeeded for volume "vol1" (UniqueName: "fake/fake-device") on node "127.0.0.1" 
E0522 15:50:59.368724      25 reflector.go:127] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.CSIDriver: unhandled watch: testing.WatchActionImpl{ActionImpl:testing.ActionImpl{Namespace:"", Verb:"watch", Resource:schema.GroupVersionResource{Group:"storage.k8s.io", Version:"v1", Resource:"csidrivers"}, Subresource:""}, WatchRestrictions:testing.WatchRestrictions{Labels:labels.internalSelector(nil), Fields:fields.andTerm{}, ResourceVersion:""}}
I0522 15:50:59.440686      25 volume_manager.go:276] Shutting down Kubelet Volume Manager
--- PASS: TestVolumeUnmountAndDetachControllerDisabled (1.01s)
=== RUN   TestVolumeAttachAndMountControllerEnabled
I0522 15:50:59.444376      25 volume_manager.go:265] Starting Kubelet Volume Manager
I0522 15:50:59.444650      25 desired_state_of_world_populator.go:139] Desired state populator starts to run
E0522 15:50:59.447726      25 reflector.go:127] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: no reaction implemented for {{ list storage.k8s.io/v1, Resource=csidrivers } storage.k8s.io/v1, Kind=CSIDriver  { }}
I0522 15:50:59.646375      25 operation_generator.go:1333] Controller attach succeeded for volume "vol1" (UniqueName: "fake/fake-device") pod "foo" (UID: "12345678") device path: "fake/path"
I0522 15:50:59.646714      25 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "vol1" (UniqueName: "fake/fake-device") pod "foo" (UID: "12345678") 
I0522 15:50:59.646922      25 reconciler.go:157] Reconciler: start to sync state
I0522 15:50:59.747885      25 operation_generator.go:558] MountVolume.WaitForAttach entering for volume "vol1" (UniqueName: "fake/fake-device") pod "foo" (UID: "12345678") DevicePath "fake/path"
I0522 15:50:59.748022      25 operation_generator.go:567] MountVolume.WaitForAttach succeeded for volume "vol1" (UniqueName: "fake/fake-device") pod "foo" (UID: "12345678") DevicePath "/dev/sdb"
I0522 15:50:59.748107      25 operation_generator.go:596] MountVolume.MountDevice succeeded for volume "vol1" (UniqueName: "fake/fake-device") pod "foo" (UID: "12345678") device mount path ""
I0522 15:51:00.045357      25 volume_manager.go:276] Shutting down Kubelet Volume Manager
--- PASS: TestVolumeAttachAndMountControllerEnabled (0.60s)
=== RUN   TestVolumeUnmountAndDetachControllerEnabled
I0522 15:51:00.048645      25 volume_manager.go:265] Starting Kubelet Volume Manager
I0522 15:51:00.048985      25 desired_state_of_world_populator.go:139] Desired state populator starts to run
E0522 15:51:00.052219      25 reflector.go:127] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: no reaction implemented for {{ list storage.k8s.io/v1, Resource=csidrivers } storage.k8s.io/v1, Kind=CSIDriver  { }}
I0522 15:51:00.250383      25 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "vol1" (UniqueName: "fake/fake-device") pod "foo" (UID: "12345678") 
I0522 15:51:00.250654      25 operation_generator.go:1333] Controller attach succeeded for volume "vol1" (UniqueName: "fake/fake-device") pod "foo" (UID: "12345678") device path: "fake/path"
I0522 15:51:00.251003      25 reconciler.go:157] Reconciler: start to sync state
I0522 15:51:00.352374      25 operation_generator.go:558] MountVolume.WaitForAttach entering for volume "vol1" (UniqueName: "fake/fake-device") pod "foo" (UID: "12345678") DevicePath "fake/path"
I0522 15:51:00.352964      25 operation_generator.go:567] MountVolume.WaitForAttach succeeded for volume "vol1" (UniqueName: "fake/fake-device") pod "foo" (UID: "12345678") DevicePath "/dev/sdb"
I0522 15:51:00.353219      25 operation_generator.go:596] MountVolume.MountDevice succeeded for volume "vol1" (UniqueName: "fake/fake-device") pod "foo" (UID: "12345678") device mount path ""
... skipping 26 lines ...
=== RUN   TestRunOnce
I0522 15:51:01.076385      25 runonce.go:88] Waiting for 1 pods
I0522 15:51:01.076612      25 runonce.go:123] pod "foo_new(12345678)" containers running
I0522 15:51:01.076971      25 runonce.go:102] started pod "foo_new(12345678)"
I0522 15:51:01.077070      25 runonce.go:108] 1 pods started
--- PASS: TestRunOnce (0.00s)
FAIL
================================================================================
==================== Test output for //pkg/kubelet:go_default_test:
=== RUN   TestActiveDeadlineHandler
--- PASS: TestActiveDeadlineHandler (0.00s)
=== RUN   TestKubeletDirs
--- PASS: TestKubeletDirs (0.00s)
=== RUN   TestGetIPTablesMark
--- PASS: TestGetIPTablesMark (0.00s)
=== RUN   TestUpdateNewNodeStatus
=== RUN   TestUpdateNewNodeStatus/5_image_limit
E0522 15:51:06.315199      25 eviction_manager.go:255] eviction manager: failed to get summary stats: failed to get root cgroup stats: failed to get cgroup stats for "/": unexpected number of containers: 0
I0522 15:51:06.318186      25 plugin_manager.go:114] Starting Kubelet Plugin Manager
=== RUN   TestUpdateNewNodeStatus/no_image_limit
E0522 15:51:06.338525      25 eviction_manager.go:255] eviction manager: failed to get summary stats: failed to get root cgroup stats: failed to get cgroup stats for "/": unexpected number of containers: 0
I0522 15:51:06.339243      25 plugin_manager.go:114] Starting Kubelet Plugin Manager
--- PASS: TestUpdateNewNodeStatus (0.04s)
    --- PASS: TestUpdateNewNodeStatus/5_image_limit (0.02s)
    --- PASS: TestUpdateNewNodeStatus/no_image_limit (0.02s)
=== RUN   TestUpdateExistingNodeStatus
E0522 15:51:06.354600      25 eviction_manager.go:255] eviction manager: failed to get summary stats: failed to get root cgroup stats: failed to get cgroup stats for "/": unexpected number of containers: 0
--- PASS: TestUpdateExistingNodeStatus (0.01s)
=== RUN   TestUpdateExistingNodeStatusTimeout
I0522 15:51:06.358218      25 plugin_manager.go:114] Starting Kubelet Plugin Manager
E0522 15:51:07.366096      25 kubelet_node_status.go:397] Error updating node status, will retry: error getting node "127.0.0.1": Get http://127.0.0.1:34657/api/v1/nodes/127.0.0.1?resourceVersion=0&timeout=1s: context deadline exceeded (Client.Timeout exceeded while awaiting headers)
E0522 15:51:08.367098      25 kubelet_node_status.go:397] Error updating node status, will retry: error getting node "127.0.0.1": Get http://127.0.0.1:34657/api/v1/nodes/127.0.0.1?timeout=1s: context deadline exceeded (Client.Timeout exceeded while awaiting headers)
E0522 15:51:09.368155      25 kubelet_node_status.go:397] Error updating node status, will retry: error getting node "127.0.0.1": Get http://127.0.0.1:34657/api/v1/nodes/127.0.0.1?timeout=1s: net/http: request canceled (Client.Timeout exceeded while awaiting headers)
E0522 15:51:10.369583      25 kubelet_node_status.go:397] Error updating node status, will retry: error getting node "127.0.0.1": Get http://127.0.0.1:34657/api/v1/nodes/127.0.0.1?timeout=1s: context deadline exceeded (Client.Timeout exceeded while awaiting headers)
E0522 15:51:11.370632      25 kubelet_node_status.go:397] Error updating node status, will retry: error getting node "127.0.0.1": Get http://127.0.0.1:34657/api/v1/nodes/127.0.0.1?timeout=1s: net/http: request canceled (Client.Timeout exceeded while awaiting headers)
--- PASS: TestUpdateExistingNodeStatusTimeout (5.01s)
    kubelet_node_status_test.go:519: accepted connection
    kubelet_node_status_test.go:519: accepted connection
    kubelet_node_status_test.go:519: accepted connection
    kubelet_node_status_test.go:519: accepted connection
    kubelet_node_status_test.go:519: accepted connection
    kubelet_node_status_test.go:516: accept tcp 127.0.0.1:34657: use of closed network connection
=== RUN   TestUpdateNodeStatusWithRuntimeStateError
E0522 15:51:11.374273      25 eviction_manager.go:255] eviction manager: failed to get summary stats: failed to get root cgroup stats: failed to get cgroup stats for "/": unexpected number of containers: 0
I0522 15:51:11.374977      25 plugin_manager.go:114] Starting Kubelet Plugin Manager
I0522 15:51:11.392611      25 setters.go:555] Node became not ready: {Type:Ready Status:False LastHeartbeatTime:2020-05-22 15:50:41.392387568 +0000 UTC m=-24.588166235 LastTransitionTime:2020-05-22 15:50:41.392387568 +0000 UTC m=-24.588166235 Reason:KubeletNotReady Message:container runtime is down}
E0522 15:51:11.400685      25 kubelet.go:2097] Container runtime sanity check failed: injected runtime status error
E0522 15:51:11.408557      25 kubelet.go:2101] Container runtime status is nil
E0522 15:51:11.416369      25 kubelet.go:2110] Container runtime network not ready: <nil>
E0522 15:51:11.416484      25 kubelet.go:2121] Container runtime not ready: <nil>
E0522 15:51:11.424419      25 kubelet.go:2121] Container runtime not ready: RuntimeReady=false reason: message:
E0522 15:51:11.439092      25 kubelet.go:2110] Container runtime network not ready: NetworkReady=false reason: message:
I0522 15:51:11.439717      25 setters.go:555] Node became not ready: {Type:Ready Status:False LastHeartbeatTime:2020-05-22 15:51:11.400669355 +0000 UTC m=+5.420115577 LastTransitionTime:2020-05-22 15:51:11.400669355 +0000 UTC m=+5.420115577 Reason:KubeletNotReady Message:runtime network not ready: NetworkReady=false reason: message:}
--- PASS: TestUpdateNodeStatusWithRuntimeStateError (0.08s)
=== RUN   TestUpdateNodeStatusError
E0522 15:51:11.456296      25 kubelet_node_status.go:397] Error updating node status, will retry: error getting node "127.0.0.1": nodes "127.0.0.1" not found
E0522 15:51:11.456712      25 kubelet_node_status.go:397] Error updating node status, will retry: error getting node "127.0.0.1": nodes "127.0.0.1" not found
E0522 15:51:11.457018      25 kubelet_node_status.go:397] Error updating node status, will retry: error getting node "127.0.0.1": nodes "127.0.0.1" not found
E0522 15:51:11.457296      25 kubelet_node_status.go:397] Error updating node status, will retry: error getting node "127.0.0.1": nodes "127.0.0.1" not found
E0522 15:51:11.457573      25 kubelet_node_status.go:397] Error updating node status, will retry: error getting node "127.0.0.1": nodes "127.0.0.1" not found
--- PASS: TestUpdateNodeStatusError (0.00s)
=== RUN   TestUpdateNodeStatusWithLease
E0522 15:51:11.460719      25 eviction_manager.go:255] eviction manager: failed to get summary stats: failed to get root cgroup stats: failed to get cgroup stats for "/": unexpected number of containers: 0
I0522 15:51:11.461421      25 plugin_manager.go:114] Starting Kubelet Plugin Manager
I0522 15:51:11.484683      25 kubelet_network.go:77] Setting Pod CIDR:  -> 10.0.0.0/24,2000::/10
--- PASS: TestUpdateNodeStatusWithLease (0.03s)
=== RUN   TestUpdateNodeStatusAndVolumesInUseWithNodeLease
=== RUN   TestUpdateNodeStatusAndVolumesInUseWithNodeLease/no_volumes_and_no_update
=== RUN   TestUpdateNodeStatusAndVolumesInUseWithNodeLease/volumes_inuse_on_node_and_volumeManager
... skipping 13 lines ...
I0522 15:51:11.609565      25 kubelet_node_status.go:111] Node 127.0.0.1 was previously registered
I0522 15:51:11.610744      25 kubelet_node_status.go:111] Node 127.0.0.1 was previously registered
I0522 15:51:11.610796      25 kubelet_node_status.go:201] Controller attach-detach setting changed to false; updating existing Node
I0522 15:51:11.614055      25 kubelet_node_status.go:111] Node 127.0.0.1 was previously registered
I0522 15:51:11.614102      25 kubelet_node_status.go:204] Controller attach-detach setting changed to true; updating existing Node
E0522 15:51:11.617308      25 kubelet_node_status.go:91] Unable to register node "127.0.0.1" with API server: 
E0522 15:51:11.619675      25 kubelet_node_status.go:97] Unable to register node "127.0.0.1" with API server: error getting existing node: 
I0522 15:51:11.621888      25 kubelet_node_status.go:111] Node 127.0.0.1 was previously registered
I0522 15:51:11.622248      25 kubelet_node_status.go:201] Controller attach-detach setting changed to false; updating existing Node
E0522 15:51:11.623623      25 kubelet_node_status.go:121] Unable to reconcile node "127.0.0.1" with API server: error updating node: failed to patch status "{\"metadata\":{\"annotations\":null}}" for node "127.0.0.1": 
--- PASS: TestTryRegisterWithApiServer (0.02s)
    kubelet_node_status_test.go:1302: actions: 3: [{ActionImpl:{Namespace: Verb:create Resource:/v1, Resource=nodes Subresource:} Name: Object:&Node{ObjectMeta:{      0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[kubernetes.io/arch:amd64 kubernetes.io/hostname:127.0.0.1 kubernetes.io/os:linux] map[] [] []  []},Spec:NodeSpec{PodCIDR:,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[],},Status:NodeStatus{Capacity:ResourceList{},Allocatable:ResourceList{},Phase:,Conditions:[]NodeCondition{},Addresses:[]NodeAddress{},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:0,},},NodeInfo:NodeSystemInfo{MachineID:,SystemUUID:,BootID:,KernelVersion:,OSImage:,ContainerRuntimeVersion:,KubeletVersion:,KubeProxyVersion:,OperatingSystem:,Architecture:,},Images:[]ContainerImage{},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}} {ActionImpl:{Namespace: Verb:get Resource:/v1, Resource=nodes Subresource:} Name:127.0.0.1} {ActionImpl:{Namespace: Verb:patch Resource:/v1, Resource=nodes Subresource:status} Name:127.0.0.1 PatchType:application/strategic-merge-patch+json Patch:[123 34 109 101 116 97 100 97 116 97 34 58 123 34 97 110 110 111 116 97 116 105 111 110 115 34 58 110 117 108 108 125 125]}]
    kubelet_node_status_test.go:1302: actions: 3: [{ActionImpl:{Namespace: Verb:create Resource:/v1, Resource=nodes Subresource:} Name: Object:&Node{ObjectMeta:{      0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[kubernetes.io/arch:amd64 kubernetes.io/hostname:127.0.0.1 kubernetes.io/os:linux] map[volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  []},Spec:NodeSpec{PodCIDR:,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[],},Status:NodeStatus{Capacity:ResourceList{},Allocatable:ResourceList{},Phase:,Conditions:[]NodeCondition{},Addresses:[]NodeAddress{},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:0,},},NodeInfo:NodeSystemInfo{MachineID:,SystemUUID:,BootID:,KernelVersion:,OSImage:,ContainerRuntimeVersion:,KubeletVersion:,KubeProxyVersion:,OperatingSystem:,Architecture:,},Images:[]ContainerImage{},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}} {ActionImpl:{Namespace: Verb:get Resource:/v1, Resource=nodes Subresource:} Name:127.0.0.1} {ActionImpl:{Namespace: Verb:patch Resource:/v1, Resource=nodes Subresource:status} Name:127.0.0.1 PatchType:application/strategic-merge-patch+json Patch:[123 34 109 101 116 97 100 97 116 97 34 58 123 34 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 118 111 108 117 109 101 115 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 99 111 110 116 114 111 108 108 101 114 45 109 97 110 97 103 101 100 45 97 116 116 97 99 104 45 100 101 116 97 99 104 34 58 34 116 114 117 101 34 125 125 125]}]
=== RUN   TestUpdateNewNodeStatusTooLargeReservation
E0522 15:51:11.633633      25 eviction_manager.go:255] eviction manager: failed to get summary stats: failed to get root cgroup stats: failed to get cgroup stats for "/": unexpected number of containers: 0
I0522 15:51:11.634586      25 plugin_manager.go:114] Starting Kubelet Plugin Manager
--- PASS: TestUpdateNewNodeStatusTooLargeReservation (0.01s)
=== RUN   TestUpdateDefaultLabels
--- PASS: TestUpdateDefaultLabels (0.00s)
=== RUN   TestReconcileExtendedResource
I0522 15:51:11.646712      25 kubelet_node_status.go:136] Zero out resource test.com/resource2 capacity in existing node.
... skipping 143 lines ...
--- PASS: TestHandleHostNameConflicts (0.00s)
=== RUN   TestHandleNodeSelector
--- PASS: TestHandleNodeSelector (0.00s)
=== RUN   TestHandleMemExceeded
--- PASS: TestHandleMemExceeded (0.00s)
=== RUN   TestHandlePluginResources
W0522 15:51:11.933549      25 predicate.go:77] Failed to admit pod failedpod_foo(4) - Update plugin resources failed due to Allocation failed, which is unexpected.
--- PASS: TestHandlePluginResources (0.01s)
=== RUN   TestPurgingObsoleteStatusMapEntries
--- PASS: TestPurgingObsoleteStatusMapEntries (0.00s)
=== RUN   TestValidateContainerLogStatus
E0522 15:51:11.940528      25 runtime.go:202] invalid container ID: ""
E0522 15:51:11.940639      25 runtime.go:202] invalid container ID: ""
... skipping 2 lines ...
--- PASS: TestCreateMirrorPod (0.00s)
=== RUN   TestDeleteOutdatedMirrorPod
I0522 15:51:11.947333      25 kubelet.go:1569] Trying to delete pod foo_ns 11111111
W0522 15:51:11.947412      25 kubelet.go:1573] Deleted mirror pod "foo_ns(11111111)" because it is outdated
--- PASS: TestDeleteOutdatedMirrorPod (0.00s)
=== RUN   TestDeleteOrphanedMirrorPods
--- FAIL: TestDeleteOrphanedMirrorPods (0.00s)
    kubelet_test.go:1042: expected 0 creation and one deletion of "pod1_ns", got 0, 0
    kubelet_test.go:1042: expected 0 creation and one deletion of "pod2_ns", got 0, 0
=== RUN   TestGetContainerInfoForMirrorPods
--- PASS: TestGetContainerInfoForMirrorPods (0.00s)
=== RUN   TestNetworkErrorsWithoutHostNetwork
--- PASS: TestNetworkErrorsWithoutHostNetwork (0.00s)
... skipping 50 lines ...
    --- PASS: TestCleanupOrphanedPodDirs/pods-dir-not-found (0.00s)
    --- PASS: TestCleanupOrphanedPodDirs/pod-doesnot-exist-novolume (0.00s)
    --- PASS: TestCleanupOrphanedPodDirs/pod-exists-with-volume (0.00s)
=== RUN   TestListVolumesForPod
I0522 15:51:12.020602      25 volume_manager.go:265] Starting Kubelet Volume Manager
I0522 15:51:12.020921      25 desired_state_of_world_populator.go:139] Desired state populator starts to run
E0522 15:51:12.024568      25 reflector.go:127] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.CSIDriver: unhandled watch: testing.WatchActionImpl{ActionImpl:testing.ActionImpl{Namespace:"", Verb:"watch", Resource:schema.GroupVersionResource{Group:"storage.k8s.io", Version:"v1", Resource:"csidrivers"}, Subresource:""}, WatchRestrictions:testing.WatchRestrictions{Labels:labels.internalSelector(nil), Fields:fields.andTerm{}, ResourceVersion:""}}
I0522 15:51:12.222675      25 reconciler.go:244] operationExecutor.AttachVolume started for volume "vol2" (UniqueName: "fake/fake-device2") pod "foo" (UID: "12345678") 
I0522 15:51:12.222915      25 operation_generator.go:361] AttachVolume.Attach succeeded for volume "vol2" (UniqueName: "fake/fake-device2") from node "127.0.0.1" 
I0522 15:51:12.222991      25 reconciler.go:244] operationExecutor.AttachVolume started for volume "vol1" (UniqueName: "fake/fake-device1") pod "foo" (UID: "12345678") 
I0522 15:51:12.223178      25 operation_generator.go:361] AttachVolume.Attach succeeded for volume "vol1" (UniqueName: "fake/fake-device1") from node "127.0.0.1" 
I0522 15:51:12.223299      25 reconciler.go:157] Reconciler: start to sync state
I0522 15:51:12.324383      25 operation_generator.go:558] MountVolume.WaitForAttach entering for volume "vol1" (UniqueName: "fake/fake-device1") pod "foo" (UID: "12345678") DevicePath "/dev/vdb-test"
... skipping 4 lines ...
I0522 15:51:12.324737      25 operation_generator.go:596] MountVolume.MountDevice succeeded for volume "vol2" (UniqueName: "fake/fake-device2") pod "foo" (UID: "12345678") device mount path ""
I0522 15:51:12.621447      25 volume_manager.go:276] Shutting down Kubelet Volume Manager
--- PASS: TestListVolumesForPod (0.60s)
=== RUN   TestPodVolumesExist
I0522 15:51:12.627000      25 volume_manager.go:265] Starting Kubelet Volume Manager
I0522 15:51:12.628468      25 desired_state_of_world_populator.go:139] Desired state populator starts to run
E0522 15:51:12.633588      25 reflector.go:127] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.CSIDriver: unhandled watch: testing.WatchActionImpl{ActionImpl:testing.ActionImpl{Namespace:"", Verb:"watch", Resource:schema.GroupVersionResource{Group:"storage.k8s.io", Version:"v1", Resource:"csidrivers"}, Subresource:""}, WatchRestrictions:testing.WatchRestrictions{Labels:labels.internalSelector(nil), Fields:fields.andTerm{}, ResourceVersion:""}}
I0522 15:51:12.830069      25 reconciler.go:244] operationExecutor.AttachVolume started for volume "vol3" (UniqueName: "fake/fake-device3") pod "pod3" (UID: "pod3uid") 
I0522 15:51:12.830706      25 operation_generator.go:361] AttachVolume.Attach succeeded for volume "vol3" (UniqueName: "fake/fake-device3") from node "127.0.0.1" 
I0522 15:51:12.831088      25 reconciler.go:244] operationExecutor.AttachVolume started for volume "vol1" (UniqueName: "fake/fake-device1") pod "pod1" (UID: "pod1uid") 
I0522 15:51:12.831533      25 operation_generator.go:361] AttachVolume.Attach succeeded for volume "vol1" (UniqueName: "fake/fake-device1") from node "127.0.0.1" 
I0522 15:51:12.831748      25 reconciler.go:244] operationExecutor.AttachVolume started for volume "vol2" (UniqueName: "fake/fake-device2") pod "pod2" (UID: "pod2uid") 
I0522 15:51:12.831917      25 reconciler.go:157] Reconciler: start to sync state
... skipping 9 lines ...
I0522 15:51:12.936229      25 operation_generator.go:596] MountVolume.MountDevice succeeded for volume "vol3" (UniqueName: "fake/fake-device3") pod "pod3" (UID: "pod3uid") device mount path ""
I0522 15:51:13.227756      25 volume_manager.go:276] Shutting down Kubelet Volume Manager
--- PASS: TestPodVolumesExist (0.61s)
=== RUN   TestVolumeAttachAndMountControllerDisabled
I0522 15:51:13.231377      25 volume_manager.go:265] Starting Kubelet Volume Manager
I0522 15:51:13.231497      25 desired_state_of_world_populator.go:139] Desired state populator starts to run
E0522 15:51:13.234651      25 reflector.go:127] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.CSIDriver: unhandled watch: testing.WatchActionImpl{ActionImpl:testing.ActionImpl{Namespace:"", Verb:"watch", Resource:schema.GroupVersionResource{Group:"storage.k8s.io", Version:"v1", Resource:"csidrivers"}, Subresource:""}, WatchRestrictions:testing.WatchRestrictions{Labels:labels.internalSelector(nil), Fields:fields.andTerm{}, ResourceVersion:""}}
I0522 15:51:13.433340      25 reconciler.go:244] operationExecutor.AttachVolume started for volume "vol1" (UniqueName: "fake/fake-device") pod "foo" (UID: "12345678") 
I0522 15:51:13.433829      25 reconciler.go:157] Reconciler: start to sync state
I0522 15:51:13.433861      25 operation_generator.go:361] AttachVolume.Attach succeeded for volume "vol1" (UniqueName: "fake/fake-device") from node "127.0.0.1" 
I0522 15:51:13.535085      25 operation_generator.go:558] MountVolume.WaitForAttach entering for volume "vol1" (UniqueName: "fake/fake-device") pod "foo" (UID: "12345678") DevicePath "/dev/vdb-test"
I0522 15:51:13.535438      25 operation_generator.go:567] MountVolume.WaitForAttach succeeded for volume "vol1" (UniqueName: "fake/fake-device") pod "foo" (UID: "12345678") DevicePath "/dev/sdb"
I0522 15:51:13.535637      25 operation_generator.go:596] MountVolume.MountDevice succeeded for volume "vol1" (UniqueName: "fake/fake-device") pod "foo" (UID: "12345678") device mount path ""
I0522 15:51:13.832550      25 volume_manager.go:276] Shutting down Kubelet Volume Manager
--- PASS: TestVolumeAttachAndMountControllerDisabled (0.60s)
=== RUN   TestVolumeUnmountAndDetachControllerDisabled
I0522 15:51:13.835787      25 volume_manager.go:265] Starting Kubelet Volume Manager
I0522 15:51:13.835791      25 desired_state_of_world_populator.go:139] Desired state populator starts to run
E0522 15:51:13.839121      25 reflector.go:127] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.CSIDriver: unhandled watch: testing.WatchActionImpl{ActionImpl:testing.ActionImpl{Namespace:"", Verb:"watch", Resource:schema.GroupVersionResource{Group:"storage.k8s.io", Version:"v1", Resource:"csidrivers"}, Subresource:""}, WatchRestrictions:testing.WatchRestrictions{Labels:labels.internalSelector(nil), Fields:fields.andTerm{}, ResourceVersion:""}}
I0522 15:51:14.039641      25 reconciler.go:244] operationExecutor.AttachVolume started for volume "vol1" (UniqueName: "fake/fake-device") pod "foo" (UID: "12345678") 
I0522 15:51:14.040125      25 reconciler.go:157] Reconciler: start to sync state
I0522 15:51:14.040093      25 operation_generator.go:361] AttachVolume.Attach succeeded for volume "vol1" (UniqueName: "fake/fake-device") from node "127.0.0.1" 
I0522 15:51:14.141380      25 operation_generator.go:558] MountVolume.WaitForAttach entering for volume "vol1" (UniqueName: "fake/fake-device") pod "foo" (UID: "12345678") DevicePath "/dev/vdb-test"
I0522 15:51:14.141547      25 operation_generator.go:567] MountVolume.WaitForAttach succeeded for volume "vol1" (UniqueName: "fake/fake-device") pod "foo" (UID: "12345678") DevicePath "/dev/sdb"
I0522 15:51:14.141723      25 operation_generator.go:596] MountVolume.MountDevice succeeded for volume "vol1" (UniqueName: "fake/fake-device") pod "foo" (UID: "12345678") device mount path ""
I0522 15:51:14.542819      25 reconciler.go:196] operationExecutor.UnmountVolume started for volume "vol1" (UniqueName: "fake/fake-device") pod "12345678" (UID: "12345678") 
I0522 15:51:14.543223      25 operation_generator.go:783] UnmountVolume.TearDown succeeded for volume "fake/fake-device" (OuterVolumeSpecName: "vol1") pod "12345678" (UID: "12345678"). InnerVolumeSpecName "vol1". PluginName "fake", VolumeGidValue ""
I0522 15:51:14.643740      25 reconciler.go:312] operationExecutor.UnmountDevice started for volume "vol1" (UniqueName: "fake/fake-device") on node "127.0.0.1" 
I0522 15:51:14.644061      25 operation_generator.go:870] UnmountDevice succeeded for volume "vol1" %!(EXTRA string=UnmountDevice succeeded for volume "vol1" (UniqueName: "fake/fake-device") on node "127.0.0.1" )
I0522 15:51:14.744522      25 reconciler.go:333] operationExecutor.DetachVolume started for volume "vol1" (UniqueName: "fake/fake-device") on node "127.0.0.1" 
I0522 15:51:14.744760      25 operation_generator.go:472] DetachVolume.Detach succeeded for volume "vol1" (UniqueName: "fake/fake-device") on node "127.0.0.1" 
E0522 15:51:14.765508      25 reflector.go:127] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.CSIDriver: unhandled watch: testing.WatchActionImpl{ActionImpl:testing.ActionImpl{Namespace:"", Verb:"watch", Resource:schema.GroupVersionResource{Group:"storage.k8s.io", Version:"v1", Resource:"csidrivers"}, Subresource:""}, WatchRestrictions:testing.WatchRestrictions{Labels:labels.internalSelector(nil), Fields:fields.andTerm{}, ResourceVersion:""}}
I0522 15:51:14.837754      25 volume_manager.go:276] Shutting down Kubelet Volume Manager
--- PASS: TestVolumeUnmountAndDetachControllerDisabled (1.00s)
=== RUN   TestVolumeAttachAndMountControllerEnabled
I0522 15:51:14.840836      25 volume_manager.go:265] Starting Kubelet Volume Manager
I0522 15:51:14.840951      25 desired_state_of_world_populator.go:139] Desired state populator starts to run
E0522 15:51:14.843766      25 reflector.go:127] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: no reaction implemented for {{ list storage.k8s.io/v1, Resource=csidrivers } storage.k8s.io/v1, Kind=CSIDriver  { }}
I0522 15:51:15.042333      25 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "vol1" (UniqueName: "fake/fake-device") pod "foo" (UID: "12345678") 
I0522 15:51:15.042864      25 reconciler.go:157] Reconciler: start to sync state
I0522 15:51:15.042880      25 operation_generator.go:1333] Controller attach succeeded for volume "vol1" (UniqueName: "fake/fake-device") pod "foo" (UID: "12345678") device path: "fake/path"
I0522 15:51:15.144268      25 operation_generator.go:558] MountVolume.WaitForAttach entering for volume "vol1" (UniqueName: "fake/fake-device") pod "foo" (UID: "12345678") DevicePath "fake/path"
I0522 15:51:15.144598      25 operation_generator.go:567] MountVolume.WaitForAttach succeeded for volume "vol1" (UniqueName: "fake/fake-device") pod "foo" (UID: "12345678") DevicePath "/dev/sdb"
I0522 15:51:15.144899      25 operation_generator.go:596] MountVolume.MountDevice succeeded for volume "vol1" (UniqueName: "fake/fake-device") pod "foo" (UID: "12345678") device mount path ""
I0522 15:51:15.442021      25 volume_manager.go:276] Shutting down Kubelet Volume Manager
--- PASS: TestVolumeAttachAndMountControllerEnabled (0.60s)
=== RUN   TestVolumeUnmountAndDetachControllerEnabled
I0522 15:51:15.445443      25 volume_manager.go:265] Starting Kubelet Volume Manager
I0522 15:51:15.446057      25 desired_state_of_world_populator.go:139] Desired state populator starts to run
E0522 15:51:15.448815      25 reflector.go:127] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: no reaction implemented for {{ list storage.k8s.io/v1, Resource=csidrivers } storage.k8s.io/v1, Kind=CSIDriver  { }}
I0522 15:51:15.647726      25 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "vol1" (UniqueName: "fake/fake-device") pod "foo" (UID: "12345678") 
I0522 15:51:15.647852      25 operation_generator.go:1333] Controller attach succeeded for volume "vol1" (UniqueName: "fake/fake-device") pod "foo" (UID: "12345678") device path: "fake/path"
I0522 15:51:15.648159      25 reconciler.go:157] Reconciler: start to sync state
I0522 15:51:15.749670      25 operation_generator.go:558] MountVolume.WaitForAttach entering for volume "vol1" (UniqueName: "fake/fake-device") pod "foo" (UID: "12345678") DevicePath "fake/path"
I0522 15:51:15.750154      25 operation_generator.go:567] MountVolume.WaitForAttach succeeded for volume "vol1" (UniqueName: "fake/fake-device") pod "foo" (UID: "12345678") DevicePath "/dev/sdb"
I0522 15:51:15.750420      25 operation_generator.go:596] MountVolume.MountDevice succeeded for volume "vol1" (UniqueName: "fake/fake-device") pod "foo" (UID: "12345678") device mount path ""
... skipping 9 lines ...
=== RUN   TestGetContainersToDeleteInPod
W0522 15:51:16.301721      25 pod_container_deletor.go:79] Container "abc" not found in pod's containers
--- PASS: TestGetContainersToDeleteInPod (0.00s)
=== RUN   TestGetContainersToDeleteInPodWithNoMatch
--- PASS: TestGetContainersToDeleteInPodWithNoMatch (0.00s)
=== RUN   TestUpdatePod
E0522 15:51:16.316378      25 eviction_manager.go:255] eviction manager: failed to get summary stats: failed to get root cgroup stats: failed to get cgroup stats for "/": unexpected number of containers: 0
E0522 15:51:16.339190      25 eviction_manager.go:255] eviction manager: failed to get summary stats: failed to get root cgroup stats: failed to get cgroup stats for "/": unexpected number of containers: 0
E0522 15:51:16.355200      25 eviction_manager.go:255] eviction manager: failed to get summary stats: failed to get root cgroup stats: failed to get cgroup stats for "/": unexpected number of containers: 0
--- PASS: TestUpdatePod (0.06s)
=== RUN   TestUpdatePodDoesNotForgetSyncPodKill
--- PASS: TestUpdatePodDoesNotForgetSyncPodKill (0.06s)
=== RUN   TestForgetNonExistingPodWorkers
--- PASS: TestForgetNonExistingPodWorkers (0.05s)
=== RUN   TestFakePodWorkers
... skipping 5 lines ...
=== RUN   TestRunOnce
I0522 15:51:16.478617      25 runonce.go:88] Waiting for 1 pods
I0522 15:51:16.478731      25 runonce.go:123] pod "foo_new(12345678)" containers running
I0522 15:51:16.479128      25 runonce.go:102] started pod "foo_new(12345678)"
I0522 15:51:16.479214      25 runonce.go:108] 1 pods started
--- PASS: TestRunOnce (0.00s)
FAIL
================================================================================
==================== Test output for //pkg/kubelet:go_default_test:
=== RUN   TestActiveDeadlineHandler
--- PASS: TestActiveDeadlineHandler (0.00s)
=== RUN   TestKubeletDirs
--- PASS: TestKubeletDirs (0.00s)
=== RUN   TestGetIPTablesMark
--- PASS: TestGetIPTablesMark (0.00s)
=== RUN   TestUpdateNewNodeStatus
=== RUN   TestUpdateNewNodeStatus/5_image_limit
E0522 15:51:22.582170      25 eviction_manager.go:255] eviction manager: failed to get summary stats: failed to get root cgroup stats: failed to get cgroup stats for "/": unexpected number of containers: 0
I0522 15:51:22.583108      25 plugin_manager.go:114] Starting Kubelet Plugin Manager
=== RUN   TestUpdateNewNodeStatus/no_image_limit
I0522 15:51:22.602666      25 plugin_manager.go:114] Starting Kubelet Plugin Manager
E0522 15:51:22.603310      25 eviction_manager.go:255] eviction manager: failed to get summary stats: failed to get root cgroup stats: failed to get cgroup stats for "/": unexpected number of containers: 0
--- PASS: TestUpdateNewNodeStatus (0.03s)
    --- PASS: TestUpdateNewNodeStatus/5_image_limit (0.01s)
    --- PASS: TestUpdateNewNodeStatus/no_image_limit (0.01s)
=== RUN   TestUpdateExistingNodeStatus
E0522 15:51:22.610989      25 eviction_manager.go:255] eviction manager: failed to get summary stats: failed to get root cgroup stats: failed to get cgroup stats for "/": unexpected number of containers: 0
I0522 15:51:22.612058      25 plugin_manager.go:114] Starting Kubelet Plugin Manager
--- PASS: TestUpdateExistingNodeStatus (0.01s)
=== RUN   TestUpdateExistingNodeStatusTimeout
E0522 15:51:23.630205      25 kubelet_node_status.go:397] Error updating node status, will retry: error getting node "127.0.0.1": Get http://127.0.0.1:46647/api/v1/nodes/127.0.0.1?resourceVersion=0&timeout=1s: context deadline exceeded (Client.Timeout exceeded while awaiting headers)
E0522 15:51:24.631378      25 kubelet_node_status.go:397] Error updating node status, will retry: error getting node "127.0.0.1": Get http://127.0.0.1:46647/api/v1/nodes/127.0.0.1?timeout=1s: context deadline exceeded (Client.Timeout exceeded while awaiting headers)
E0522 15:51:25.632534      25 kubelet_node_status.go:397] Error updating node status, will retry: error getting node "127.0.0.1": Get http://127.0.0.1:46647/api/v1/nodes/127.0.0.1?timeout=1s: context deadline exceeded (Client.Timeout exceeded while awaiting headers)
E0522 15:51:26.633789      25 kubelet_node_status.go:397] Error updating node status, will retry: error getting node "127.0.0.1": Get http://127.0.0.1:46647/api/v1/nodes/127.0.0.1?timeout=1s: context deadline exceeded (Client.Timeout exceeded while awaiting headers)
E0522 15:51:27.634749      25 kubelet_node_status.go:397] Error updating node status, will retry: error getting node "127.0.0.1": Get http://127.0.0.1:46647/api/v1/nodes/127.0.0.1?timeout=1s: context deadline exceeded (Client.Timeout exceeded while awaiting headers)
--- PASS: TestUpdateExistingNodeStatusTimeout (5.01s)
    kubelet_node_status_test.go:519: accepted connection
    kubelet_node_status_test.go:519: accepted connection
    kubelet_node_status_test.go:519: accepted connection
    kubelet_node_status_test.go:519: accepted connection
    kubelet_node_status_test.go:519: accepted connection
    kubelet_node_status_test.go:516: accept tcp 127.0.0.1:46647: use of closed network connection
=== RUN   TestUpdateNodeStatusWithRuntimeStateError
E0522 15:51:27.638172      25 eviction_manager.go:255] eviction manager: failed to get summary stats: failed to get root cgroup stats: failed to get cgroup stats for "/": unexpected number of containers: 0
I0522 15:51:27.639099      25 plugin_manager.go:114] Starting Kubelet Plugin Manager
I0522 15:51:27.654063      25 setters.go:555] Node became not ready: {Type:Ready Status:False LastHeartbeatTime:2020-05-22 15:50:57.653892989 +0000 UTC m=-24.604555141 LastTransitionTime:2020-05-22 15:50:57.653892989 +0000 UTC m=-24.604555141 Reason:KubeletNotReady Message:container runtime is down}
E0522 15:51:27.667033      25 kubelet.go:2097] Container runtime sanity check failed: injected runtime status error
E0522 15:51:27.673601      25 kubelet.go:2101] Container runtime status is nil
E0522 15:51:27.679748      25 kubelet.go:2110] Container runtime network not ready: <nil>
E0522 15:51:27.679862      25 kubelet.go:2121] Container runtime not ready: <nil>
E0522 15:51:27.686192      25 kubelet.go:2121] Container runtime not ready: RuntimeReady=false reason: message:
E0522 15:51:27.698727      25 kubelet.go:2110] Container runtime network not ready: NetworkReady=false reason: message:
I0522 15:51:27.699092      25 setters.go:555] Node became not ready: {Type:Ready Status:False LastHeartbeatTime:2020-05-22 15:51:27.667016766 +0000 UTC m=+5.408568647 LastTransitionTime:2020-05-22 15:51:27.667016766 +0000 UTC m=+5.408568647 Reason:KubeletNotReady Message:runtime network not ready: NetworkReady=false reason: message:}
--- PASS: TestUpdateNodeStatusWithRuntimeStateError (0.07s)
=== RUN   TestUpdateNodeStatusError
E0522 15:51:27.707180      25 kubelet_node_status.go:397] Error updating node status, will retry: error getting node "127.0.0.1": nodes "127.0.0.1" not found
E0522 15:51:27.707266      25 kubelet_node_status.go:397] Error updating node status, will retry: error getting node "127.0.0.1": nodes "127.0.0.1" not found
E0522 15:51:27.707403      25 kubelet_node_status.go:397] Error updating node status, will retry: error getting node "127.0.0.1": nodes "127.0.0.1" not found
E0522 15:51:27.707512      25 kubelet_node_status.go:397] Error updating node status, will retry: error getting node "127.0.0.1": nodes "127.0.0.1" not found
E0522 15:51:27.707637      25 kubelet_node_status.go:397] Error updating node status, will retry: error getting node "127.0.0.1": nodes "127.0.0.1" not found
--- PASS: TestUpdateNodeStatusError (0.00s)
=== RUN   TestUpdateNodeStatusWithLease
E0522 15:51:27.715436      25 eviction_manager.go:255] eviction manager: failed to get summary stats: failed to get root cgroup stats: failed to get cgroup stats for "/": unexpected number of containers: 0
I0522 15:51:27.716033      25 plugin_manager.go:114] Starting Kubelet Plugin Manager
I0522 15:51:27.736732      25 kubelet_network.go:77] Setting Pod CIDR:  -> 10.0.0.0/24,2000::/10
--- PASS: TestUpdateNodeStatusWithLease (0.03s)
=== RUN   TestUpdateNodeStatusAndVolumesInUseWithNodeLease
=== RUN   TestUpdateNodeStatusAndVolumesInUseWithNodeLease/no_volumes_and_no_update
=== RUN   TestUpdateNodeStatusAndVolumesInUseWithNodeLease/volumes_inuse_on_node_and_volumeManager
... skipping 13 lines ...
I0522 15:51:27.865799      25 kubelet_node_status.go:111] Node 127.0.0.1 was previously registered
I0522 15:51:27.867440      25 kubelet_node_status.go:111] Node 127.0.0.1 was previously registered
I0522 15:51:27.867648      25 kubelet_node_status.go:201] Controller attach-detach setting changed to false; updating existing Node
I0522 15:51:27.873031      25 kubelet_node_status.go:111] Node 127.0.0.1 was previously registered
I0522 15:51:27.873265      25 kubelet_node_status.go:204] Controller attach-detach setting changed to true; updating existing Node
E0522 15:51:27.876690      25 kubelet_node_status.go:91] Unable to register node "127.0.0.1" with API server: 
E0522 15:51:27.877906      25 kubelet_node_status.go:97] Unable to register node "127.0.0.1" with API server: error getting existing node: 
I0522 15:51:27.878928      25 kubelet_node_status.go:111] Node 127.0.0.1 was previously registered
I0522 15:51:27.878972      25 kubelet_node_status.go:201] Controller attach-detach setting changed to false; updating existing Node
E0522 15:51:27.879801      25 kubelet_node_status.go:121] Unable to reconcile node "127.0.0.1" with API server: error updating node: failed to patch status "{\"metadata\":{\"annotations\":null}}" for node "127.0.0.1": 
--- PASS: TestTryRegisterWithApiServer (0.02s)
    kubelet_node_status_test.go:1302: actions: 3: [{ActionImpl:{Namespace: Verb:create Resource:/v1, Resource=nodes Subresource:} Name: Object:&Node{ObjectMeta:{      0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[kubernetes.io/arch:amd64 kubernetes.io/hostname:127.0.0.1 kubernetes.io/os:linux] map[] [] []  []},Spec:NodeSpec{PodCIDR:,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[],},Status:NodeStatus{Capacity:ResourceList{},Allocatable:ResourceList{},Phase:,Conditions:[]NodeCondition{},Addresses:[]NodeAddress{},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:0,},},NodeInfo:NodeSystemInfo{MachineID:,SystemUUID:,BootID:,KernelVersion:,OSImage:,ContainerRuntimeVersion:,KubeletVersion:,KubeProxyVersion:,OperatingSystem:,Architecture:,},Images:[]ContainerImage{},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}} {ActionImpl:{Namespace: Verb:get Resource:/v1, Resource=nodes Subresource:} Name:127.0.0.1} {ActionImpl:{Namespace: Verb:patch Resource:/v1, Resource=nodes Subresource:status} Name:127.0.0.1 PatchType:application/strategic-merge-patch+json Patch:[123 34 109 101 116 97 100 97 116 97 34 58 123 34 97 110 110 111 116 97 116 105 111 110 115 34 58 110 117 108 108 125 125]}]
    kubelet_node_status_test.go:1302: actions: 3: [{ActionImpl:{Namespace: Verb:create Resource:/v1, Resource=nodes Subresource:} Name: Object:&Node{ObjectMeta:{      0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[kubernetes.io/arch:amd64 kubernetes.io/hostname:127.0.0.1 kubernetes.io/os:linux] map[volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  []},Spec:NodeSpec{PodCIDR:,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[],},Status:NodeStatus{Capacity:ResourceList{},Allocatable:ResourceList{},Phase:,Conditions:[]NodeCondition{},Addresses:[]NodeAddress{},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:0,},},NodeInfo:NodeSystemInfo{MachineID:,SystemUUID:,BootID:,KernelVersion:,OSImage:,ContainerRuntimeVersion:,KubeletVersion:,KubeProxyVersion:,OperatingSystem:,Architecture:,},Images:[]ContainerImage{},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}} {ActionImpl:{Namespace: Verb:get Resource:/v1, Resource=nodes Subresource:} Name:127.0.0.1} {ActionImpl:{Namespace: Verb:patch Resource:/v1, Resource=nodes Subresource:status} Name:127.0.0.1 PatchType:application/strategic-merge-patch+json Patch:[123 34 109 101 116 97 100 97 116 97 34 58 123 34 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 118 111 108 117 109 101 115 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 99 111 110 116 114 111 108 108 101 114 45 109 97 110 97 103 101 100 45 97 116 116 97 99 104 45 100 101 116 97 99 104 34 58 34 116 114 117 101 34 125 125 125]}]
=== RUN   TestUpdateNewNodeStatusTooLargeReservation
E0522 15:51:27.884884      25 eviction_manager.go:255] eviction manager: failed to get summary stats: failed to get root cgroup stats: failed to get cgroup stats for "/": unexpected number of containers: 0
I0522 15:51:27.885807      25 plugin_manager.go:114] Starting Kubelet Plugin Manager
--- PASS: TestUpdateNewNodeStatusTooLargeReservation (0.01s)
=== RUN   TestUpdateDefaultLabels
--- PASS: TestUpdateDefaultLabels (0.00s)
=== RUN   TestReconcileExtendedResource
I0522 15:51:27.896806      25 kubelet_node_status.go:136] Zero out resource test.com/resource1 capacity in existing node.
... skipping 143 lines ...
--- PASS: TestHandleHostNameConflicts (0.00s)
=== RUN   TestHandleNodeSelector
--- PASS: TestHandleNodeSelector (0.00s)
=== RUN   TestHandleMemExceeded
--- PASS: TestHandleMemExceeded (0.00s)
=== RUN   TestHandlePluginResources
W0522 15:51:28.188745      25 predicate.go:77] Failed to admit pod failedpod_foo(4) - Update plugin resources failed due to Allocation failed, which is unexpected.
--- PASS: TestHandlePluginResources (0.01s)
=== RUN   TestPurgingObsoleteStatusMapEntries
--- PASS: TestPurgingObsoleteStatusMapEntries (0.00s)
=== RUN   TestValidateContainerLogStatus
E0522 15:51:28.201758      25 runtime.go:202] invalid container ID: ""
E0522 15:51:28.201976      25 runtime.go:202] invalid container ID: ""
... skipping 2 lines ...
--- PASS: TestCreateMirrorPod (0.00s)
=== RUN   TestDeleteOutdatedMirrorPod
I0522 15:51:28.207988      25 kubelet.go:1569] Trying to delete pod foo_ns 11111111
W0522 15:51:28.208098      25 kubelet.go:1573] Deleted mirror pod "foo_ns(11111111)" because it is outdated
--- PASS: TestDeleteOutdatedMirrorPod (0.00s)
=== RUN   TestDeleteOrphanedMirrorPods
--- FAIL: TestDeleteOrphanedMirrorPods (0.00s)
    kubelet_test.go:1042: expected 0 creation and one deletion of "pod1_ns", got 0, 0
    kubelet_test.go:1042: expected 0 creation and one deletion of "pod2_ns", got 0, 0
=== RUN   TestGetContainerInfoForMirrorPods
--- PASS: TestGetContainerInfoForMirrorPods (0.00s)
=== RUN   TestNetworkErrorsWithoutHostNetwork
--- PASS: TestNetworkErrorsWithoutHostNetwork (0.00s)
... skipping 50 lines ...
    --- PASS: TestCleanupOrphanedPodDirs/pod-exists-with-volume (0.00s)
    --- PASS: TestCleanupOrphanedPodDirs/pod-doesnot-exist-with-volume (0.00s)
    --- PASS: TestCleanupOrphanedPodDirs/pod-doesnot-exist-with-subpath (0.00s)
=== RUN   TestListVolumesForPod
I0522 15:51:28.276986      25 volume_manager.go:265] Starting Kubelet Volume Manager
I0522 15:51:28.277717      25 desired_state_of_world_populator.go:139] Desired state populator starts to run
E0522 15:51:28.282597      25 reflector.go:127] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.CSIDriver: unhandled watch: testing.WatchActionImpl{ActionImpl:testing.ActionImpl{Namespace:"", Verb:"watch", Resource:schema.GroupVersionResource{Group:"storage.k8s.io", Version:"v1", Resource:"csidrivers"}, Subresource:""}, WatchRestrictions:testing.WatchRestrictions{Labels:labels.internalSelector(nil), Fields:fields.andTerm{}, ResourceVersion:""}}
I0522 15:51:28.478623      25 reconciler.go:244] operationExecutor.AttachVolume started for volume "vol1" (UniqueName: "fake/fake-device1") pod "foo" (UID: "12345678") 
I0522 15:51:28.478883      25 operation_generator.go:361] AttachVolume.Attach succeeded for volume "vol1" (UniqueName: "fake/fake-device1") from node "127.0.0.1" 
I0522 15:51:28.479836      25 reconciler.go:244] operationExecutor.AttachVolume started for volume "vol2" (UniqueName: "fake/fake-device2") pod "foo" (UID: "12345678") 
I0522 15:51:28.480090      25 reconciler.go:157] Reconciler: start to sync state
I0522 15:51:28.480489      25 operation_generator.go:361] AttachVolume.Attach succeeded for volume "vol2" (UniqueName: "fake/fake-device2") from node "127.0.0.1" 
I0522 15:51:28.581245      25 operation_generator.go:558] MountVolume.WaitForAttach entering for volume "vol1" (UniqueName: "fake/fake-device1") pod "foo" (UID: "12345678") DevicePath "/dev/vdb-test"
... skipping 4 lines ...
I0522 15:51:28.582799      25 operation_generator.go:596] MountVolume.MountDevice succeeded for volume "vol2" (UniqueName: "fake/fake-device2") pod "foo" (UID: "12345678") device mount path ""
I0522 15:51:28.878419      25 volume_manager.go:276] Shutting down Kubelet Volume Manager
--- PASS: TestListVolumesForPod (0.61s)
=== RUN   TestPodVolumesExist
I0522 15:51:28.881471      25 volume_manager.go:265] Starting Kubelet Volume Manager
I0522 15:51:28.881559      25 desired_state_of_world_populator.go:139] Desired state populator starts to run
E0522 15:51:28.884788      25 reflector.go:127] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.CSIDriver: unhandled watch: testing.WatchActionImpl{ActionImpl:testing.ActionImpl{Namespace:"", Verb:"watch", Resource:schema.GroupVersionResource{Group:"storage.k8s.io", Version:"v1", Resource:"csidrivers"}, Subresource:""}, WatchRestrictions:testing.WatchRestrictions{Labels:labels.internalSelector(nil), Fields:fields.andTerm{}, ResourceVersion:""}}
I0522 15:51:29.083047      25 reconciler.go:244] operationExecutor.AttachVolume started for volume "vol1" (UniqueName: "fake/fake-device1") pod "pod1" (UID: "pod1uid") 
I0522 15:51:29.083821      25 reconciler.go:244] operationExecutor.AttachVolume started for volume "vol2" (UniqueName: "fake/fake-device2") pod "pod2" (UID: "pod2uid") 
I0522 15:51:29.083293      25 operation_generator.go:361] AttachVolume.Attach succeeded for volume "vol1" (UniqueName: "fake/fake-device1") from node "127.0.0.1" 
I0522 15:51:29.084585      25 operation_generator.go:361] AttachVolume.Attach succeeded for volume "vol2" (UniqueName: "fake/fake-device2") from node "127.0.0.1" 
I0522 15:51:29.084597      25 reconciler.go:244] operationExecutor.AttachVolume started for volume "vol3" (UniqueName: "fake/fake-device3") pod "pod3" (UID: "pod3uid") 
I0522 15:51:29.085179      25 operation_generator.go:361] AttachVolume.Attach succeeded for volume "vol3" (UniqueName: "fake/fake-device3") from node "127.0.0.1" 
... skipping 9 lines ...
I0522 15:51:29.189281      25 operation_generator.go:596] MountVolume.MountDevice succeeded for volume "vol2" (UniqueName: "fake/fake-device2") pod "pod2" (UID: "pod2uid") device mount path ""
I0522 15:51:29.482110      25 volume_manager.go:276] Shutting down Kubelet Volume Manager
--- PASS: TestPodVolumesExist (0.60s)
=== RUN   TestVolumeAttachAndMountControllerDisabled
I0522 15:51:29.486067      25 volume_manager.go:265] Starting Kubelet Volume Manager
I0522 15:51:29.486393      25 desired_state_of_world_populator.go:139] Desired state populator starts to run
E0522 15:51:29.488870      25 reflector.go:127] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.CSIDriver: unhandled watch: testing.WatchActionImpl{ActionImpl:testing.ActionImpl{Namespace:"", Verb:"watch", Resource:schema.GroupVersionResource{Group:"storage.k8s.io", Version:"v1", Resource:"csidrivers"}, Subresource:""}, WatchRestrictions:testing.WatchRestrictions{Labels:labels.internalSelector(nil), Fields:fields.andTerm{}, ResourceVersion:""}}
I0522 15:51:29.687771      25 reconciler.go:244] operationExecutor.AttachVolume started for volume "vol1" (UniqueName: "fake/fake-device") pod "foo" (UID: "12345678") 
I0522 15:51:29.688235      25 reconciler.go:157] Reconciler: start to sync state
I0522 15:51:29.688235      25 operation_generator.go:361] AttachVolume.Attach succeeded for volume "vol1" (UniqueName: "fake/fake-device") from node "127.0.0.1" 
I0522 15:51:29.789808      25 operation_generator.go:558] MountVolume.WaitForAttach entering for volume "vol1" (UniqueName: "fake/fake-device") pod "foo" (UID: "12345678") DevicePath "/dev/vdb-test"
I0522 15:51:29.790029      25 operation_generator.go:567] MountVolume.WaitForAttach succeeded for volume "vol1" (UniqueName: "fake/fake-device") pod "foo" (UID: "12345678") DevicePath "/dev/sdb"
I0522 15:51:29.790348      25 operation_generator.go:596] MountVolume.MountDevice succeeded for volume "vol1" (UniqueName: "fake/fake-device") pod "foo" (UID: "12345678") device mount path ""
I0522 15:51:30.086970      25 volume_manager.go:276] Shutting down Kubelet Volume Manager
--- PASS: TestVolumeAttachAndMountControllerDisabled (0.60s)
=== RUN   TestVolumeUnmountAndDetachControllerDisabled
I0522 15:51:30.089970      25 volume_manager.go:265] Starting Kubelet Volume Manager
I0522 15:51:30.090057      25 desired_state_of_world_populator.go:139] Desired state populator starts to run
E0522 15:51:30.093679      25 reflector.go:127] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.CSIDriver: unhandled watch: testing.WatchActionImpl{ActionImpl:testing.ActionImpl{Namespace:"", Verb:"watch", Resource:schema.GroupVersionResource{Group:"storage.k8s.io", Version:"v1", Resource:"csidrivers"}, Subresource:""}, WatchRestrictions:testing.WatchRestrictions{Labels:labels.internalSelector(nil), Fields:fields.andTerm{}, ResourceVersion:""}}
I0522 15:51:30.291720      25 reconciler.go:244] operationExecutor.AttachVolume started for volume "vol1" (UniqueName: "fake/fake-device") pod "foo" (UID: "12345678") 
I0522 15:51:30.291801      25 operation_generator.go:361] AttachVolume.Attach succeeded for volume "vol1" (UniqueName: "fake/fake-device") from node "127.0.0.1" 
I0522 15:51:30.291928      25 reconciler.go:157] Reconciler: start to sync state
I0522 15:51:30.393197      25 operation_generator.go:558] MountVolume.WaitForAttach entering for volume "vol1" (UniqueName: "fake/fake-device") pod "foo" (UID: "12345678") DevicePath "/dev/vdb-test"
I0522 15:51:30.393594      25 operation_generator.go:567] MountVolume.WaitForAttach succeeded for volume "vol1" (UniqueName: "fake/fake-device") pod "foo" (UID: "12345678") DevicePath "/dev/sdb"
I0522 15:51:30.393884      25 operation_generator.go:596] MountVolume.MountDevice succeeded for volume "vol1" (UniqueName: "fake/fake-device") pod "foo" (UID: "12345678") device mount path ""
... skipping 5 lines ...
I0522 15:51:30.895549      25 operation_generator.go:472] DetachVolume.Detach succeeded for volume "vol1" (UniqueName: "fake/fake-device") on node "127.0.0.1" 
I0522 15:51:30.941860      25 volume_manager.go:276] Shutting down Kubelet Volume Manager
--- PASS: TestVolumeUnmountAndDetachControllerDisabled (0.85s)
=== RUN   TestVolumeAttachAndMountControllerEnabled
I0522 15:51:30.945114      25 volume_manager.go:265] Starting Kubelet Volume Manager
I0522 15:51:30.945696      25 desired_state_of_world_populator.go:139] Desired state populator starts to run
E0522 15:51:30.948082      25 reflector.go:127] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: no reaction implemented for {{ list storage.k8s.io/v1, Resource=csidrivers } storage.k8s.io/v1, Kind=CSIDriver  { }}
I0522 15:51:31.146598      25 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "vol1" (UniqueName: "fake/fake-device") pod "foo" (UID: "12345678") 
I0522 15:51:31.147124      25 reconciler.go:157] Reconciler: start to sync state
I0522 15:51:31.147145      25 operation_generator.go:1333] Controller attach succeeded for volume "vol1" (UniqueName: "fake/fake-device") pod "foo" (UID: "12345678") device path: "fake/path"
I0522 15:51:31.248192      25 operation_generator.go:558] MountVolume.WaitForAttach entering for volume "vol1" (UniqueName: "fake/fake-device") pod "foo" (UID: "12345678") DevicePath "fake/path"
I0522 15:51:31.248315      25 operation_generator.go:567] MountVolume.WaitForAttach succeeded for volume "vol1" (UniqueName: "fake/fake-device") pod "foo" (UID: "12345678") DevicePath "/dev/sdb"
I0522 15:51:31.248389      25 operation_generator.go:596] MountVolume.MountDevice succeeded for volume "vol1" (UniqueName: "fake/fake-device") pod "foo" (UID: "12345678") device mount path ""
I0522 15:51:31.546630      25 volume_manager.go:276] Shutting down Kubelet Volume Manager
--- PASS: TestVolumeAttachAndMountControllerEnabled (0.60s)
=== RUN   TestVolumeUnmountAndDetachControllerEnabled
I0522 15:51:31.549502      25 volume_manager.go:265] Starting Kubelet Volume Manager
I0522 15:51:31.549758      25 desired_state_of_world_populator.go:139] Desired state populator starts to run
E0522 15:51:31.552294      25 reflector.go:127] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: no reaction implemented for {{ list storage.k8s.io/v1, Resource=csidrivers } storage.k8s.io/v1, Kind=CSIDriver  { }}
I0522 15:51:31.751307      25 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "vol1" (UniqueName: "fake/fake-device") pod "foo" (UID: "12345678") 
I0522 15:51:31.751844      25 reconciler.go:157] Reconciler: start to sync state
I0522 15:51:31.752184      25 operation_generator.go:1333] Controller attach succeeded for volume "vol1" (UniqueName: "fake/fake-device") pod "foo" (UID: "12345678") device path: "fake/path"
I0522 15:51:31.855344      25 operation_generator.go:558] MountVolume.WaitForAttach entering for volume "vol1" (UniqueName: "fake/fake-device") pod "foo" (UID: "12345678") DevicePath "fake/path"
I0522 15:51:31.855487      25 operation_generator.go:567] MountVolume.WaitForAttach succeeded for volume "vol1" (UniqueName: "fake/fake-device") pod "foo" (UID: "12345678") DevicePath "/dev/sdb"
I0522 15:51:31.855558      25 operation_generator.go:596] MountVolume.MountDevice succeeded for volume "vol1" (UniqueName: "fake/fake-device") pod "foo" (UID: "12345678") device mount path ""
... skipping 26 lines ...
=== RUN   TestRunOnce
I0522 15:51:32.577652      25 runonce.go:88] Waiting for 1 pods
I0522 15:51:32.577801      25 runonce.go:123] pod "foo_new(12345678)" containers running
I0522 15:51:32.578125      25 runonce.go:102] started pod "foo_new(12345678)"
I0522 15:51:32.578219      25 runonce.go:108] 1 pods started
--- PASS: TestRunOnce (0.00s)
FAIL
================================================================================
[14,934 / 14,936] 939 / 941 tests, 1 failed; GoLink cmd/genkubedocs/linux_amd64_race_stripped/go_default_test; 97s remote ... (2 actions, 1 running)
INFO: Elapsed time: 623.580s, Critical Path: 510.45s
INFO: 13050 processes: 12978 remote cache hit, 72 remote.
INFO: Build completed, 1 test FAILED, 14938 total actions
//cluster:common_test                                           (cached) PASSED in 11.3s
//cluster:kube-util_test                                        (cached) PASSED in 0.4s
//cluster/gce/cos:go_default_test                               (cached) PASSED in 1.6s
//cluster/gce/custom:go_default_test                            (cached) PASSED in 2.0s
//cluster/gce/gci:go_default_test                               (cached) PASSED in 2.0s
//cluster/gce/ubuntu:go_default_test                            (cached) PASSED in 2.0s
... skipping 928 lines ...
//third_party/forked/gonum/graph/simple:go_default_test         (cached) PASSED in 0.0s
//pkg/kubelet/pod:go_default_test                                        PASSED in 5.8s
//pkg/kubelet/prober:go_default_test                                     PASSED in 15.8s
//pkg/kubelet/status:go_default_test                                     PASSED in 5.9s
//pkg/kubelet/volumemanager:go_default_test                              PASSED in 12.1s
//pkg/kubelet/volumemanager/populator:go_default_test                    PASSED in 6.3s
//pkg/kubelet:go_default_test                                            FAILED in 3 out of 3 in 17.8s
  Stats over 3 runs: max = 17.8s, min = 15.5s, avg = 16.4s, dev = 1.0s
  /bazel-scratch/.cache/bazel/_bazel_root/7989b31489f31aee54f32688da2f0120/execroot/io_k8s_kubernetes/bazel-out/k8-fastbuild/testlogs/pkg/kubelet/go_default_test/test.log
  /bazel-scratch/.cache/bazel/_bazel_root/7989b31489f31aee54f32688da2f0120/execroot/io_k8s_kubernetes/bazel-out/k8-fastbuild/testlogs/pkg/kubelet/go_default_test/test_attempts/attempt_1.log
  /bazel-scratch/.cache/bazel/_bazel_root/7989b31489f31aee54f32688da2f0120/execroot/io_k8s_kubernetes/bazel-out/k8-fastbuild/testlogs/pkg/kubelet/go_default_test/test_attempts/attempt_2.log

Executed 6 out of 941 tests: 940 tests pass and 1 fails remotely.
There were tests whose specified size is too big. Use the --test_verbose_timeout_warnings command line option to see which ones these are.
INFO: Build completed, 1 test FAILED, 14938 total actions
+ ../test-infra/hack/coalesce.py
+ exit 3