PR nolancon: [WIP] Updated - topologymanager: Add Merge method to Policy
Result FAILURE
Tests 1 failed / 895 succeeded
Started 2019-12-03 12:32
Elapsed 7m24s
Revision 54e5b088d1ded78d27325f61aa4c607033c0cee2
Refs 85798

Test Failures


//pkg/kubelet/cm/topologymanager:go_default_test 0.00s

bazel test //pkg/kubelet/cm/topologymanager:go_default_test
exec ${PAGER:-/usr/bin/less} "$0" || exit 1
Executing tests from //pkg/kubelet/cm/topologymanager:go_default_test
-----------------------------------------------------------------------------
I1203 12:38:19.017711      18 fake_topology_manager.go:29] [fake topologymanager] NewFakeManager
I1203 12:38:19.018368      18 fake_topology_manager.go:34] [fake topologymanager] GetAffinity podUID: 0aafa4c4-38e8-11e9-bcb1-a4bf01040474 container name:  nginx
I1203 12:38:19.018791      18 fake_topology_manager.go:43] [fake topologymanager] AddContainer  pod: &Pod{ObjectMeta:{    0aafa4c4-38e8-11e9-bcb1-a4bf01040474  0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] []  []},Spec:PodSpec{Volumes:[]Volume{},Containers:[]Container{},RestartPolicy:,TerminationGracePeriodSeconds:nil,ActiveDeadlineSeconds:nil,DNSPolicy:,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:nil,ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{},HostAliases:[]HostAlias{},PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:<nil>,ContainerStatuses:[]ContainerStatus{},QOSClass:,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} container id:  nginx
I1203 12:38:19.019237      18 fake_topology_manager.go:43] [fake topologymanager] AddContainer  pod: &Pod{ObjectMeta:{    b3ee37fc-39a5-11e9-bcb1-a4bf01040474  0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] []  []},Spec:PodSpec{Volumes:[]Volume{},Containers:[]Container{},RestartPolicy:,TerminationGracePeriodSeconds:nil,ActiveDeadlineSeconds:nil,DNSPolicy:,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:nil,ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{},HostAliases:[]HostAlias{},PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:<nil>,ContainerStatuses:[]ContainerStatus{},QOSClass:,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} container id:  Busy_Box
I1203 12:38:19.019890      18 fake_topology_manager.go:48] [fake topologymanager] RemoveContainer container id:  nginx
I1203 12:38:19.019937      18 fake_topology_manager.go:48] [fake topologymanager] RemoveContainer container id:  Busy_Box
I1203 12:38:19.020291      18 fake_topology_manager.go:53] [fake topologymanager] Topology Admit Handler
I1203 12:38:19.020388      18 fake_topology_manager.go:53] [fake topologymanager] Topology Admit Handler
I1203 12:38:19.020428      18 fake_topology_manager.go:53] [fake topologymanager] Topology Admit Handler
I1203 12:38:19.021366      18 policy_best_effort.go:130] [topologymanager] Hint Provider has no preference for NUMA affinity with any resource
I1203 12:38:19.021415      18 policy_best_effort.go:138] [topologymanager] Hint Provider has no preference for NUMA affinity with resource 'resource'
I1203 12:38:19.021533      18 policy_best_effort.go:144] [topologymanager] Hint Provider has no possible NUMA affinities for resource 'resource'
I1203 12:38:19.021645      18 policy_best_effort.go:130] [topologymanager] Hint Provider has no preference for NUMA affinity with any resource
I1203 12:38:19.021676      18 policy_best_effort.go:130] [topologymanager] Hint Provider has no preference for NUMA affinity with any resource
I1203 12:38:19.023019      18 policy_best_effort.go:130] [topologymanager] Hint Provider has no preference for NUMA affinity with any resource
I1203 12:38:19.023070      18 policy_best_effort.go:138] [topologymanager] Hint Provider has no preference for NUMA affinity with resource 'resource'
I1203 12:38:19.023184      18 policy_best_effort.go:144] [topologymanager] Hint Provider has no possible NUMA affinities for resource 'resource'
I1203 12:38:19.023280      18 policy_best_effort.go:130] [topologymanager] Hint Provider has no preference for NUMA affinity with any resource
I1203 12:38:19.023308      18 policy_best_effort.go:130] [topologymanager] Hint Provider has no preference for NUMA affinity with any resource
I1203 12:38:19.024793      18 policy_single_numa_node.go:162] [topologymanager] No preference for socket affinity from all providers
I1203 12:38:19.024837      18 policy_single_numa_node.go:135] [topologymanager] Hint Provider has no preference for socket affinity with any resource, skipping.
I1203 12:38:19.024945      18 policy_single_numa_node.go:162] [topologymanager] No preference for socket affinity from all providers
I1203 12:38:19.024982      18 policy_single_numa_node.go:143] [topologymanager] Hint Provider has no preference for socket affinity with resource 'resource', skipping.
I1203 12:38:19.025015      18 policy_single_numa_node.go:162] [topologymanager] No preference for socket affinity from all providers
I1203 12:38:19.025042      18 policy_single_numa_node.go:149] [topologymanager] Hint Provider has no possible socket affinities for resource 'resource'
I1203 12:38:19.025069      18 policy_single_numa_node.go:154] [topologymanager] TopologyHints for resource 'resource': [{<nil> true}]
I1203 12:38:19.025200      18 policy_single_numa_node.go:175] [topologymanager] No hints that align to a single socket for resource.
I1203 12:38:19.025367      18 policy_single_numa_node.go:154] [topologymanager] TopologyHints for resource 'resource': [{<nil> false}]
I1203 12:38:19.025416      18 policy_single_numa_node.go:175] [topologymanager] No hints that align to a single socket for resource.
I1203 12:38:19.025463      18 policy_single_numa_node.go:154] [topologymanager] TopologyHints for resource 'resource1': [{0000000000000000000000000000000000000000000000000000000000000001 true}]
I1203 12:38:19.025545      18 policy_single_numa_node.go:154] [topologymanager] TopologyHints for resource 'resource2': [{0000000000000000000000000000000000000000000000000000000000000001 true}]
I1203 12:38:19.025610      18 policy_single_numa_node.go:182] [topologymanager] Strict Policy match: {0000000000000000000000000000000000000000000000000000000000000001 true}
I1203 12:38:19.025682      18 policy_single_numa_node.go:154] [topologymanager] TopologyHints for resource 'resource1': [{0000000000000000000000000000000000000000000000000000000000000010 true}]
I1203 12:38:19.025739      18 policy_single_numa_node.go:154] [topologymanager] TopologyHints for resource 'resource2': [{0000000000000000000000000000000000000000000000000000000000000010 true}]
I1203 12:38:19.025797      18 policy_single_numa_node.go:182] [topologymanager] Strict Policy match: {0000000000000000000000000000000000000000000000000000000000000010 true}
I1203 12:38:19.025847      18 policy_single_numa_node.go:154] [topologymanager] TopologyHints for resource 'resource1': [{0000000000000000000000000000000000000000000000000000000000000001 true}]
I1203 12:38:19.025894      18 policy_single_numa_node.go:154] [topologymanager] TopologyHints for resource 'resource2': [{0000000000000000000000000000000000000000000000000000000000000010 true}]
I1203 12:38:19.025961      18 policy_single_numa_node.go:185] [topologymanager] Strict Policy no match: {0000000000000000000000000000000000000000000000000000000000000011 false}
I1203 12:38:19.026014      18 policy_single_numa_node.go:154] [topologymanager] TopologyHints for resource 'resource1': [{0000000000000000000000000000000000000000000000000000000000000001 true}]
I1203 12:38:19.026061      18 policy_single_numa_node.go:154] [topologymanager] TopologyHints for resource 'resource2': [{0000000000000000000000000000000000000000000000000000000000000001 false}]
I1203 12:38:19.026117      18 policy_single_numa_node.go:182] [topologymanager] Strict Policy match: {0000000000000000000000000000000000000000000000000000000000000001 true}
I1203 12:38:19.026211      18 policy_single_numa_node.go:154] [topologymanager] TopologyHints for resource 'resource1': [{0000000000000000000000000000000000000000000000000000000000000010 true}]
I1203 12:38:19.026263      18 policy_single_numa_node.go:154] [topologymanager] TopologyHints for resource 'resource2': [{0000000000000000000000000000000000000000000000000000000000000010 false}]
I1203 12:38:19.026325      18 policy_single_numa_node.go:182] [topologymanager] Strict Policy match: {0000000000000000000000000000000000000000000000000000000000000010 true}
I1203 12:38:19.026411      18 policy_single_numa_node.go:135] [topologymanager] Hint Provider has no preference for socket affinity with any resource, skipping.
I1203 12:38:19.026435      18 policy_single_numa_node.go:154] [topologymanager] TopologyHints for resource 'resource': [{0000000000000000000000000000000000000000000000000000000000000001 true}]
I1203 12:38:19.026504      18 policy_single_numa_node.go:182] [topologymanager] Strict Policy match: {0000000000000000000000000000000000000000000000000000000000000001 true}
I1203 12:38:19.026550      18 policy_single_numa_node.go:135] [topologymanager] Hint Provider has no preference for socket affinity with any resource, skipping.
I1203 12:38:19.026580      18 policy_single_numa_node.go:154] [topologymanager] TopologyHints for resource 'resource': [{0000000000000000000000000000000000000000000000000000000000000010 true}]
I1203 12:38:19.026629      18 policy_single_numa_node.go:182] [topologymanager] Strict Policy match: {0000000000000000000000000000000000000000000000000000000000000010 true}
I1203 12:38:19.026682      18 policy_single_numa_node.go:154] [topologymanager] TopologyHints for resource 'resource1': [{0000000000000000000000000000000000000000000000000000000000000001 true} {0000000000000000000000000000000000000000000000000000000000000010 true}]
I1203 12:38:19.026748      18 policy_single_numa_node.go:154] [topologymanager] TopologyHints for resource 'resource2': [{0000000000000000000000000000000000000000000000000000000000000001 true}]
I1203 12:38:19.026816      18 policy_single_numa_node.go:182] [topologymanager] Strict Policy match: {0000000000000000000000000000000000000000000000000000000000000001 true}
I1203 12:38:19.026864      18 policy_single_numa_node.go:154] [topologymanager] TopologyHints for resource 'resource1': [{0000000000000000000000000000000000000000000000000000000000000001 true} {0000000000000000000000000000000000000000000000000000000000000010 true}]
I1203 12:38:19.026953      18 policy_single_numa_node.go:154] [topologymanager] TopologyHints for resource 'resource2': [{0000000000000000000000000000000000000000000000000000000000000010 true}]
I1203 12:38:19.027022      18 policy_single_numa_node.go:182] [topologymanager] Strict Policy match: {0000000000000000000000000000000000000000000000000000000000000010 true}
I1203 12:38:19.027081      18 policy_single_numa_node.go:154] [topologymanager] TopologyHints for resource 'resource1': [{0000000000000000000000000000000000000000000000000000000000000001 true} {0000000000000000000000000000000000000000000000000000000000000010 true}]
I1203 12:38:19.027152      18 policy_single_numa_node.go:154] [topologymanager] TopologyHints for resource 'resource2': [{0000000000000000000000000000000000000000000000000000000000000011 false}]
I1203 12:38:19.027214      18 policy_single_numa_node.go:175] [topologymanager] No hints that align to a single socket for resource.
I1203 12:38:19.027318      18 policy_single_numa_node.go:154] [topologymanager] TopologyHints for resource 'resource1': [{0000000000000000000000000000000000000000000000000000000000000001 true} {0000000000000000000000000000000000000000000000000000000000000010 true}]
I1203 12:38:19.027394      18 policy_single_numa_node.go:154] [topologymanager] TopologyHints for resource 'resource2': [{0000000000000000000000000000000000000000000000000000000000000001 true} {0000000000000000000000000000000000000000000000000000000000000011 false}]
I1203 12:38:19.027504      18 policy_single_numa_node.go:182] [topologymanager] Strict Policy match: {0000000000000000000000000000000000000000000000000000000000000001 true}
I1203 12:38:19.027569      18 policy_single_numa_node.go:154] [topologymanager] TopologyHints for resource 'resource1': [{0000000000000000000000000000000000000000000000000000000000000010 true} {0000000000000000000000000000000000000000000000000000000000000011 false}]
I1203 12:38:19.027650      18 policy_single_numa_node.go:154] [topologymanager] TopologyHints for resource 'resource2': [{0000000000000000000000000000000000000000000000000000000000000001 true} {0000000000000000000000000000000000000000000000000000000000000010 true} {0000000000000000000000000000000000000000000000000000000000000011 false}]
I1203 12:38:19.027774      18 policy_single_numa_node.go:182] [topologymanager] Strict Policy match: {0000000000000000000000000000000000000000000000000000000000000010 true}
I1203 12:38:19.027829      18 policy_single_numa_node.go:154] [topologymanager] TopologyHints for resource 'resource1': [{0000000000000000000000000000000000000000000000000000000000000010 true} {0000000000000000000000000000000000000000000000000000000000000011 false}]
I1203 12:38:19.027900      18 policy_single_numa_node.go:154] [topologymanager] TopologyHints for resource 'resource2': [{0000000000000000000000000000000000000000000000000000000000000001 true} {0000000000000000000000000000000000000000000000000000000000000010 true} {0000000000000000000000000000000000000000000000000000000000000011 false}]
I1203 12:38:19.027999      18 policy_single_numa_node.go:182] [topologymanager] Strict Policy match: {0000000000000000000000000000000000000000000000000000000000000010 true}
I1203 12:38:19.028043      18 policy_single_numa_node.go:154] [topologymanager] TopologyHints for resource 'resource1': [{0000000000000000000000000000000000000000000000000000000000000011 true}]
I1203 12:38:19.028102      18 policy_single_numa_node.go:154] [topologymanager] TopologyHints for resource 'resource2': [{0000000000000000000000000000000000000000000000000000000000000001 true} {0000000000000000000000000000000000000000000000000000000000000010 true} {0000000000000000000000000000000000000000000000000000000000000011 false}]
I1203 12:38:19.028195      18 policy_single_numa_node.go:175] [topologymanager] No hints that align to a single socket for resource.
I1203 12:38:19.028286      18 policy_single_numa_node.go:154] [topologymanager] TopologyHints for resource 'resource1': [{0000000000000000000000000000000000000000000000000000000000000001 true} {0000000000000000000000000000000000000000000000000000000000000010 true} {0000000000000000000000000000000000000000000000000000000000000011 false}]
I1203 12:38:19.028365      18 policy_single_numa_node.go:135] [topologymanager] Hint Provider has no preference for socket affinity with any resource, skipping.
I1203 12:38:19.028400      18 policy_single_numa_node.go:182] [topologymanager] Strict Policy match: {0000000000000000000000000000000000000000000000000000000000000001 true}
--- FAIL: TestPolicySingleNumaNodeMerge (0.00s)
    policy_test.go:665: Single TopologyHint with Preferred as true and NUMANodeAffinity as nil: Expected Affinity preference in result to be true, got false
    policy_test.go:665: Two providers, 1 hint each, same mask, 1 preferred, 1 not 1/2: Expected Affinity preference in result to be false, got true
    policy_test.go:665: Two providers, 1 hint each, same mask, 1 preferred, 1 not 2/2: Expected Affinity preference in result to be false, got true
    policy_test.go:662: Two providers, 1 with 2 hints, 1 with single non-preferred hint matching: Expected NUMANodeAffinity in result to be 0000000000000000000000000000000000000000000000000000000000000001, got 0000000000000000000000000000000000000000000000000000000000000011
    policy_test.go:662: Single NUMA hint generation: Expected NUMANodeAffinity in result to be 0000000000000000000000000000000000000000000000000000000000000001, got 0000000000000000000000000000000000000000000000000000000000000011
I1203 12:38:19.028736      18 topology_manager.go:120] [topologymanager] Creating topology manager with best-effort policy
I1203 12:38:19.028779      18 topology_manager.go:120] [topologymanager] Creating topology manager with restricted policy
I1203 12:38:19.028807      18 topology_manager.go:120] [topologymanager] Creating topology manager with unknown policy
I1203 12:38:19.029386      18 topology_manager.go:173] [topologymanager] TopologyHints for pod '', container '': map[]
I1203 12:38:19.029447      18 topology_manager.go:173] [topologymanager] TopologyHints for pod '', container '': map[resource:[]]
I1203 12:38:19.029516      18 topology_manager.go:173] [topologymanager] TopologyHints for pod '', container '': map[resource1:[{<nil> false}]]
I1203 12:38:19.029564      18 topology_manager.go:173] [topologymanager] TopologyHints for pod '', container '': map[resource2:[{<nil> false}]]
I1203 12:38:19.029653      18 topology_manager.go:173] [topologymanager] TopologyHints for pod '', container '': map[resource1:[{<nil> false}]]
I1203 12:38:19.029698      18 topology_manager.go:173] [topologymanager] TopologyHints for pod '', container '': map[]
I1203 12:38:19.029746      18 topology_manager.go:173] [topologymanager] TopologyHints for pod '', container '': map[resource1:[{<nil> false}]]
I1203 12:38:19.029788      18 topology_manager.go:173] [topologymanager] TopologyHints for pod '', container '': map[]
I1203 12:38:19.029835      18 topology_manager.go:173] [topologymanager] TopologyHints for pod '', container '': map[resource1:[{<nil> false}] resource2:[{<nil> false}]]
I1203 12:38:19.030202      18 topology_manager.go:182] [topologymanager] ContainerTopologyHint: {<nil> false}
I1203 12:38:19.030265      18 topology_manager.go:173] [topologymanager] TopologyHints for pod '', container '': map[]
I1203 12:38:19.030312      18 topology_manager.go:182] [topologymanager] ContainerTopologyHint: {<nil> false}
I1203 12:38:19.030357      18 topology_manager.go:173] [topologymanager] TopologyHints for pod '', container '': map[resource:[]]
I1203 12:38:19.030396      18 topology_manager.go:182] [topologymanager] ContainerTopologyHint: {<nil> false}
I1203 12:38:19.030456      18 topology_manager.go:173] [topologymanager] TopologyHints for pod '', container '': map[resource-1/A:[{0000000000000000000000000000000000000000000000000000000000000001 true} {0000000000000000000000000000000000000000000000000000000000000011 false}] resource-1/B:[{0000000000000000000000000000000000000000000000000000000000000010 true} {0000000000000000000000000000000000000000000000000000000000000110 false}]]
I1203 12:38:19.030614      18 topology_manager.go:173] [topologymanager] TopologyHints for pod '', container '': map[resource-2/A:[{0000000000000000000000000000000000000000000000000000000000000100 true} {0000000000000000000000000000000000000000000000000000000000011000 false}] resource-2/B:[{0000000000000000000000000000000000000000000000000000000000000100 true} {0000000000000000000000000000000000000000000000000000000000011000 false}]]
I1203 12:38:19.030739      18 topology_manager.go:173] [topologymanager] TopologyHints for pod '', container '': map[resource-3:[]]
I1203 12:38:19.030794      18 topology_manager.go:182] [topologymanager] ContainerTopologyHint: {<nil> false}
I1203 12:38:19.031344      18 topology_manager.go:199] [topologymanager] RemoveContainer - Container ID: nginx podTopologyHints: map[]
I1203 12:38:19.031401      18 topology_manager.go:199] [topologymanager] RemoveContainer - Container ID: Busy_Box podTopologyHints: map[]
I1203 12:38:19.032041      18 topology_manager.go:204] [topologymanager] Topology Admit Handler
I1203 12:38:19.032082      18 topology_manager.go:206] [topologymanager] Skipping calculate topology affinity as policy: none
I1203 12:38:19.032131      18 topology_manager.go:204] [topologymanager] Topology Admit Handler
I1203 12:38:19.032157      18 topology_manager.go:206] [topologymanager] Skipping calculate topology affinity as policy: none
I1203 12:38:19.032184      18 topology_manager.go:204] [topologymanager] Topology Admit Handler
I1203 12:38:19.032215      18 topology_manager.go:173] [topologymanager] TopologyHints for pod '', container '': map[]
I1203 12:38:19.032255      18 policy_best_effort.go:130] [topologymanager] Hint Provider has no preference for NUMA affinity with any resource
I1203 12:38:19.032283      18 topology_manager.go:182] [topologymanager] ContainerTopologyHint: {0000000000000000000000000000000000000000000000000000000000000011 true}
I1203 12:38:19.032347      18 topology_manager.go:222] [topologymanager] Topology Affinity for Pod:  are map[:{0000000000000000000000000000000000000000000000000000000000000011 true}]
I1203 12:38:19.032414      18 topology_manager.go:204] [topologymanager] Topology Admit Handler
I1203 12:38:19.032443      18 topology_manager.go:173] [topologymanager] TopologyHints for pod '', container '': map[]
I1203 12:38:19.032475      18 policy_best_effort.go:130] [topologymanager] Hint Provider has no preference for NUMA affinity with any resource
I1203 12:38:19.032564      18 topology_manager.go:182] [topologymanager] ContainerTopologyHint: {0000000000000000000000000000000000000000000000000000000000000011 true}
I1203 12:38:19.032669      18 topology_manager.go:222] [topologymanager] Topology Affinity for Pod:  are map[:{0000000000000000000000000000000000000000000000000000000000000011 true}]
I1203 12:38:19.032743      18 topology_manager.go:204] [topologymanager] Topology Admit Handler
I1203 12:38:19.032778      18 topology_manager.go:173] [topologymanager] TopologyHints for pod '', container '': map[resource:[{0000000000000000000000000000000000000000000000000000000000000001 true} {0000000000000000000000000000000000000000000000000000000000000011 false}]]
I1203 12:38:19.033052      18 topology_manager.go:182] [topologymanager] ContainerTopologyHint: {0000000000000000000000000000000000000000000000000000000000000001 true}
I1203 12:38:19.033108      18 topology_manager.go:222] [topologymanager] Topology Affinity for Pod:  are map[:{0000000000000000000000000000000000000000000000000000000000000001 true}]
I1203 12:38:19.033184      18 topology_manager.go:204] [topologymanager] Topology Admit Handler
I1203 12:38:19.033215      18 topology_manager.go:173] [topologymanager] TopologyHints for pod '', container '': map[resource:[{0000000000000000000000000000000000000000000000000000000000000001 true} {0000000000000000000000000000000000000000000000000000000000000010 true} {0000000000000000000000000000000000000000000000000000000000000011 false}]]
I1203 12:38:19.033338      18 topology_manager.go:182] [topologymanager] ContainerTopologyHint: {0000000000000000000000000000000000000000000000000000000000000001 true}
I1203 12:38:19.033393      18 topology_manager.go:222] [topologymanager] Topology Affinity for Pod:  are map[:{0000000000000000000000000000000000000000000000000000000000000001 true}]
I1203 12:38:19.033453      18 topology_manager.go:204] [topologymanager] Topology Admit Handler
I1203 12:38:19.033480      18 topology_manager.go:173] [topologymanager] TopologyHints for pod '', container '': map[resource:[{0000000000000000000000000000000000000000000000000000000000000001 true} {0000000000000000000000000000000000000000000000000000000000000010 true} {0000000000000000000000000000000000000000000000000000000000000011 false}]]
I1203 12:38:19.033600      18 topology_manager.go:182] [topologymanager] ContainerTopologyHint: {0000000000000000000000000000000000000000000000000000000000000001 true}
I1203 12:38:19.033706      18 topology_manager.go:222] [topologymanager] Topology Affinity for Pod:  are map[:{0000000000000000000000000000000000000000000000000000000000000001 true}]
I1203 12:38:19.033774      18 topology_manager.go:204] [topologymanager] Topology Admit Handler
I1203 12:38:19.033803      18 topology_manager.go:173] [topologymanager] TopologyHints for pod '', container '': map[resource:[{0000000000000000000000000000000000000000000000000000000000000011 false}]]
I1203 12:38:19.033888      18 topology_manager.go:182] [topologymanager] ContainerTopologyHint: {0000000000000000000000000000000000000000000000000000000000000011 false}
I1203 12:38:19.033943      18 topology_manager.go:222] [topologymanager] Topology Affinity for Pod:  are map[:{0000000000000000000000000000000000000000000000000000000000000011 false}]
I1203 12:38:19.034009      18 topology_manager.go:204] [topologymanager] Topology Admit Handler
I1203 12:38:19.034047      18 topology_manager.go:173] [topologymanager] TopologyHints for pod '', container '': map[resource:[{0000000000000000000000000000000000000000000000000000000000000001 true} {0000000000000000000000000000000000000000000000000000000000000011 false}]]
I1203 12:38:19.034131      18 topology_manager.go:182] [topologymanager] ContainerTopologyHint: {0000000000000000000000000000000000000000000000000000000000000001 true}
I1203 12:38:19.034179      18 topology_manager.go:222] [topologymanager] Topology Affinity for Pod:  are map[:{0000000000000000000000000000000000000000000000000000000000000001 true}]
I1203 12:38:19.034243      18 topology_manager.go:204] [topologymanager] Topology Admit Handler
I1203 12:38:19.034277      18 topology_manager.go:173] [topologymanager] TopologyHints for pod '', container '': map[resource:[{0000000000000000000000000000000000000000000000000000000000000001 true} {0000000000000000000000000000000000000000000000000000000000000011 false}]]
I1203 12:38:19.034365      18 topology_manager.go:182] [topologymanager] ContainerTopologyHint: {0000000000000000000000000000000000000000000000000000000000000001 true}
I1203 12:38:19.034413      18 topology_manager.go:222] [topologymanager] Topology Affinity for Pod:  are map[:{0000000000000000000000000000000000000000000000000000000000000001 true}]
I1203 12:38:19.034479      18 topology_manager.go:204] [topologymanager] Topology Admit Handler
I1203 12:38:19.034517      18 topology_manager.go:173] [topologymanager] TopologyHints for pod '', container '': map[resource:[{0000000000000000000000000000000000000000000000000000000000000001 true} {0000000000000000000000000000000000000000000000000000000000000010 true} {0000000000000000000000000000000000000000000000000000000000000011 false}]]
I1203 12:38:19.034638      18 topology_manager.go:182] [topologymanager] ContainerTopologyHint: {0000000000000000000000000000000000000000000000000000000000000001 true}
I1203 12:38:19.034695      18 topology_manager.go:222] [topologymanager] Topology Affinity for Pod:  are map[:{0000000000000000000000000000000000000000000000000000000000000001 true}]
I1203 12:38:19.034751      18 topology_manager.go:204] [topologymanager] Topology Admit Handler
I1203 12:38:19.034784      18 topology_manager.go:173] [topologymanager] TopologyHints for pod '', container '': map[resource:[{0000000000000000000000000000000000000000000000000000000000000001 true} {0000000000000000000000000000000000000000000000000000000000000010 true} {0000000000000000000000000000000000000000000000000000000000000011 false}]]
I1203 12:38:19.034902      18 topology_manager.go:182] [topologymanager] ContainerTopologyHint: {0000000000000000000000000000000000000000000000000000000000000001 true}
I1203 12:38:19.034947      18 topology_manager.go:222] [topologymanager] Topology Affinity for Pod:  are map[:{0000000000000000000000000000000000000000000000000000000000000001 true}]
I1203 12:38:19.035004      18 topology_manager.go:204] [topologymanager] Topology Admit Handler
I1203 12:38:19.035036      18 topology_manager.go:173] [topologymanager] TopologyHints for pod '', container '': map[resource:[{0000000000000000000000000000000000000000000000000000000000000011 false}]]
I1203 12:38:19.035104      18 topology_manager.go:182] [topologymanager] ContainerTopologyHint: {0000000000000000000000000000000000000000000000000000000000000011 false}
I1203 12:38:19.035153      18 topology_manager.go:204] [topologymanager] Topology Admit Handler
I1203 12:38:19.035178      18 topology_manager.go:173] [topologymanager] TopologyHints for pod '', container '': map[resource:[{0000000000000000000000000000000000000000000000000000000000000011 false}]]
I1203 12:38:19.035243      18 topology_manager.go:182] [topologymanager] ContainerTopologyHint: {0000000000000000000000000000000000000000000000000000000000000011 false}
FAIL
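The failing assertions in TestPolicySingleNumaNodeMerge above all concern how hints from multiple providers should combine: same mask with mixed Preferred flags must merge to Preferred=false, and the merged NUMANodeAffinity must stay as narrow as possible. The sketch below is an illustrative approximation of those expected semantics only, not the actual kubelet Merge implementation; it uses a plain uint64 bitmask in place of the real socketmask/bitmask type, and the helper name mergeStrict is invented for this example.

```go
package main

import (
	"fmt"
	"math/bits"
)

// TopologyHint mirrors the shape seen in the log output: a NUMA-node
// bitmask plus a Preferred flag. Illustrative stand-in, not the real type.
type TopologyHint struct {
	Mask      uint64 // bit i set => NUMA node i is in the affinity set
	Preferred bool
}

// mergeStrict sketches a cross-product merge under single-numa-node
// semantics: intersect the two masks, and keep the result preferred only
// when both inputs were preferred and the intersection spans exactly one
// NUMA node. This matches the test expectations quoted above (e.g. "same
// mask, 1 preferred, 1 not" must yield Preferred=false), but it is a
// hedged approximation, not the Merge method under test.
func mergeStrict(a, b TopologyHint) TopologyHint {
	m := a.Mask & b.Mask
	preferred := a.Preferred && b.Preferred && bits.OnesCount64(m) == 1
	return TopologyHint{Mask: m, Preferred: preferred}
}

func main() {
	// Same single-node mask, both preferred: a strict match.
	fmt.Println(mergeStrict(TopologyHint{0b01, true}, TopologyHint{0b01, true})) // {1 true}
	// Same mask, one provider not preferred: mask survives, preference does not.
	fmt.Println(mergeStrict(TopologyHint{0b01, true}, TopologyHint{0b01, false})) // {1 false}
	// Disjoint preferred masks: empty intersection, not preferred.
	fmt.Println(mergeStrict(TopologyHint{0b01, true}, TopologyHint{0b10, true})) // {0 false}
}
```

Under these rules the second case reproduces the behavior the test demands (Preferred=false), whereas the log above shows the implementation returning true there, which is what policy_test.go:665 flags.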

				from junit_bazel.xml




Error lines from build-log.txt

... skipping 41 lines ...
[7,504 / 10,427] 382 / 896 tests; GoCompilePkg staging/src/k8s.io/apimachinery/pkg/apis/meta/v1/linux_amd64_race_stripped/go_default_test%/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/apis/meta/v1.a; 28s remote ... (135 actions, 134 running)
[8,300 / 10,525] 505 / 896 tests; GoCompilePkg test/images/agnhost/webhook/linux_amd64_race_stripped/go_default_test%/k8s.io/kubernetes/test/images/agnhost/webhook.a; 48s remote ... (161 actions running)
[9,281 / 10,565] 521 / 896 tests; Listing all stable metrics. //test/instrumentation:list_stable_metrics; 75s remote ... (264 actions, 263 running)
[9,698 / 10,635] 576 / 896 tests; Listing all stable metrics. //test/instrumentation:list_stable_metrics; 104s remote ... (230 actions running)
[10,225 / 10,740] 719 / 896 tests; GoLink staging/src/k8s.io/apiserver/pkg/server/filters/linux_amd64_race_stripped/go_default_test; 98s remote ... (115 actions running)
[10,725 / 10,869] 848 / 896 tests; GoLink pkg/controller/volume/attachdetach/metrics/linux_amd64_race_stripped/go_default_test; 72s remote ... (33 actions running)
FAIL: //pkg/kubelet/cm/topologymanager:go_default_test (see /bazel-scratch/.cache/bazel/_bazel_root/7989b31489f31aee54f32688da2f0120/execroot/io_k8s_kubernetes/bazel-out/k8-fastbuild/testlogs/pkg/kubelet/cm/topologymanager/go_default_test/test_attempts/attempt_1.log)
[10,894 / 10,907] 883 / 896 tests; GoLink pkg/kubelet/server/linux_amd64_race_stripped/go_default_test; 37s remote ... (13 actions, 12 running)
FAIL: //pkg/kubelet/cm/topologymanager:go_default_test (see /bazel-scratch/.cache/bazel/_bazel_root/7989b31489f31aee54f32688da2f0120/execroot/io_k8s_kubernetes/bazel-out/k8-fastbuild/testlogs/pkg/kubelet/cm/topologymanager/go_default_test/test_attempts/attempt_2.log)

FAILED: //pkg/kubelet/cm/topologymanager:go_default_test (Summary)
      /bazel-scratch/.cache/bazel/_bazel_root/7989b31489f31aee54f32688da2f0120/execroot/io_k8s_kubernetes/bazel-out/k8-fastbuild/testlogs/pkg/kubelet/cm/topologymanager/go_default_test/test.log
      /bazel-scratch/.cache/bazel/_bazel_root/7989b31489f31aee54f32688da2f0120/execroot/io_k8s_kubernetes/bazel-out/k8-fastbuild/testlogs/pkg/kubelet/cm/topologymanager/go_default_test/test.log
      /bazel-scratch/.cache/bazel/_bazel_root/7989b31489f31aee54f32688da2f0120/execroot/io_k8s_kubernetes/bazel-out/k8-fastbuild/testlogs/pkg/kubelet/cm/topologymanager/go_default_test/test_attempts/attempt_1.log
      /bazel-scratch/.cache/bazel/_bazel_root/7989b31489f31aee54f32688da2f0120/execroot/io_k8s_kubernetes/bazel-out/k8-fastbuild/testlogs/pkg/kubelet/cm/topologymanager/go_default_test/test.log
      /bazel-scratch/.cache/bazel/_bazel_root/7989b31489f31aee54f32688da2f0120/execroot/io_k8s_kubernetes/bazel-out/k8-fastbuild/testlogs/pkg/kubelet/cm/topologymanager/go_default_test/test_attempts/attempt_2.log
FAIL: //pkg/kubelet/cm/topologymanager:go_default_test (see /bazel-scratch/.cache/bazel/_bazel_root/7989b31489f31aee54f32688da2f0120/execroot/io_k8s_kubernetes/bazel-out/k8-fastbuild/testlogs/pkg/kubelet/cm/topologymanager/go_default_test/test.log)
INFO: From Testing //pkg/kubelet/cm/topologymanager:go_default_test:
==================== Test output for //pkg/kubelet/cm/topologymanager:go_default_test:
I1203 12:38:09.044667      17 fake_topology_manager.go:29] [fake topologymanager] NewFakeManager
I1203 12:38:09.045340      17 fake_topology_manager.go:34] [fake topologymanager] GetAffinity podUID: 0aafa4c4-38e8-11e9-bcb1-a4bf01040474 container name:  nginx
I1203 12:38:09.045786      17 fake_topology_manager.go:43] [fake topologymanager] AddContainer  pod: &Pod{ObjectMeta:{    0aafa4c4-38e8-11e9-bcb1-a4bf01040474  0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] []  []},Spec:PodSpec{Volumes:[]Volume{},Containers:[]Container{},RestartPolicy:,TerminationGracePeriodSeconds:nil,ActiveDeadlineSeconds:nil,DNSPolicy:,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:nil,ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{},HostAliases:[]HostAlias{},PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:<nil>,ContainerStatuses:[]ContainerStatus{},QOSClass:,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} container id:  nginx
I1203 12:38:09.046253      17 fake_topology_manager.go:43] [fake topologymanager] AddContainer  pod: &Pod{ObjectMeta:{    b3ee37fc-39a5-11e9-bcb1-a4bf01040474  0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] []  []},Spec:PodSpec{Volumes:[]Volume{},Containers:[]Container{},RestartPolicy:,TerminationGracePeriodSeconds:nil,ActiveDeadlineSeconds:nil,DNSPolicy:,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:nil,ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{},HostAliases:[]HostAlias{},PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:<nil>,ContainerStatuses:[]ContainerStatus{},QOSClass:,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} container id:  Busy_Box
... skipping 64 lines ...
I1203 12:38:09.054428      17 policy_single_numa_node.go:154] [topologymanager] TopologyHints for resource 'resource1': [{0000000000000000000000000000000000000000000000000000000000000011 true}]
I1203 12:38:09.054471      17 policy_single_numa_node.go:154] [topologymanager] TopologyHints for resource 'resource2': [{0000000000000000000000000000000000000000000000000000000000000001 true} {0000000000000000000000000000000000000000000000000000000000000010 true} {0000000000000000000000000000000000000000000000000000000000000011 false}]
I1203 12:38:09.054572      17 policy_single_numa_node.go:175] [topologymanager] No hints that align to a single socket for resource.
I1203 12:38:09.054673      17 policy_single_numa_node.go:154] [topologymanager] TopologyHints for resource 'resource1': [{0000000000000000000000000000000000000000000000000000000000000001 true} {0000000000000000000000000000000000000000000000000000000000000010 true} {0000000000000000000000000000000000000000000000000000000000000011 false}]
I1203 12:38:09.054765      17 policy_single_numa_node.go:135] [topologymanager] Hint Provider has no preference for socket affinity with any resource, skipping.
I1203 12:38:09.054829      17 policy_single_numa_node.go:182] [topologymanager] Strict Policy match: {0000000000000000000000000000000000000000000000000000000000000001 true}
--- FAIL: TestPolicySingleNumaNodeMerge (0.00s)
    policy_test.go:665: Single TopologyHint with Preferred as true and NUMANodeAffinity as nil: Expected Affinity preference in result to be true, got false
    policy_test.go:665: Two providers, 1 hint each, same mask, 1 preferred, 1 not 1/2: Expected Affinity preference in result to be false, got true
    policy_test.go:665: Two providers, 1 hint each, same mask, 1 preferred, 1 not 2/2: Expected Affinity preference in result to be false, got true
    policy_test.go:662: Two providers, 1 with 2 hints, 1 with single non-preferred hint matching: Expected NUMANodeAffinity in result to be 0000000000000000000000000000000000000000000000000000000000000001, got 0000000000000000000000000000000000000000000000000000000000000011
    policy_test.go:662: Single NUMA hint generation: Expected NUMANodeAffinity in result to be 0000000000000000000000000000000000000000000000000000000000000001, got 0000000000000000000000000000000000000000000000000000000000000011
I1203 12:38:09.055164      17 topology_manager.go:120] [topologymanager] Creating topology manager with best-effort policy
... skipping 68 lines ...
I1203 12:38:09.061280      17 topology_manager.go:204] [topologymanager] Topology Admit Handler
I1203 12:38:09.061317      17 topology_manager.go:173] [topologymanager] TopologyHints for pod '', container '': map[resource:[{0000000000000000000000000000000000000000000000000000000000000011 false}]]
I1203 12:38:09.061568      17 topology_manager.go:182] [topologymanager] ContainerTopologyHint: {0000000000000000000000000000000000000000000000000000000000000011 false}
I1203 12:38:09.061639      17 topology_manager.go:204] [topologymanager] Topology Admit Handler
I1203 12:38:09.061676      17 topology_manager.go:173] [topologymanager] TopologyHints for pod '', container '': map[resource:[{0000000000000000000000000000000000000000000000000000000000000011 false}]]
I1203 12:38:09.061747      17 topology_manager.go:182] [topologymanager] ContainerTopologyHint: {0000000000000000000000000000000000000000000000000000000000000011 false}
FAIL
================================================================================
[10,914 / 10,915] 895 / 896 tests, 1 failed; GoLink cmd/genkubedocs/linux_amd64_race_stripped/go_default_test; 93s remote
INFO: Elapsed time: 440.909s, Critical Path: 377.01s
INFO: 10000 processes: 9934 remote cache hit, 66 remote.
INFO: Build completed, 1 test FAILED, 10916 total actions
//cluster:clientbin_test                                        (cached) PASSED in 0.6s
//cluster:common_test                                           (cached) PASSED in 0.2s
//cluster:kube-util_test                                        (cached) PASSED in 0.2s
//cluster/gce/cos:go_default_test                               (cached) PASSED in 0.1s
//cluster/gce/custom:go_default_test                            (cached) PASSED in 0.1s
//cluster/gce/gci:go_default_test                               (cached) PASSED in 0.1s
... skipping 883 lines ...
//pkg/kubelet/metrics/collectors:go_default_test                         PASSED in 4.9s
//pkg/kubelet/nodestatus:go_default_test                                 PASSED in 5.0s
//pkg/kubelet/preemption:go_default_test                                 PASSED in 5.2s
//pkg/kubelet/server:go_default_test                                     PASSED in 7.1s
//pkg/kubelet/server/stats:go_default_test                               PASSED in 4.7s
//pkg/kubelet/stats:go_default_test                                      PASSED in 5.2s
//pkg/kubelet/cm/topologymanager:go_default_test                         FAILED in 5 out of 5 in 5.1s
  Stats over 3 runs: max = 5.1s, min = 3.5s, avg = 4.3s, dev = 0.6s
  /bazel-scratch/.cache/bazel/_bazel_root/7989b31489f31aee54f32688da2f0120/execroot/io_k8s_kubernetes/bazel-out/k8-fastbuild/testlogs/pkg/kubelet/cm/topologymanager/go_default_test/test.log
  /bazel-scratch/.cache/bazel/_bazel_root/7989b31489f31aee54f32688da2f0120/execroot/io_k8s_kubernetes/bazel-out/k8-fastbuild/testlogs/pkg/kubelet/cm/topologymanager/go_default_test/test.log
  /bazel-scratch/.cache/bazel/_bazel_root/7989b31489f31aee54f32688da2f0120/execroot/io_k8s_kubernetes/bazel-out/k8-fastbuild/testlogs/pkg/kubelet/cm/topologymanager/go_default_test/test_attempts/attempt_1.log
  /bazel-scratch/.cache/bazel/_bazel_root/7989b31489f31aee54f32688da2f0120/execroot/io_k8s_kubernetes/bazel-out/k8-fastbuild/testlogs/pkg/kubelet/cm/topologymanager/go_default_test/test.log
  /bazel-scratch/.cache/bazel/_bazel_root/7989b31489f31aee54f32688da2f0120/execroot/io_k8s_kubernetes/bazel-out/k8-fastbuild/testlogs/pkg/kubelet/cm/topologymanager/go_default_test/test_attempts/attempt_2.log

Executed 17 out of 896 tests: 895 tests pass and 1 fails remotely.
There were tests whose specified size is too big. Use the --test_verbose_timeout_warnings command line option to see which ones these are.
INFO: Build completed, 1 test FAILED, 10916 total actions
+ ../test-infra/hack/coalesce.py
+ exit 3