PR houjun41544: Check for pvcVolume with IsOperationPending() before markPVCResizeInProgress()
Result: FAILURE
Tests: 1 failed / 622 succeeded
Started: 2019-02-11 18:33
Elapsed: 27m39s
Revision:
Builder: gke-prow-containerd-pool-99179761-k7t7
Refs: master:f7c4389b, 67630:a5cde1a4
pod: 7158efb5-2e2b-11e9-ad96-0a580a6c081a
infra-commit: 60e8a7562
repo: k8s.io/kubernetes
repo-commit: 05ebc548982a4e57fde9c7858149a5bb98eeeda5
repos: {u'k8s.io/kubernetes': u'master:f7c4389b793cd6cf0de8d67f2c5db28b3985ad59,67630:a5cde1a463d49fe6f793d49a74ed7f13e2f1c4a3'}

Test Failures


k8s.io/kubernetes/test/integration/scheduler TestPreemptionRaces 19s

go test -v k8s.io/kubernetes/test/integration/scheduler -run TestPreemptionRaces$
I0211 18:53:33.587991  123569 services.go:33] Network range for service cluster IPs is unspecified. Defaulting to {10.0.0.0 ffffff00}.
I0211 18:53:33.588017  123569 services.go:45] Setting service IP to "10.0.0.1" (read-write).
I0211 18:53:33.588026  123569 master.go:272] Node port range unspecified. Defaulting to 30000-32767.
I0211 18:53:33.588042  123569 master.go:228] Using reconciler: 
I0211 18:53:33.589547  123569 storage_factory.go:285] storing podtemplates in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"095b9720-26db-46f8-ae16-7b8455ef396f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0211 18:53:33.589683  123569 clientconn.go:551] parsed scheme: ""
I0211 18:53:33.589710  123569 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0211 18:53:33.589750  123569 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0211 18:53:33.589806  123569 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0211 18:53:33.591210  123569 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0211 18:53:33.591310  123569 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0211 18:53:33.591560  123569 store.go:1310] Monitoring podtemplates count at <storage-prefix>//podtemplates
I0211 18:53:33.591622  123569 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"095b9720-26db-46f8-ae16-7b8455ef396f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0211 18:53:33.591649  123569 reflector.go:170] Listing and watching *core.PodTemplate from storage/cacher.go:/podtemplates
I0211 18:53:33.593144  123569 clientconn.go:551] parsed scheme: ""
I0211 18:53:33.593188  123569 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0211 18:53:33.593248  123569 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0211 18:53:33.593311  123569 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0211 18:53:33.593694  123569 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0211 18:53:33.593760  123569 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0211 18:53:33.593825  123569 store.go:1310] Monitoring events count at <storage-prefix>//events
I0211 18:53:33.593934  123569 storage_factory.go:285] storing limitranges in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"095b9720-26db-46f8-ae16-7b8455ef396f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0211 18:53:33.594018  123569 clientconn.go:551] parsed scheme: ""
I0211 18:53:33.594041  123569 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0211 18:53:33.594072  123569 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0211 18:53:33.594114  123569 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0211 18:53:33.594693  123569 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0211 18:53:33.594792  123569 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0211 18:53:33.595069  123569 store.go:1310] Monitoring limitranges count at <storage-prefix>//limitranges
I0211 18:53:33.595107  123569 storage_factory.go:285] storing resourcequotas in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"095b9720-26db-46f8-ae16-7b8455ef396f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0211 18:53:33.595158  123569 reflector.go:170] Listing and watching *core.LimitRange from storage/cacher.go:/limitranges
I0211 18:53:33.595208  123569 clientconn.go:551] parsed scheme: ""
I0211 18:53:33.595221  123569 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0211 18:53:33.595258  123569 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0211 18:53:33.595392  123569 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0211 18:53:33.595706  123569 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0211 18:53:33.595789  123569 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0211 18:53:33.596099  123569 store.go:1310] Monitoring resourcequotas count at <storage-prefix>//resourcequotas
I0211 18:53:33.596301  123569 storage_factory.go:285] storing secrets in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"095b9720-26db-46f8-ae16-7b8455ef396f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0211 18:53:33.596333  123569 reflector.go:170] Listing and watching *core.ResourceQuota from storage/cacher.go:/resourcequotas
I0211 18:53:33.596394  123569 clientconn.go:551] parsed scheme: ""
I0211 18:53:33.596417  123569 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0211 18:53:33.596461  123569 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0211 18:53:33.596744  123569 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0211 18:53:33.597072  123569 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0211 18:53:33.597263  123569 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0211 18:53:33.597468  123569 store.go:1310] Monitoring secrets count at <storage-prefix>//secrets
I0211 18:53:33.597623  123569 reflector.go:170] Listing and watching *core.Secret from storage/cacher.go:/secrets
I0211 18:53:33.597756  123569 storage_factory.go:285] storing persistentvolumes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"095b9720-26db-46f8-ae16-7b8455ef396f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0211 18:53:33.597841  123569 clientconn.go:551] parsed scheme: ""
I0211 18:53:33.597879  123569 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0211 18:53:33.597918  123569 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0211 18:53:33.597972  123569 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0211 18:53:33.598205  123569 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0211 18:53:33.598741  123569 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0211 18:53:33.599329  123569 store.go:1310] Monitoring persistentvolumes count at <storage-prefix>//persistentvolumes
I0211 18:53:33.599413  123569 reflector.go:170] Listing and watching *core.PersistentVolume from storage/cacher.go:/persistentvolumes
I0211 18:53:33.599495  123569 storage_factory.go:285] storing persistentvolumeclaims in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"095b9720-26db-46f8-ae16-7b8455ef396f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0211 18:53:33.599587  123569 clientconn.go:551] parsed scheme: ""
I0211 18:53:33.600880  123569 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0211 18:53:33.600983  123569 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0211 18:53:33.601090  123569 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0211 18:53:33.601762  123569 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0211 18:53:33.601868  123569 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0211 18:53:33.602198  123569 store.go:1310] Monitoring persistentvolumeclaims count at <storage-prefix>//persistentvolumeclaims
I0211 18:53:33.602227  123569 reflector.go:170] Listing and watching *core.PersistentVolumeClaim from storage/cacher.go:/persistentvolumeclaims
I0211 18:53:33.602391  123569 storage_factory.go:285] storing configmaps in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"095b9720-26db-46f8-ae16-7b8455ef396f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0211 18:53:33.602479  123569 clientconn.go:551] parsed scheme: ""
I0211 18:53:33.602520  123569 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0211 18:53:33.602760  123569 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0211 18:53:33.602826  123569 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0211 18:53:33.603103  123569 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0211 18:53:33.603195  123569 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0211 18:53:33.603737  123569 store.go:1310] Monitoring configmaps count at <storage-prefix>//configmaps
I0211 18:53:33.603802  123569 reflector.go:170] Listing and watching *core.ConfigMap from storage/cacher.go:/configmaps
I0211 18:53:33.604166  123569 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"095b9720-26db-46f8-ae16-7b8455ef396f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0211 18:53:33.604280  123569 clientconn.go:551] parsed scheme: ""
I0211 18:53:33.604810  123569 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0211 18:53:33.604865  123569 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0211 18:53:33.604912  123569 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0211 18:53:33.605376  123569 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0211 18:53:33.605435  123569 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0211 18:53:33.605624  123569 store.go:1310] Monitoring namespaces count at <storage-prefix>//namespaces
I0211 18:53:33.605776  123569 storage_factory.go:285] storing endpoints in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"095b9720-26db-46f8-ae16-7b8455ef396f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0211 18:53:33.605857  123569 clientconn.go:551] parsed scheme: ""
I0211 18:53:33.605879  123569 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0211 18:53:33.605922  123569 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0211 18:53:33.605974  123569 reflector.go:170] Listing and watching *core.Namespace from storage/cacher.go:/namespaces
I0211 18:53:33.606166  123569 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0211 18:53:33.606503  123569 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0211 18:53:33.606564  123569 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0211 18:53:33.606791  123569 store.go:1310] Monitoring endpoints count at <storage-prefix>//endpoints
I0211 18:53:33.606923  123569 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"095b9720-26db-46f8-ae16-7b8455ef396f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0211 18:53:33.606990  123569 clientconn.go:551] parsed scheme: ""
I0211 18:53:33.607003  123569 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0211 18:53:33.607032  123569 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0211 18:53:33.607077  123569 reflector.go:170] Listing and watching *core.Endpoints from storage/cacher.go:/endpoints
I0211 18:53:33.607197  123569 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0211 18:53:33.607443  123569 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0211 18:53:33.607491  123569 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0211 18:53:33.607794  123569 store.go:1310] Monitoring nodes count at <storage-prefix>//nodes
I0211 18:53:33.607816  123569 reflector.go:170] Listing and watching *core.Node from storage/cacher.go:/nodes
I0211 18:53:33.607933  123569 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"095b9720-26db-46f8-ae16-7b8455ef396f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0211 18:53:33.608002  123569 clientconn.go:551] parsed scheme: ""
I0211 18:53:33.608015  123569 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0211 18:53:33.608044  123569 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0211 18:53:33.608080  123569 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0211 18:53:33.608549  123569 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0211 18:53:33.608577  123569 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0211 18:53:33.608893  123569 store.go:1310] Monitoring pods count at <storage-prefix>//pods
I0211 18:53:33.609012  123569 reflector.go:170] Listing and watching *core.Pod from storage/cacher.go:/pods
I0211 18:53:33.609047  123569 storage_factory.go:285] storing serviceaccounts in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"095b9720-26db-46f8-ae16-7b8455ef396f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0211 18:53:33.609136  123569 clientconn.go:551] parsed scheme: ""
I0211 18:53:33.609161  123569 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0211 18:53:33.609225  123569 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0211 18:53:33.609271  123569 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0211 18:53:33.609484  123569 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0211 18:53:33.609797  123569 store.go:1310] Monitoring serviceaccounts count at <storage-prefix>//serviceaccounts
I0211 18:53:33.609877  123569 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0211 18:53:33.609915  123569 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"095b9720-26db-46f8-ae16-7b8455ef396f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0211 18:53:33.609969  123569 reflector.go:170] Listing and watching *core.ServiceAccount from storage/cacher.go:/serviceaccounts
I0211 18:53:33.609978  123569 clientconn.go:551] parsed scheme: ""
I0211 18:53:33.609989  123569 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0211 18:53:33.610020  123569 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0211 18:53:33.610123  123569 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0211 18:53:33.610682  123569 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0211 18:53:33.610998  123569 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0211 18:53:33.611013  123569 store.go:1310] Monitoring services count at <storage-prefix>//services
I0211 18:53:33.611049  123569 reflector.go:170] Listing and watching *core.Service from storage/cacher.go:/services
I0211 18:53:33.611041  123569 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"095b9720-26db-46f8-ae16-7b8455ef396f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0211 18:53:33.611122  123569 clientconn.go:551] parsed scheme: ""
I0211 18:53:33.611133  123569 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0211 18:53:33.611163  123569 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0211 18:53:33.611235  123569 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0211 18:53:33.611528  123569 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0211 18:53:33.611661  123569 clientconn.go:551] parsed scheme: ""
I0211 18:53:33.611681  123569 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0211 18:53:33.611713  123569 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0211 18:53:33.611785  123569 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0211 18:53:33.611811  123569 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0211 18:53:33.612078  123569 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0211 18:53:33.612253  123569 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"095b9720-26db-46f8-ae16-7b8455ef396f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0211 18:53:33.612322  123569 clientconn.go:551] parsed scheme: ""
I0211 18:53:33.612334  123569 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0211 18:53:33.612371  123569 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0211 18:53:33.612451  123569 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0211 18:53:33.612479  123569 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0211 18:53:33.612819  123569 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0211 18:53:33.612853  123569 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0211 18:53:33.613253  123569 store.go:1310] Monitoring replicationcontrollers count at <storage-prefix>//replicationcontrollers
I0211 18:53:33.613324  123569 reflector.go:170] Listing and watching *core.ReplicationController from storage/cacher.go:/replicationcontrollers
I0211 18:53:33.625325  123569 master.go:407] Skipping disabled API group "auditregistration.k8s.io".
I0211 18:53:33.625369  123569 master.go:415] Enabling API group "authentication.k8s.io".
I0211 18:53:33.625387  123569 master.go:415] Enabling API group "authorization.k8s.io".
I0211 18:53:33.625512  123569 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"095b9720-26db-46f8-ae16-7b8455ef396f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0211 18:53:33.625656  123569 clientconn.go:551] parsed scheme: ""
I0211 18:53:33.625681  123569 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0211 18:53:33.625732  123569 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0211 18:53:33.625797  123569 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0211 18:53:33.626281  123569 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0211 18:53:33.626338  123569 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0211 18:53:33.626597  123569 store.go:1310] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I0211 18:53:33.626690  123569 reflector.go:170] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I0211 18:53:33.626802  123569 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"095b9720-26db-46f8-ae16-7b8455ef396f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0211 18:53:33.626898  123569 clientconn.go:551] parsed scheme: ""
I0211 18:53:33.626912  123569 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0211 18:53:33.626943  123569 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0211 18:53:33.626984  123569 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0211 18:53:33.627559  123569 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0211 18:53:33.627704  123569 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0211 18:53:33.627955  123569 store.go:1310] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I0211 18:53:33.627982  123569 reflector.go:170] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I0211 18:53:33.628203  123569 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"095b9720-26db-46f8-ae16-7b8455ef396f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0211 18:53:33.628323  123569 clientconn.go:551] parsed scheme: ""
I0211 18:53:33.628345  123569 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0211 18:53:33.628417  123569 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0211 18:53:33.628649  123569 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0211 18:53:33.629536  123569 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0211 18:53:33.629767  123569 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0211 18:53:33.630429  123569 store.go:1310] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I0211 18:53:33.630460  123569 master.go:415] Enabling API group "autoscaling".
I0211 18:53:33.630484  123569 reflector.go:170] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I0211 18:53:33.630620  123569 storage_factory.go:285] storing jobs.batch in batch/v1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"095b9720-26db-46f8-ae16-7b8455ef396f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0211 18:53:33.630714  123569 clientconn.go:551] parsed scheme: ""
I0211 18:53:33.630726  123569 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0211 18:53:33.630759  123569 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0211 18:53:33.630827  123569 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0211 18:53:33.631222  123569 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0211 18:53:33.631332  123569 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0211 18:53:33.631581  123569 store.go:1310] Monitoring jobs.batch count at <storage-prefix>//jobs
I0211 18:53:33.631832  123569 storage_factory.go:285] storing cronjobs.batch in batch/v1beta1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"095b9720-26db-46f8-ae16-7b8455ef396f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0211 18:53:33.631944  123569 clientconn.go:551] parsed scheme: ""
I0211 18:53:33.631981  123569 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0211 18:53:33.632027  123569 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0211 18:53:33.631958  123569 reflector.go:170] Listing and watching *batch.Job from storage/cacher.go:/jobs
I0211 18:53:33.632089  123569 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0211 18:53:33.633733  123569 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0211 18:53:33.633822  123569 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0211 18:53:33.633981  123569 store.go:1310] Monitoring cronjobs.batch count at <storage-prefix>//cronjobs
I0211 18:53:33.634008  123569 master.go:415] Enabling API group "batch".
I0211 18:53:33.634084  123569 reflector.go:170] Listing and watching *batch.CronJob from storage/cacher.go:/cronjobs
I0211 18:53:33.634298  123569 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"095b9720-26db-46f8-ae16-7b8455ef396f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0211 18:53:33.634556  123569 clientconn.go:551] parsed scheme: ""
I0211 18:53:33.634593  123569 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0211 18:53:33.634654  123569 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0211 18:53:33.634723  123569 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0211 18:53:33.634990  123569 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0211 18:53:33.635036  123569 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0211 18:53:33.635424  123569 store.go:1310] Monitoring certificatesigningrequests.certificates.k8s.io count at <storage-prefix>//certificatesigningrequests
I0211 18:53:33.635454  123569 master.go:415] Enabling API group "certificates.k8s.io".
I0211 18:53:33.635523  123569 reflector.go:170] Listing and watching *certificates.CertificateSigningRequest from storage/cacher.go:/certificatesigningrequests
I0211 18:53:33.635577  123569 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"095b9720-26db-46f8-ae16-7b8455ef396f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0211 18:53:33.635679  123569 clientconn.go:551] parsed scheme: ""
I0211 18:53:33.635692  123569 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0211 18:53:33.635720  123569 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0211 18:53:33.635755  123569 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0211 18:53:33.635989  123569 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0211 18:53:33.636312  123569 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0211 18:53:33.636326  123569 store.go:1310] Monitoring leases.coordination.k8s.io count at <storage-prefix>//leases
I0211 18:53:33.636346  123569 reflector.go:170] Listing and watching *coordination.Lease from storage/cacher.go:/leases
I0211 18:53:33.636480  123569 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"095b9720-26db-46f8-ae16-7b8455ef396f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0211 18:53:33.636563  123569 clientconn.go:551] parsed scheme: ""
I0211 18:53:33.636578  123569 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0211 18:53:33.636634  123569 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0211 18:53:33.636696  123569 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0211 18:53:33.636899  123569 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0211 18:53:33.636963  123569 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0211 18:53:33.637116  123569 store.go:1310] Monitoring leases.coordination.k8s.io count at <storage-prefix>//leases
I0211 18:53:33.637147  123569 master.go:415] Enabling API group "coordination.k8s.io".
I0211 18:53:33.637302  123569 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"095b9720-26db-46f8-ae16-7b8455ef396f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0211 18:53:33.637392  123569 reflector.go:170] Listing and watching *coordination.Lease from storage/cacher.go:/leases
I0211 18:53:33.637410  123569 clientconn.go:551] parsed scheme: ""
I0211 18:53:33.637550  123569 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0211 18:53:33.637700  123569 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0211 18:53:33.637758  123569 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0211 18:53:33.638063  123569 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0211 18:53:33.638113  123569 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0211 18:53:33.638309  123569 store.go:1310] Monitoring replicationcontrollers count at <storage-prefix>//replicationcontrollers
I0211 18:53:33.638350  123569 reflector.go:170] Listing and watching *core.ReplicationController from storage/cacher.go:/replicationcontrollers
I0211 18:53:33.638438  123569 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"095b9720-26db-46f8-ae16-7b8455ef396f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0211 18:53:33.638510  123569 clientconn.go:551] parsed scheme: ""
I0211 18:53:33.638520  123569 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0211 18:53:33.638549  123569 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0211 18:53:33.638590  123569 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0211 18:53:33.638811  123569 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0211 18:53:33.638924  123569 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0211 18:53:33.639134  123569 store.go:1310] Monitoring daemonsets.apps count at <storage-prefix>//daemonsets
I0211 18:53:33.639203  123569 reflector.go:170] Listing and watching *apps.DaemonSet from storage/cacher.go:/daemonsets
I0211 18:53:33.639291  123569 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"095b9720-26db-46f8-ae16-7b8455ef396f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0211 18:53:33.639381  123569 clientconn.go:551] parsed scheme: ""
I0211 18:53:33.639397  123569 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0211 18:53:33.639454  123569 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0211 18:53:33.639487  123569 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0211 18:53:33.639751  123569 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0211 18:53:33.639790  123569 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0211 18:53:33.640131  123569 store.go:1310] Monitoring deployments.apps count at <storage-prefix>//deployments
I0211 18:53:33.640285  123569 storage_factory.go:285] storing ingresses.extensions in extensions/v1beta1, reading as extensions/__internal from storagebackend.Config{Type:"", Prefix:"095b9720-26db-46f8-ae16-7b8455ef396f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0211 18:53:33.640356  123569 clientconn.go:551] parsed scheme: ""
I0211 18:53:33.640356  123569 reflector.go:170] Listing and watching *apps.Deployment from storage/cacher.go:/deployments
I0211 18:53:33.640372  123569 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0211 18:53:33.640397  123569 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0211 18:53:33.640446  123569 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0211 18:53:33.640704  123569 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0211 18:53:33.640785  123569 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0211 18:53:33.640931  123569 store.go:1310] Monitoring ingresses.extensions count at <storage-prefix>//ingresses
I0211 18:53:33.640989  123569 reflector.go:170] Listing and watching *extensions.Ingress from storage/cacher.go:/ingresses
I0211 18:53:33.641053  123569 storage_factory.go:285] storing podsecuritypolicies.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"095b9720-26db-46f8-ae16-7b8455ef396f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0211 18:53:33.641130  123569 clientconn.go:551] parsed scheme: ""
I0211 18:53:33.641141  123569 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0211 18:53:33.641194  123569 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0211 18:53:33.641231  123569 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0211 18:53:33.642043  123569 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0211 18:53:33.642131  123569 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0211 18:53:33.642432  123569 store.go:1310] Monitoring podsecuritypolicies.policy count at <storage-prefix>//podsecuritypolicies
I0211 18:53:33.642552  123569 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"095b9720-26db-46f8-ae16-7b8455ef396f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0211 18:53:33.642617  123569 reflector.go:170] Listing and watching *policy.PodSecurityPolicy from storage/cacher.go:/podsecuritypolicies
I0211 18:53:33.642639  123569 clientconn.go:551] parsed scheme: ""
I0211 18:53:33.642652  123569 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0211 18:53:33.642678  123569 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0211 18:53:33.642719  123569 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0211 18:53:33.642980  123569 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0211 18:53:33.643080  123569 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0211 18:53:33.643240  123569 store.go:1310] Monitoring replicasets.apps count at <storage-prefix>//replicasets
I0211 18:53:33.643328  123569 reflector.go:170] Listing and watching *apps.ReplicaSet from storage/cacher.go:/replicasets
I0211 18:53:33.643364  123569 storage_factory.go:285] storing networkpolicies.networking.k8s.io in networking.k8s.io/v1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"095b9720-26db-46f8-ae16-7b8455ef396f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0211 18:53:33.643429  123569 clientconn.go:551] parsed scheme: ""
I0211 18:53:33.643441  123569 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0211 18:53:33.643468  123569 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0211 18:53:33.643497  123569 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0211 18:53:33.643725  123569 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0211 18:53:33.643927  123569 store.go:1310] Monitoring networkpolicies.networking.k8s.io count at <storage-prefix>//networkpolicies
I0211 18:53:33.643943  123569 master.go:415] Enabling API group "extensions".
I0211 18:53:33.644031  123569 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0211 18:53:33.644065  123569 storage_factory.go:285] storing networkpolicies.networking.k8s.io in networking.k8s.io/v1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"095b9720-26db-46f8-ae16-7b8455ef396f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0211 18:53:33.644137  123569 clientconn.go:551] parsed scheme: ""
I0211 18:53:33.644149  123569 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0211 18:53:33.644221  123569 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0211 18:53:33.644274  123569 reflector.go:170] Listing and watching *networking.NetworkPolicy from storage/cacher.go:/networkpolicies
I0211 18:53:33.644426  123569 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0211 18:53:33.644624  123569 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0211 18:53:33.644738  123569 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0211 18:53:33.644840  123569 store.go:1310] Monitoring networkpolicies.networking.k8s.io count at <storage-prefix>//networkpolicies
I0211 18:53:33.644856  123569 master.go:415] Enabling API group "networking.k8s.io".
I0211 18:53:33.644968  123569 storage_factory.go:285] storing poddisruptionbudgets.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"095b9720-26db-46f8-ae16-7b8455ef396f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0211 18:53:33.645034  123569 clientconn.go:551] parsed scheme: ""
I0211 18:53:33.645045  123569 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0211 18:53:33.645070  123569 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0211 18:53:33.645112  123569 reflector.go:170] Listing and watching *networking.NetworkPolicy from storage/cacher.go:/networkpolicies
I0211 18:53:33.645237  123569 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0211 18:53:33.645420  123569 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0211 18:53:33.645488  123569 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0211 18:53:33.645649  123569 store.go:1310] Monitoring poddisruptionbudgets.policy count at <storage-prefix>//poddisruptionbudgets
I0211 18:53:33.645729  123569 reflector.go:170] Listing and watching *policy.PodDisruptionBudget from storage/cacher.go:/poddisruptionbudgets
I0211 18:53:33.645768  123569 storage_factory.go:285] storing podsecuritypolicies.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"095b9720-26db-46f8-ae16-7b8455ef396f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0211 18:53:33.645826  123569 clientconn.go:551] parsed scheme: ""
I0211 18:53:33.645838  123569 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0211 18:53:33.645864  123569 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0211 18:53:33.645894  123569 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0211 18:53:33.646063  123569 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0211 18:53:33.646108  123569 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0211 18:53:33.646337  123569 store.go:1310] Monitoring podsecuritypolicies.policy count at <storage-prefix>//podsecuritypolicies
I0211 18:53:33.646354  123569 master.go:415] Enabling API group "policy".
I0211 18:53:33.646383  123569 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"095b9720-26db-46f8-ae16-7b8455ef396f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0211 18:53:33.646410  123569 reflector.go:170] Listing and watching *policy.PodSecurityPolicy from storage/cacher.go:/podsecuritypolicies
I0211 18:53:33.646443  123569 clientconn.go:551] parsed scheme: ""
I0211 18:53:33.646454  123569 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0211 18:53:33.646504  123569 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0211 18:53:33.646545  123569 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0211 18:53:33.646754  123569 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0211 18:53:33.646917  123569 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0211 18:53:33.646954  123569 store.go:1310] Monitoring roles.rbac.authorization.k8s.io count at <storage-prefix>//roles
I0211 18:53:33.646984  123569 reflector.go:170] Listing and watching *rbac.Role from storage/cacher.go:/roles
I0211 18:53:33.647067  123569 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"095b9720-26db-46f8-ae16-7b8455ef396f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0211 18:53:33.647126  123569 clientconn.go:551] parsed scheme: ""
I0211 18:53:33.647137  123569 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0211 18:53:33.647163  123569 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0211 18:53:33.647834  123569 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0211 18:53:33.648226  123569 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0211 18:53:33.648361  123569 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0211 18:53:33.648502  123569 store.go:1310] Monitoring rolebindings.rbac.authorization.k8s.io count at <storage-prefix>//rolebindings
I0211 18:53:33.648538  123569 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"095b9720-26db-46f8-ae16-7b8455ef396f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0211 18:53:33.648564  123569 reflector.go:170] Listing and watching *rbac.RoleBinding from storage/cacher.go:/rolebindings
I0211 18:53:33.648618  123569 clientconn.go:551] parsed scheme: ""
I0211 18:53:33.648633  123569 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0211 18:53:33.648666  123569 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0211 18:53:33.648768  123569 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0211 18:53:33.649404  123569 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0211 18:53:33.649728  123569 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0211 18:53:33.650007  123569 store.go:1310] Monitoring clusterroles.rbac.authorization.k8s.io count at <storage-prefix>//clusterroles
I0211 18:53:33.650054  123569 reflector.go:170] Listing and watching *rbac.ClusterRole from storage/cacher.go:/clusterroles
I0211 18:53:33.650208  123569 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"095b9720-26db-46f8-ae16-7b8455ef396f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0211 18:53:33.650301  123569 clientconn.go:551] parsed scheme: ""
I0211 18:53:33.650323  123569 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0211 18:53:33.650363  123569 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0211 18:53:33.650403  123569 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0211 18:53:33.650624  123569 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0211 18:53:33.650653  123569 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0211 18:53:33.650900  123569 store.go:1310] Monitoring clusterrolebindings.rbac.authorization.k8s.io count at <storage-prefix>//clusterrolebindings
I0211 18:53:33.650950  123569 reflector.go:170] Listing and watching *rbac.ClusterRoleBinding from storage/cacher.go:/clusterrolebindings
I0211 18:53:33.651021  123569 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"095b9720-26db-46f8-ae16-7b8455ef396f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0211 18:53:33.651090  123569 clientconn.go:551] parsed scheme: ""
I0211 18:53:33.651101  123569 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0211 18:53:33.651143  123569 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0211 18:53:33.651208  123569 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0211 18:53:33.651443  123569 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0211 18:53:33.651647  123569 store.go:1310] Monitoring roles.rbac.authorization.k8s.io count at <storage-prefix>//roles
I0211 18:53:33.651657  123569 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0211 18:53:33.651721  123569 reflector.go:170] Listing and watching *rbac.Role from storage/cacher.go:/roles
I0211 18:53:33.651757  123569 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"095b9720-26db-46f8-ae16-7b8455ef396f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0211 18:53:33.651829  123569 clientconn.go:551] parsed scheme: ""
I0211 18:53:33.651838  123569 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0211 18:53:33.651865  123569 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0211 18:53:33.651990  123569 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0211 18:53:33.652295  123569 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0211 18:53:33.652382  123569 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0211 18:53:33.652530  123569 store.go:1310] Monitoring rolebindings.rbac.authorization.k8s.io count at <storage-prefix>//rolebindings
I0211 18:53:33.652562  123569 reflector.go:170] Listing and watching *rbac.RoleBinding from storage/cacher.go:/rolebindings
I0211 18:53:33.652572  123569 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"095b9720-26db-46f8-ae16-7b8455ef396f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0211 18:53:33.652656  123569 clientconn.go:551] parsed scheme: ""
I0211 18:53:33.652674  123569 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0211 18:53:33.652711  123569 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0211 18:53:33.652750  123569 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0211 18:53:33.652954  123569 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0211 18:53:33.653047  123569 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0211 18:53:33.653269  123569 store.go:1310] Monitoring clusterroles.rbac.authorization.k8s.io count at <storage-prefix>//clusterroles
I0211 18:53:33.653308  123569 reflector.go:170] Listing and watching *rbac.ClusterRole from storage/cacher.go:/clusterroles
I0211 18:53:33.653398  123569 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"095b9720-26db-46f8-ae16-7b8455ef396f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0211 18:53:33.653462  123569 clientconn.go:551] parsed scheme: ""
I0211 18:53:33.653473  123569 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0211 18:53:33.653500  123569 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0211 18:53:33.653542  123569 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0211 18:53:33.653810  123569 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0211 18:53:33.653905  123569 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0211 18:53:33.654103  123569 store.go:1310] Monitoring clusterrolebindings.rbac.authorization.k8s.io count at <storage-prefix>//clusterrolebindings
I0211 18:53:33.654147  123569 master.go:415] Enabling API group "rbac.authorization.k8s.io".
I0211 18:53:33.654245  123569 reflector.go:170] Listing and watching *rbac.ClusterRoleBinding from storage/cacher.go:/clusterrolebindings
I0211 18:53:33.655920  123569 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1beta1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"095b9720-26db-46f8-ae16-7b8455ef396f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0211 18:53:33.656047  123569 clientconn.go:551] parsed scheme: ""
I0211 18:53:33.656073  123569 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0211 18:53:33.656102  123569 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0211 18:53:33.656194  123569 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0211 18:53:33.656776  123569 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0211 18:53:33.656845  123569 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0211 18:53:33.657112  123569 store.go:1310] Monitoring priorityclasses.scheduling.k8s.io count at <storage-prefix>//priorityclasses
I0211 18:53:33.657133  123569 master.go:415] Enabling API group "scheduling.k8s.io".
I0211 18:53:33.657161  123569 master.go:407] Skipping disabled API group "settings.k8s.io".
I0211 18:53:33.657236  123569 reflector.go:170] Listing and watching *scheduling.PriorityClass from storage/cacher.go:/priorityclasses
I0211 18:53:33.657292  123569 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"095b9720-26db-46f8-ae16-7b8455ef396f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0211 18:53:33.657368  123569 clientconn.go:551] parsed scheme: ""
I0211 18:53:33.657377  123569 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0211 18:53:33.657398  123569 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0211 18:53:33.657430  123569 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0211 18:53:33.657674  123569 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0211 18:53:33.657788  123569 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0211 18:53:33.657962  123569 store.go:1310] Monitoring storageclasses.storage.k8s.io count at <storage-prefix>//storageclasses
I0211 18:53:33.657992  123569 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"095b9720-26db-46f8-ae16-7b8455ef396f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0211 18:53:33.658004  123569 reflector.go:170] Listing and watching *storage.StorageClass from storage/cacher.go:/storageclasses
I0211 18:53:33.658085  123569 clientconn.go:551] parsed scheme: ""
I0211 18:53:33.658098  123569 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0211 18:53:33.658128  123569 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0211 18:53:33.658222  123569 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0211 18:53:33.658484  123569 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0211 18:53:33.658584  123569 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0211 18:53:33.658867  123569 store.go:1310] Monitoring volumeattachments.storage.k8s.io count at <storage-prefix>//volumeattachments
I0211 18:53:33.658958  123569 reflector.go:170] Listing and watching *storage.VolumeAttachment from storage/cacher.go:/volumeattachments
I0211 18:53:33.659002  123569 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"095b9720-26db-46f8-ae16-7b8455ef396f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0211 18:53:33.659076  123569 clientconn.go:551] parsed scheme: ""
I0211 18:53:33.659090  123569 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0211 18:53:33.659117  123569 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0211 18:53:33.659244  123569 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0211 18:53:33.659487  123569 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0211 18:53:33.659526  123569 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0211 18:53:33.659755  123569 store.go:1310] Monitoring storageclasses.storage.k8s.io count at <storage-prefix>//storageclasses
I0211 18:53:33.659792  123569 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"095b9720-26db-46f8-ae16-7b8455ef396f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0211 18:53:33.659823  123569 reflector.go:170] Listing and watching *storage.StorageClass from storage/cacher.go:/storageclasses
I0211 18:53:33.659863  123569 clientconn.go:551] parsed scheme: ""
I0211 18:53:33.659874  123569 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0211 18:53:33.659900  123569 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0211 18:53:33.659982  123569 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0211 18:53:33.660221  123569 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0211 18:53:33.660430  123569 store.go:1310] Monitoring volumeattachments.storage.k8s.io count at <storage-prefix>//volumeattachments
I0211 18:53:33.660448  123569 master.go:415] Enabling API group "storage.k8s.io".
I0211 18:53:33.660577  123569 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"095b9720-26db-46f8-ae16-7b8455ef396f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0211 18:53:33.660623  123569 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0211 18:53:33.660659  123569 clientconn.go:551] parsed scheme: ""
I0211 18:53:33.660669  123569 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0211 18:53:33.660694  123569 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0211 18:53:33.660749  123569 reflector.go:170] Listing and watching *storage.VolumeAttachment from storage/cacher.go:/volumeattachments
I0211 18:53:33.660896  123569 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0211 18:53:33.661694  123569 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0211 18:53:33.661740  123569 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0211 18:53:33.662193  123569 store.go:1310] Monitoring deployments.apps count at <storage-prefix>//deployments
I0211 18:53:33.662323  123569 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"095b9720-26db-46f8-ae16-7b8455ef396f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0211 18:53:33.662390  123569 clientconn.go:551] parsed scheme: ""
I0211 18:53:33.662403  123569 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0211 18:53:33.662431  123569 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0211 18:53:33.662510  123569 reflector.go:170] Listing and watching *apps.Deployment from storage/cacher.go:/deployments
I0211 18:53:33.662565  123569 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0211 18:53:33.662978  123569 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0211 18:53:33.663074  123569 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0211 18:53:33.663266  123569 store.go:1310] Monitoring statefulsets.apps count at <storage-prefix>//statefulsets
I0211 18:53:33.663294  123569 reflector.go:170] Listing and watching *apps.StatefulSet from storage/cacher.go:/statefulsets
I0211 18:53:33.663402  123569 storage_factory.go:285] storing controllerrevisions.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"095b9720-26db-46f8-ae16-7b8455ef396f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0211 18:53:33.663470  123569 clientconn.go:551] parsed scheme: ""
I0211 18:53:33.663483  123569 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0211 18:53:33.663510  123569 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0211 18:53:33.663549  123569 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0211 18:53:33.663787  123569 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0211 18:53:33.663985  123569 store.go:1310] Monitoring controllerrevisions.apps count at <storage-prefix>//controllerrevisions
I0211 18:53:33.663985  123569 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0211 18:53:33.664013  123569 reflector.go:170] Listing and watching *apps.ControllerRevision from storage/cacher.go:/controllerrevisions
I0211 18:53:33.664094  123569 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"095b9720-26db-46f8-ae16-7b8455ef396f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0211 18:53:33.664233  123569 clientconn.go:551] parsed scheme: ""
I0211 18:53:33.664247  123569 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0211 18:53:33.664274  123569 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0211 18:53:33.664346  123569 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0211 18:53:33.664561  123569 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0211 18:53:33.664590  123569 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0211 18:53:33.664783  123569 store.go:1310] Monitoring deployments.apps count at <storage-prefix>//deployments
I0211 18:53:33.664869  123569 reflector.go:170] Listing and watching *apps.Deployment from storage/cacher.go:/deployments
I0211 18:53:33.664927  123569 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"095b9720-26db-46f8-ae16-7b8455ef396f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0211 18:53:33.664997  123569 clientconn.go:551] parsed scheme: ""
I0211 18:53:33.665008  123569 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0211 18:53:33.665037  123569 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0211 18:53:33.665094  123569 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0211 18:53:33.665349  123569 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0211 18:53:33.665374  123569 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0211 18:53:33.665585  123569 store.go:1310] Monitoring statefulsets.apps count at <storage-prefix>//statefulsets
I0211 18:53:33.665700  123569 reflector.go:170] Listing and watching *apps.StatefulSet from storage/cacher.go:/statefulsets
I0211 18:53:33.665789  123569 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"095b9720-26db-46f8-ae16-7b8455ef396f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0211 18:53:33.665865  123569 clientconn.go:551] parsed scheme: ""
I0211 18:53:33.665877  123569 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0211 18:53:33.665915  123569 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0211 18:53:33.665962  123569 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0211 18:53:33.666284  123569 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0211 18:53:33.666522  123569 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0211 18:53:33.666588  123569 store.go:1310] Monitoring daemonsets.apps count at <storage-prefix>//daemonsets
I0211 18:53:33.666648  123569 reflector.go:170] Listing and watching *apps.DaemonSet from storage/cacher.go:/daemonsets
I0211 18:53:33.666758  123569 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"095b9720-26db-46f8-ae16-7b8455ef396f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0211 18:53:33.666839  123569 clientconn.go:551] parsed scheme: ""
I0211 18:53:33.666853  123569 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0211 18:53:33.666880  123569 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0211 18:53:33.666966  123569 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0211 18:53:33.667198  123569 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0211 18:53:33.667268  123569 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0211 18:53:33.667466  123569 store.go:1310] Monitoring replicasets.apps count at <storage-prefix>//replicasets
I0211 18:53:33.667622  123569 storage_factory.go:285] storing controllerrevisions.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"095b9720-26db-46f8-ae16-7b8455ef396f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0211 18:53:33.667793  123569 clientconn.go:551] parsed scheme: ""
I0211 18:53:33.667840  123569 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0211 18:53:33.667883  123569 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0211 18:53:33.667928  123569 reflector.go:170] Listing and watching *apps.ReplicaSet from storage/cacher.go:/replicasets
I0211 18:53:33.668069  123569 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0211 18:53:33.668332  123569 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0211 18:53:33.668439  123569 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0211 18:53:33.668675  123569 store.go:1310] Monitoring controllerrevisions.apps count at <storage-prefix>//controllerrevisions
I0211 18:53:33.668759  123569 reflector.go:170] Listing and watching *apps.ControllerRevision from storage/cacher.go:/controllerrevisions
I0211 18:53:33.668814  123569 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"095b9720-26db-46f8-ae16-7b8455ef396f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0211 18:53:33.668896  123569 clientconn.go:551] parsed scheme: ""
I0211 18:53:33.668923  123569 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0211 18:53:33.668972  123569 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0211 18:53:33.669023  123569 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0211 18:53:33.669271  123569 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0211 18:53:33.669305  123569 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0211 18:53:33.669540  123569 store.go:1310] Monitoring deployments.apps count at <storage-prefix>//deployments
I0211 18:53:33.669709  123569 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"095b9720-26db-46f8-ae16-7b8455ef396f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0211 18:53:33.669783  123569 clientconn.go:551] parsed scheme: ""
I0211 18:53:33.669807  123569 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0211 18:53:33.669825  123569 reflector.go:170] Listing and watching *apps.Deployment from storage/cacher.go:/deployments
I0211 18:53:33.669850  123569 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0211 18:53:33.670000  123569 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0211 18:53:33.670368  123569 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0211 18:53:33.670576  123569 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0211 18:53:33.670669  123569 store.go:1310] Monitoring statefulsets.apps count at <storage-prefix>//statefulsets
I0211 18:53:33.671082  123569 reflector.go:170] Listing and watching *apps.StatefulSet from storage/cacher.go:/statefulsets
I0211 18:53:33.671251  123569 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"095b9720-26db-46f8-ae16-7b8455ef396f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0211 18:53:33.671499  123569 clientconn.go:551] parsed scheme: ""
I0211 18:53:33.671534  123569 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0211 18:53:33.671584  123569 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0211 18:53:33.671714  123569 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0211 18:53:33.672550  123569 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0211 18:53:33.672637  123569 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0211 18:53:33.672937  123569 store.go:1310] Monitoring daemonsets.apps count at <storage-prefix>//daemonsets
I0211 18:53:33.673022  123569 reflector.go:170] Listing and watching *apps.DaemonSet from storage/cacher.go:/daemonsets
I0211 18:53:33.673199  123569 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"095b9720-26db-46f8-ae16-7b8455ef396f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0211 18:53:33.673676  123569 clientconn.go:551] parsed scheme: ""
I0211 18:53:33.673724  123569 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0211 18:53:33.673770  123569 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0211 18:53:33.673825  123569 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0211 18:53:33.674276  123569 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0211 18:53:33.674899  123569 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0211 18:53:33.674964  123569 store.go:1310] Monitoring replicasets.apps count at <storage-prefix>//replicasets
I0211 18:53:33.675094  123569 storage_factory.go:285] storing controllerrevisions.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"095b9720-26db-46f8-ae16-7b8455ef396f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0211 18:53:33.675154  123569 clientconn.go:551] parsed scheme: ""
I0211 18:53:33.675166  123569 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0211 18:53:33.675218  123569 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0211 18:53:33.675263  123569 reflector.go:170] Listing and watching *apps.ReplicaSet from storage/cacher.go:/replicasets
I0211 18:53:33.675407  123569 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0211 18:53:33.676827  123569 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0211 18:53:33.677043  123569 store.go:1310] Monitoring controllerrevisions.apps count at <storage-prefix>//controllerrevisions
I0211 18:53:33.677055  123569 master.go:415] Enabling API group "apps".
I0211 18:53:33.677083  123569 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"095b9720-26db-46f8-ae16-7b8455ef396f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0211 18:53:33.677220  123569 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0211 18:53:33.677499  123569 reflector.go:170] Listing and watching *apps.ControllerRevision from storage/cacher.go:/controllerrevisions
I0211 18:53:33.678766  123569 clientconn.go:551] parsed scheme: ""
I0211 18:53:33.678790  123569 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0211 18:53:33.678826  123569 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0211 18:53:33.678873  123569 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0211 18:53:33.679142  123569 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0211 18:53:33.679249  123569 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0211 18:53:33.679418  123569 store.go:1310] Monitoring validatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//validatingwebhookconfigurations
I0211 18:53:33.679462  123569 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"095b9720-26db-46f8-ae16-7b8455ef396f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0211 18:53:33.679527  123569 reflector.go:170] Listing and watching *admissionregistration.ValidatingWebhookConfiguration from storage/cacher.go:/validatingwebhookconfigurations
I0211 18:53:33.679617  123569 clientconn.go:551] parsed scheme: ""
I0211 18:53:33.679638  123569 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0211 18:53:33.679668  123569 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0211 18:53:33.681364  123569 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0211 18:53:33.681647  123569 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0211 18:53:33.681910  123569 store.go:1310] Monitoring mutatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//mutatingwebhookconfigurations
I0211 18:53:33.681921  123569 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0211 18:53:33.681935  123569 master.go:415] Enabling API group "admissionregistration.k8s.io".
I0211 18:53:33.681958  123569 reflector.go:170] Listing and watching *admissionregistration.MutatingWebhookConfiguration from storage/cacher.go:/mutatingwebhookconfigurations
I0211 18:53:33.681968  123569 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"095b9720-26db-46f8-ae16-7b8455ef396f", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0211 18:53:33.682187  123569 clientconn.go:551] parsed scheme: ""
I0211 18:53:33.682211  123569 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0211 18:53:33.682245  123569 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0211 18:53:33.682294  123569 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0211 18:53:33.682485  123569 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0211 18:53:33.682564  123569 store.go:1310] Monitoring events count at <storage-prefix>//events
I0211 18:53:33.682625  123569 master.go:415] Enabling API group "events.k8s.io".
I0211 18:53:33.682796  123569 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0211 18:53:33.687080  123569 genericapiserver.go:330] Skipping API batch/v2alpha1 because it has no resources.
W0211 18:53:33.698001  123569 genericapiserver.go:330] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
W0211 18:53:33.698557  123569 genericapiserver.go:330] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
W0211 18:53:33.700392  123569 genericapiserver.go:330] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
I0211 18:53:33.712344  123569 healthz.go:170] healthz check etcd failed: etcd client connection not yet established
I0211 18:53:33.712383  123569 healthz.go:170] healthz check poststarthook/bootstrap-controller failed: not finished
I0211 18:53:33.712392  123569 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0211 18:53:33.712399  123569 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0211 18:53:33.712404  123569 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0211 18:53:33.712522  123569 wrap.go:47] GET /healthz: (345.498µs) 500
goroutine 28343 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00d4930a0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00d4930a0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00dca0380, 0x1f4)
net/http.Error(0x7f4baaa21718, 0xc002f59100, 0xc003899ba0, 0x18a, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f4baaa21718, 0xc002f59100, 0xc002f0ea00)
net/http.HandlerFunc.ServeHTTP(0xc00e13f6e0, 0x7f4baaa21718, 0xc002f59100, 0xc002f0ea00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc01940cb40, 0x7f4baaa21718, 0xc002f59100, 0xc002f0ea00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc0033436c0, 0x7f4baaa21718, 0xc002f59100, 0xc002f0ea00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x41c1830, 0xe, 0xc018118510, 0xc0033436c0, 0x7f4baaa21718, 0xc002f59100, 0xc002f0ea00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f4baaa21718, 0xc002f59100, 0xc002f0ea00)
net/http.HandlerFunc.ServeHTTP(0xc00d9f09c0, 0x7f4baaa21718, 0xc002f59100, 0xc002f0ea00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f4baaa21718, 0xc002f59100, 0xc002f0ea00)
net/http.HandlerFunc.ServeHTTP(0xc0148aa600, 0x7f4baaa21718, 0xc002f59100, 0xc002f0ea00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f4baaa21718, 0xc002f59100, 0xc002f0ea00)
net/http.HandlerFunc.ServeHTTP(0xc00d9f0a00, 0x7f4baaa21718, 0xc002f59100, 0xc002f0ea00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f4baaa21718, 0xc002f59100, 0xc002f0e900)
net/http.HandlerFunc.ServeHTTP(0xc0149ed040, 0x7f4baaa21718, 0xc002f59100, 0xc002f0e900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc007d291a0, 0xc00d5b8100, 0x60dec80, 0xc002f59100, 0xc002f0e900)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[-]poststarthook/bootstrap-controller failed: reason withheld\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:56298]
I0211 18:53:33.715515  123569 wrap.go:47] GET /api/v1/services: (1.176209ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56298]
I0211 18:53:33.719310  123569 wrap.go:47] GET /api/v1/services: (1.068533ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56298]
I0211 18:53:33.722840  123569 wrap.go:47] GET /api/v1/namespaces/default: (1.003076ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56298]
I0211 18:53:33.724832  123569 wrap.go:47] POST /api/v1/namespaces: (1.514419ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56298]
I0211 18:53:33.726566  123569 wrap.go:47] GET /api/v1/namespaces/default/services/kubernetes: (1.037769ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56298]
I0211 18:53:33.730491  123569 wrap.go:47] POST /api/v1/namespaces/default/services: (3.409882ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56298]
I0211 18:53:33.731820  123569 wrap.go:47] GET /api/v1/namespaces/default/endpoints/kubernetes: (907.827µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56298]
I0211 18:53:33.734149  123569 wrap.go:47] POST /api/v1/namespaces/default/endpoints: (1.435412ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56298]
I0211 18:53:33.736120  123569 wrap.go:47] GET /api/v1/namespaces/kube-system: (891.139µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56302]
I0211 18:53:33.736348  123569 wrap.go:47] GET /api/v1/namespaces/default: (1.721612ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56298]
I0211 18:53:33.736535  123569 wrap.go:47] GET /api/v1/services: (942.603µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56306]
I0211 18:53:33.737042  123569 wrap.go:47] GET /api/v1/services: (1.388603ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56304]
I0211 18:53:33.737896  123569 wrap.go:47] GET /api/v1/namespaces/default/services/kubernetes: (958.505µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56306]
I0211 18:53:33.738219  123569 wrap.go:47] POST /api/v1/namespaces: (1.784905ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56302]
I0211 18:53:33.739554  123569 wrap.go:47] GET /api/v1/namespaces/kube-public: (863.315µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56298]
I0211 18:53:33.739713  123569 wrap.go:47] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.187769ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56304]
I0211 18:53:33.742071  123569 wrap.go:47] POST /api/v1/namespaces: (1.826998ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56298]
I0211 18:53:33.743310  123569 wrap.go:47] GET /api/v1/namespaces/kube-node-lease: (893.728µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56298]
I0211 18:53:33.745263  123569 wrap.go:47] POST /api/v1/namespaces: (1.387262ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56298]
I0211 18:53:33.813615  123569 healthz.go:170] healthz check etcd failed: etcd client connection not yet established
I0211 18:53:33.813663  123569 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0211 18:53:33.813675  123569 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0211 18:53:33.813682  123569 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0211 18:53:33.813899  123569 wrap.go:47] GET /healthz: (410.267µs) 500
goroutine 28196 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00d479c70, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00d479c70, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc01501a6a0, 0x1f4)
net/http.Error(0x7f4baaa21718, 0xc00c150400, 0xc00e28b080, 0x175, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f4baaa21718, 0xc00c150400, 0xc01c03e300)
net/http.HandlerFunc.ServeHTTP(0xc00e13f6e0, 0x7f4baaa21718, 0xc00c150400, 0xc01c03e300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc01940cb40, 0x7f4baaa21718, 0xc00c150400, 0xc01c03e300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc0033436c0, 0x7f4baaa21718, 0xc00c150400, 0xc01c03e300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x41c1830, 0xe, 0xc018118510, 0xc0033436c0, 0x7f4baaa21718, 0xc00c150400, 0xc01c03e300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f4baaa21718, 0xc00c150400, 0xc01c03e300)
net/http.HandlerFunc.ServeHTTP(0xc00d9f09c0, 0x7f4baaa21718, 0xc00c150400, 0xc01c03e300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f4baaa21718, 0xc00c150400, 0xc01c03e300)
net/http.HandlerFunc.ServeHTTP(0xc0148aa600, 0x7f4baaa21718, 0xc00c150400, 0xc01c03e300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f4baaa21718, 0xc00c150400, 0xc01c03e300)
net/http.HandlerFunc.ServeHTTP(0xc00d9f0a00, 0x7f4baaa21718, 0xc00c150400, 0xc01c03e300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f4baaa21718, 0xc00c150400, 0xc01c03e200)
net/http.HandlerFunc.ServeHTTP(0xc0149ed040, 0x7f4baaa21718, 0xc00c150400, 0xc01c03e200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00770b140, 0xc00d5b8100, 0x60dec80, 0xc00c150400, 0xc01c03e200)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:56298]
I0211 18:53:33.913587  123569 healthz.go:170] healthz check etcd failed: etcd client connection not yet established
I0211 18:53:33.913655  123569 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0211 18:53:33.913676  123569 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0211 18:53:33.913684  123569 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0211 18:53:33.913839  123569 wrap.go:47] GET /healthz: (359.701µs) 500
goroutine 28198 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00d479d50, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00d479d50, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc01501a7a0, 0x1f4)
net/http.Error(0x7f4baaa21718, 0xc00c150428, 0xc00e28b500, 0x175, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f4baaa21718, 0xc00c150428, 0xc01c03e900)
net/http.HandlerFunc.ServeHTTP(0xc00e13f6e0, 0x7f4baaa21718, 0xc00c150428, 0xc01c03e900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc01940cb40, 0x7f4baaa21718, 0xc00c150428, 0xc01c03e900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc0033436c0, 0x7f4baaa21718, 0xc00c150428, 0xc01c03e900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x41c1830, 0xe, 0xc018118510, 0xc0033436c0, 0x7f4baaa21718, 0xc00c150428, 0xc01c03e900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f4baaa21718, 0xc00c150428, 0xc01c03e900)
net/http.HandlerFunc.ServeHTTP(0xc00d9f09c0, 0x7f4baaa21718, 0xc00c150428, 0xc01c03e900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f4baaa21718, 0xc00c150428, 0xc01c03e900)
net/http.HandlerFunc.ServeHTTP(0xc0148aa600, 0x7f4baaa21718, 0xc00c150428, 0xc01c03e900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f4baaa21718, 0xc00c150428, 0xc01c03e900)
net/http.HandlerFunc.ServeHTTP(0xc00d9f0a00, 0x7f4baaa21718, 0xc00c150428, 0xc01c03e900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f4baaa21718, 0xc00c150428, 0xc01c03e800)
net/http.HandlerFunc.ServeHTTP(0xc0149ed040, 0x7f4baaa21718, 0xc00c150428, 0xc01c03e800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00770b2c0, 0xc00d5b8100, 0x60dec80, 0xc00c150428, 0xc01c03e800)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:56298]
I0211 18:53:34.013384  123569 healthz.go:170] healthz check etcd failed: etcd client connection not yet established
I0211 18:53:34.013424  123569 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0211 18:53:34.013435  123569 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0211 18:53:34.013442  123569 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0211 18:53:34.013651  123569 wrap.go:47] GET /healthz: (332.311µs) 500
goroutine 28200 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00d479e30, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00d479e30, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc01501a840, 0x1f4)
net/http.Error(0x7f4baaa21718, 0xc00c150430, 0xc00e28b980, 0x175, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f4baaa21718, 0xc00c150430, 0xc01c03ed00)
net/http.HandlerFunc.ServeHTTP(0xc00e13f6e0, 0x7f4baaa21718, 0xc00c150430, 0xc01c03ed00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc01940cb40, 0x7f4baaa21718, 0xc00c150430, 0xc01c03ed00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc0033436c0, 0x7f4baaa21718, 0xc00c150430, 0xc01c03ed00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x41c1830, 0xe, 0xc018118510, 0xc0033436c0, 0x7f4baaa21718, 0xc00c150430, 0xc01c03ed00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f4baaa21718, 0xc00c150430, 0xc01c03ed00)
net/http.HandlerFunc.ServeHTTP(0xc00d9f09c0, 0x7f4baaa21718, 0xc00c150430, 0xc01c03ed00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f4baaa21718, 0xc00c150430, 0xc01c03ed00)
net/http.HandlerFunc.ServeHTTP(0xc0148aa600, 0x7f4baaa21718, 0xc00c150430, 0xc01c03ed00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f4baaa21718, 0xc00c150430, 0xc01c03ed00)
net/http.HandlerFunc.ServeHTTP(0xc00d9f0a00, 0x7f4baaa21718, 0xc00c150430, 0xc01c03ed00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f4baaa21718, 0xc00c150430, 0xc01c03ec00)
net/http.HandlerFunc.ServeHTTP(0xc0149ed040, 0x7f4baaa21718, 0xc00c150430, 0xc01c03ec00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00770b380, 0xc00d5b8100, 0x60dec80, 0xc00c150430, 0xc01c03ec00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:56298]
I0211 18:53:34.113382  123569 healthz.go:170] healthz check etcd failed: etcd client connection not yet established
I0211 18:53:34.113413  123569 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0211 18:53:34.113423  123569 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0211 18:53:34.113430  123569 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0211 18:53:34.113587  123569 wrap.go:47] GET /healthz: (331.921µs) 500
goroutine 28411 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00d471500, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00d471500, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00dc53ae0, 0x1f4)
net/http.Error(0x7f4baaa21718, 0xc00cf9a110, 0xc013676480, 0x175, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f4baaa21718, 0xc00cf9a110, 0xc00d88cf00)
net/http.HandlerFunc.ServeHTTP(0xc00e13f6e0, 0x7f4baaa21718, 0xc00cf9a110, 0xc00d88cf00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc01940cb40, 0x7f4baaa21718, 0xc00cf9a110, 0xc00d88cf00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc0033436c0, 0x7f4baaa21718, 0xc00cf9a110, 0xc00d88cf00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x41c1830, 0xe, 0xc018118510, 0xc0033436c0, 0x7f4baaa21718, 0xc00cf9a110, 0xc00d88cf00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f4baaa21718, 0xc00cf9a110, 0xc00d88cf00)
net/http.HandlerFunc.ServeHTTP(0xc00d9f09c0, 0x7f4baaa21718, 0xc00cf9a110, 0xc00d88cf00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f4baaa21718, 0xc00cf9a110, 0xc00d88cf00)
net/http.HandlerFunc.ServeHTTP(0xc0148aa600, 0x7f4baaa21718, 0xc00cf9a110, 0xc00d88cf00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f4baaa21718, 0xc00cf9a110, 0xc00d88cf00)
net/http.HandlerFunc.ServeHTTP(0xc00d9f0a00, 0x7f4baaa21718, 0xc00cf9a110, 0xc00d88cf00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f4baaa21718, 0xc00cf9a110, 0xc00d88ce00)
net/http.HandlerFunc.ServeHTTP(0xc0149ed040, 0x7f4baaa21718, 0xc00cf9a110, 0xc00d88ce00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc007c17bc0, 0xc00d5b8100, 0x60dec80, 0xc00cf9a110, 0xc00d88ce00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:56298]
I0211 18:53:34.213403  123569 healthz.go:170] healthz check etcd failed: etcd client connection not yet established
I0211 18:53:34.213429  123569 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0211 18:53:34.213444  123569 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0211 18:53:34.213450  123569 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0211 18:53:34.213589  123569 wrap.go:47] GET /healthz: (315.321µs) 500
goroutine 28413 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00d471650, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00d471650, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00dc53d00, 0x1f4)
net/http.Error(0x7f4baaa21718, 0xc00cf9a118, 0xc013676c00, 0x175, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f4baaa21718, 0xc00cf9a118, 0xc00d88d300)
net/http.HandlerFunc.ServeHTTP(0xc00e13f6e0, 0x7f4baaa21718, 0xc00cf9a118, 0xc00d88d300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc01940cb40, 0x7f4baaa21718, 0xc00cf9a118, 0xc00d88d300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc0033436c0, 0x7f4baaa21718, 0xc00cf9a118, 0xc00d88d300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x41c1830, 0xe, 0xc018118510, 0xc0033436c0, 0x7f4baaa21718, 0xc00cf9a118, 0xc00d88d300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f4baaa21718, 0xc00cf9a118, 0xc00d88d300)
net/http.HandlerFunc.ServeHTTP(0xc00d9f09c0, 0x7f4baaa21718, 0xc00cf9a118, 0xc00d88d300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f4baaa21718, 0xc00cf9a118, 0xc00d88d300)
net/http.HandlerFunc.ServeHTTP(0xc0148aa600, 0x7f4baaa21718, 0xc00cf9a118, 0xc00d88d300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f4baaa21718, 0xc00cf9a118, 0xc00d88d300)
net/http.HandlerFunc.ServeHTTP(0xc00d9f0a00, 0x7f4baaa21718, 0xc00cf9a118, 0xc00d88d300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f4baaa21718, 0xc00cf9a118, 0xc00d88d200)
net/http.HandlerFunc.ServeHTTP(0xc0149ed040, 0x7f4baaa21718, 0xc00cf9a118, 0xc00d88d200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc007c17ce0, 0xc00d5b8100, 0x60dec80, 0xc00cf9a118, 0xc00d88d200)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:56298]
I0211 18:53:34.313449  123569 healthz.go:170] healthz check etcd failed: etcd client connection not yet established
I0211 18:53:34.313484  123569 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0211 18:53:34.313495  123569 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0211 18:53:34.313511  123569 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0211 18:53:34.313698  123569 wrap.go:47] GET /healthz: (357.764µs) 500
goroutine 28415 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00d4717a0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00d4717a0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00dc53f20, 0x1f4)
net/http.Error(0x7f4baaa21718, 0xc00cf9a120, 0xc013677200, 0x175, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f4baaa21718, 0xc00cf9a120, 0xc00d88d700)
net/http.HandlerFunc.ServeHTTP(0xc00e13f6e0, 0x7f4baaa21718, 0xc00cf9a120, 0xc00d88d700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc01940cb40, 0x7f4baaa21718, 0xc00cf9a120, 0xc00d88d700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc0033436c0, 0x7f4baaa21718, 0xc00cf9a120, 0xc00d88d700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x41c1830, 0xe, 0xc018118510, 0xc0033436c0, 0x7f4baaa21718, 0xc00cf9a120, 0xc00d88d700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f4baaa21718, 0xc00cf9a120, 0xc00d88d700)
net/http.HandlerFunc.ServeHTTP(0xc00d9f09c0, 0x7f4baaa21718, 0xc00cf9a120, 0xc00d88d700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f4baaa21718, 0xc00cf9a120, 0xc00d88d700)
net/http.HandlerFunc.ServeHTTP(0xc0148aa600, 0x7f4baaa21718, 0xc00cf9a120, 0xc00d88d700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f4baaa21718, 0xc00cf9a120, 0xc00d88d700)
net/http.HandlerFunc.ServeHTTP(0xc00d9f0a00, 0x7f4baaa21718, 0xc00cf9a120, 0xc00d88d700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f4baaa21718, 0xc00cf9a120, 0xc00d88d600)
net/http.HandlerFunc.ServeHTTP(0xc0149ed040, 0x7f4baaa21718, 0xc00cf9a120, 0xc00d88d600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc007c17e00, 0xc00d5b8100, 0x60dec80, 0xc00cf9a120, 0xc00d88d600)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:56298]
I0211 18:53:34.413457  123569 healthz.go:170] healthz check etcd failed: etcd client connection not yet established
I0211 18:53:34.413498  123569 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0211 18:53:34.413509  123569 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0211 18:53:34.413517  123569 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0211 18:53:34.413731  123569 wrap.go:47] GET /healthz: (425.737µs) 500
goroutine 28240 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00341ff10, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00341ff10, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00e45d260, 0x1f4)
net/http.Error(0x7f4baaa21718, 0xc004f6a4c0, 0xc003b6b500, 0x175, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f4baaa21718, 0xc004f6a4c0, 0xc01c02ea00)
net/http.HandlerFunc.ServeHTTP(0xc00e13f6e0, 0x7f4baaa21718, 0xc004f6a4c0, 0xc01c02ea00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc01940cb40, 0x7f4baaa21718, 0xc004f6a4c0, 0xc01c02ea00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc0033436c0, 0x7f4baaa21718, 0xc004f6a4c0, 0xc01c02ea00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x41c1830, 0xe, 0xc018118510, 0xc0033436c0, 0x7f4baaa21718, 0xc004f6a4c0, 0xc01c02ea00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f4baaa21718, 0xc004f6a4c0, 0xc01c02ea00)
net/http.HandlerFunc.ServeHTTP(0xc00d9f09c0, 0x7f4baaa21718, 0xc004f6a4c0, 0xc01c02ea00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f4baaa21718, 0xc004f6a4c0, 0xc01c02ea00)
net/http.HandlerFunc.ServeHTTP(0xc0148aa600, 0x7f4baaa21718, 0xc004f6a4c0, 0xc01c02ea00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f4baaa21718, 0xc004f6a4c0, 0xc01c02ea00)
net/http.HandlerFunc.ServeHTTP(0xc00d9f0a00, 0x7f4baaa21718, 0xc004f6a4c0, 0xc01c02ea00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f4baaa21718, 0xc004f6a4c0, 0xc01c02e900)
net/http.HandlerFunc.ServeHTTP(0xc0149ed040, 0x7f4baaa21718, 0xc004f6a4c0, 0xc01c02e900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc002fdd8c0, 0xc00d5b8100, 0x60dec80, 0xc004f6a4c0, 0xc01c02e900)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:56298]
I0211 18:53:34.513595  123569 healthz.go:170] healthz check etcd failed: etcd client connection not yet established
I0211 18:53:34.513647  123569 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0211 18:53:34.513659  123569 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0211 18:53:34.513666  123569 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0211 18:53:34.513825  123569 wrap.go:47] GET /healthz: (364.397µs) 500
goroutine 28202 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc01c0e0000, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc01c0e0000, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc01501aac0, 0x1f4)
net/http.Error(0x7f4baaa21718, 0xc00c150458, 0xc001e52000, 0x175, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f4baaa21718, 0xc00c150458, 0xc01c03f300)
net/http.HandlerFunc.ServeHTTP(0xc00e13f6e0, 0x7f4baaa21718, 0xc00c150458, 0xc01c03f300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc01940cb40, 0x7f4baaa21718, 0xc00c150458, 0xc01c03f300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc0033436c0, 0x7f4baaa21718, 0xc00c150458, 0xc01c03f300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x41c1830, 0xe, 0xc018118510, 0xc0033436c0, 0x7f4baaa21718, 0xc00c150458, 0xc01c03f300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f4baaa21718, 0xc00c150458, 0xc01c03f300)
net/http.HandlerFunc.ServeHTTP(0xc00d9f09c0, 0x7f4baaa21718, 0xc00c150458, 0xc01c03f300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f4baaa21718, 0xc00c150458, 0xc01c03f300)
net/http.HandlerFunc.ServeHTTP(0xc0148aa600, 0x7f4baaa21718, 0xc00c150458, 0xc01c03f300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f4baaa21718, 0xc00c150458, 0xc01c03f300)
net/http.HandlerFunc.ServeHTTP(0xc00d9f0a00, 0x7f4baaa21718, 0xc00c150458, 0xc01c03f300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f4baaa21718, 0xc00c150458, 0xc01c03f200)
net/http.HandlerFunc.ServeHTTP(0xc0149ed040, 0x7f4baaa21718, 0xc00c150458, 0xc01c03f200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00770b560, 0xc00d5b8100, 0x60dec80, 0xc00c150458, 0xc01c03f200)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:56298]
I0211 18:53:34.587924  123569 clientconn.go:551] parsed scheme: ""
I0211 18:53:34.587965  123569 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0211 18:53:34.588014  123569 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0211 18:53:34.588106  123569 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0211 18:53:34.588547  123569 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0211 18:53:34.588671  123569 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0211 18:53:34.614397  123569 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0211 18:53:34.614431  123569 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0211 18:53:34.614440  123569 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0211 18:53:34.614613  123569 wrap.go:47] GET /healthz: (1.300564ms) 500
goroutine 28530 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc01c118000, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc01c118000, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00e45d3c0, 0x1f4)
net/http.Error(0x7f4baaa21718, 0xc004f6a510, 0xc00515b4a0, 0x160, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f4baaa21718, 0xc004f6a510, 0xc01c02f000)
net/http.HandlerFunc.ServeHTTP(0xc00e13f6e0, 0x7f4baaa21718, 0xc004f6a510, 0xc01c02f000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc01940cb40, 0x7f4baaa21718, 0xc004f6a510, 0xc01c02f000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc0033436c0, 0x7f4baaa21718, 0xc004f6a510, 0xc01c02f000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x41c1830, 0xe, 0xc018118510, 0xc0033436c0, 0x7f4baaa21718, 0xc004f6a510, 0xc01c02f000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f4baaa21718, 0xc004f6a510, 0xc01c02f000)
net/http.HandlerFunc.ServeHTTP(0xc00d9f09c0, 0x7f4baaa21718, 0xc004f6a510, 0xc01c02f000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f4baaa21718, 0xc004f6a510, 0xc01c02f000)
net/http.HandlerFunc.ServeHTTP(0xc0148aa600, 0x7f4baaa21718, 0xc004f6a510, 0xc01c02f000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f4baaa21718, 0xc004f6a510, 0xc01c02f000)
net/http.HandlerFunc.ServeHTTP(0xc00d9f0a00, 0x7f4baaa21718, 0xc004f6a510, 0xc01c02f000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f4baaa21718, 0xc004f6a510, 0xc01c02ef00)
net/http.HandlerFunc.ServeHTTP(0xc0149ed040, 0x7f4baaa21718, 0xc004f6a510, 0xc01c02ef00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc002fddaa0, 0xc00d5b8100, 0x60dec80, 0xc004f6a510, 0xc01c02ef00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:56298]
I0211 18:53:34.714103  123569 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0211 18:53:34.714155  123569 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0211 18:53:34.714164  123569 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0211 18:53:34.714340  123569 wrap.go:47] GET /healthz: (998.667µs) 500
goroutine 28477 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc018785420, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc018785420, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00e1d85a0, 0x1f4)
net/http.Error(0x7f4baaa21718, 0xc0158ba050, 0xc007ac8c60, 0x160, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f4baaa21718, 0xc0158ba050, 0xc01c13c400)
net/http.HandlerFunc.ServeHTTP(0xc00e13f6e0, 0x7f4baaa21718, 0xc0158ba050, 0xc01c13c400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc01940cb40, 0x7f4baaa21718, 0xc0158ba050, 0xc01c13c400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc0033436c0, 0x7f4baaa21718, 0xc0158ba050, 0xc01c13c400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x41c1830, 0xe, 0xc018118510, 0xc0033436c0, 0x7f4baaa21718, 0xc0158ba050, 0xc01c13c400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f4baaa21718, 0xc0158ba050, 0xc01c13c400)
net/http.HandlerFunc.ServeHTTP(0xc00d9f09c0, 0x7f4baaa21718, 0xc0158ba050, 0xc01c13c400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f4baaa21718, 0xc0158ba050, 0xc01c13c400)
net/http.HandlerFunc.ServeHTTP(0xc0148aa600, 0x7f4baaa21718, 0xc0158ba050, 0xc01c13c400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f4baaa21718, 0xc0158ba050, 0xc01c13c400)
net/http.HandlerFunc.ServeHTTP(0xc00d9f0a00, 0x7f4baaa21718, 0xc0158ba050, 0xc01c13c400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f4baaa21718, 0xc0158ba050, 0xc01c13c300)
net/http.HandlerFunc.ServeHTTP(0xc0149ed040, 0x7f4baaa21718, 0xc0158ba050, 0xc01c13c300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc01c0344e0, 0xc00d5b8100, 0x60dec80, 0xc0158ba050, 0xc01c13c300)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:56342]
I0211 18:53:34.714421  123569 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.949386ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56298]
I0211 18:53:34.714776  123569 wrap.go:47] GET /api/v1/namespaces/kube-system: (2.304601ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56304]
I0211 18:53:34.715350  123569 wrap.go:47] GET /apis/scheduling.k8s.io/v1beta1/priorityclasses/system-node-critical: (1.205264ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56344]
I0211 18:53:34.715840  123569 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.022747ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56298]
I0211 18:53:34.717193  123569 wrap.go:47] GET /api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication: (1.513054ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56304]
I0211 18:53:34.717268  123569 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-admin: (1.038194ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56298]
I0211 18:53:34.717547  123569 wrap.go:47] POST /apis/scheduling.k8s.io/v1beta1/priorityclasses: (1.567023ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56344]
I0211 18:53:34.717761  123569 storage_scheduling.go:91] created PriorityClass system-node-critical with value 2000001000
I0211 18:53:34.718915  123569 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/admin: (1.072393ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56298]
I0211 18:53:34.718915  123569 wrap.go:47] GET /apis/scheduling.k8s.io/v1beta1/priorityclasses/system-cluster-critical: (979.285µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56342]
I0211 18:53:34.719361  123569 wrap.go:47] POST /api/v1/namespaces/kube-system/configmaps: (1.791957ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56304]
I0211 18:53:34.720072  123569 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-edit: (815.344µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56342]
I0211 18:53:34.720610  123569 wrap.go:47] POST /apis/scheduling.k8s.io/v1beta1/priorityclasses: (1.142668ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56298]
I0211 18:53:34.720869  123569 storage_scheduling.go:91] created PriorityClass system-cluster-critical with value 2000000000
I0211 18:53:34.720895  123569 storage_scheduling.go:100] all system priority classes are created successfully or already exist.
I0211 18:53:34.721256  123569 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/edit: (744.645µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56342]
I0211 18:53:34.722463  123569 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-view: (777.937µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56298]
I0211 18:53:34.723954  123569 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/view: (945.595µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56298]
I0211 18:53:34.725230  123569 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/cluster-admin: (825.686µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56298]
I0211 18:53:34.727672  123569 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.891236ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56298]
I0211 18:53:34.727875  123569 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/cluster-admin
I0211 18:53:34.728945  123569 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:discovery: (888.868µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56298]
I0211 18:53:34.731266  123569 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.938539ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56298]
I0211 18:53:34.731469  123569 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:discovery
I0211 18:53:34.732428  123569 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:basic-user: (740.658µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56298]
I0211 18:53:34.734874  123569 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.897449ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56298]
I0211 18:53:34.735061  123569 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:basic-user
I0211 18:53:34.736147  123569 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/admin: (968.007µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56298]
I0211 18:53:34.738189  123569 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.7032ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56298]
I0211 18:53:34.738378  123569 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/admin
I0211 18:53:34.739151  123569 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/edit: (656.052µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56298]
I0211 18:53:34.745558  123569 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.531229ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56298]
I0211 18:53:34.745821  123569 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/edit
I0211 18:53:34.747411  123569 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/view: (1.300033ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56298]
I0211 18:53:34.749795  123569 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.691177ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56298]
I0211 18:53:34.750102  123569 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/view
I0211 18:53:34.752050  123569 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-admin: (1.739594ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56298]
I0211 18:53:34.754115  123569 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.603657ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56298]
I0211 18:53:34.754371  123569 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-admin
I0211 18:53:34.755870  123569 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-edit: (1.205103ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56298]
I0211 18:53:34.758864  123569 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.509263ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56298]
I0211 18:53:34.759147  123569 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-edit
I0211 18:53:34.760434  123569 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-view: (1.02092ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56298]
I0211 18:53:34.763456  123569 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.294286ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56298]
I0211 18:53:34.763760  123569 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-view
I0211 18:53:34.766523  123569 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:heapster: (2.564534ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56298]
I0211 18:53:34.769091  123569 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.788094ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56298]
I0211 18:53:34.769325  123569 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:heapster
I0211 18:53:34.770631  123569 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node: (1.05212ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56298]
I0211 18:53:34.773326  123569 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.229942ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56298]
I0211 18:53:34.773663  123569 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:node
I0211 18:53:34.775140  123569 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-problem-detector: (1.209045ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56298]
I0211 18:53:34.778000  123569 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.192335ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56298]
I0211 18:53:34.778322  123569 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:node-problem-detector
I0211 18:53:34.779747  123569 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-proxier: (1.176767ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56298]
I0211 18:53:34.781934  123569 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.799756ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56298]
I0211 18:53:34.782296  123569 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:node-proxier
I0211 18:53:34.783739  123569 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kubelet-api-admin: (1.22648ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56298]
I0211 18:53:34.787503  123569 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (3.106914ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56298]
I0211 18:53:34.787780  123569 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:kubelet-api-admin
I0211 18:53:34.789026  123569 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-bootstrapper: (1.084443ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56298]
I0211 18:53:34.791309  123569 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.808664ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56298]
I0211 18:53:34.791501  123569 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:node-bootstrapper
I0211 18:53:34.792619  123569 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:auth-delegator: (881.091µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56298]
I0211 18:53:34.794998  123569 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.780582ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56298]
I0211 18:53:34.795240  123569 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:auth-delegator
I0211 18:53:34.796452  123569 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-aggregator: (985.688µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56298]
I0211 18:53:34.798667  123569 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.753437ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56298]
I0211 18:53:34.798839  123569 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:kube-aggregator
I0211 18:53:34.800129  123569 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-controller-manager: (863.062µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56298]
I0211 18:53:34.802541  123569 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.864869ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56298]
I0211 18:53:34.803377  123569 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:kube-controller-manager
I0211 18:53:34.804704  123569 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-scheduler: (1.094359ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56298]
I0211 18:53:34.807908  123569 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.040076ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56298]
I0211 18:53:34.808249  123569 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:kube-scheduler
I0211 18:53:34.809718  123569 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-dns: (1.223642ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56298]
I0211 18:53:34.812151  123569 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.974527ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56298]
I0211 18:53:34.812382  123569 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:kube-dns
I0211 18:53:34.813810  123569 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:persistent-volume-provisioner: (1.152168ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56298]
I0211 18:53:34.814457  123569 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0211 18:53:34.814703  123569 wrap.go:47] GET /healthz: (1.530244ms) 500
goroutine 28665 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc01c2f08c0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc01c2f08c0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc015677f80, 0x1f4)
net/http.Error(0x7f4baaa21718, 0xc000b6db00, 0xc0074fd900, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f4baaa21718, 0xc000b6db00, 0xc01c320300)
net/http.HandlerFunc.ServeHTTP(0xc00e13f6e0, 0x7f4baaa21718, 0xc000b6db00, 0xc01c320300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc01940cb40, 0x7f4baaa21718, 0xc000b6db00, 0xc01c320300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc0033436c0, 0x7f4baaa21718, 0xc000b6db00, 0xc01c320300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x41c1830, 0xe, 0xc018118510, 0xc0033436c0, 0x7f4baaa21718, 0xc000b6db00, 0xc01c320300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f4baaa21718, 0xc000b6db00, 0xc01c320300)
net/http.HandlerFunc.ServeHTTP(0xc00d9f09c0, 0x7f4baaa21718, 0xc000b6db00, 0xc01c320300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f4baaa21718, 0xc000b6db00, 0xc01c320300)
net/http.HandlerFunc.ServeHTTP(0xc0148aa600, 0x7f4baaa21718, 0xc000b6db00, 0xc01c320300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f4baaa21718, 0xc000b6db00, 0xc01c320300)
net/http.HandlerFunc.ServeHTTP(0xc00d9f0a00, 0x7f4baaa21718, 0xc000b6db00, 0xc01c320300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f4baaa21718, 0xc000b6db00, 0xc01c320200)
net/http.HandlerFunc.ServeHTTP(0xc0149ed040, 0x7f4baaa21718, 0xc000b6db00, 0xc01c320200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc01c2b58c0, 0xc00d5b8100, 0x60dec80, 0xc000b6db00, 0xc01c320200)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:56304]
I0211 18:53:34.816224  123569 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.82124ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56298]
I0211 18:53:34.816476  123569 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:persistent-volume-provisioner
I0211 18:53:34.817683  123569 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:csi-external-attacher: (902.506µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56298]
I0211 18:53:34.820398  123569 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.930848ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56298]
I0211 18:53:34.820710  123569 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:csi-external-attacher
I0211 18:53:34.821988  123569 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aws-cloud-provider: (1.052986ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56298]
I0211 18:53:34.824392  123569 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.922643ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56298]
I0211 18:53:34.825692  123569 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:aws-cloud-provider
I0211 18:53:34.827158  123569 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:certificates.k8s.io:certificatesigningrequests:nodeclient: (1.176207ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56298]
I0211 18:53:34.829451  123569 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.751773ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56298]
I0211 18:53:34.829659  123569 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:certificates.k8s.io:certificatesigningrequests:nodeclient
I0211 18:53:34.830866  123569 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:certificates.k8s.io:certificatesigningrequests:selfnodeclient: (985.629µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56298]
I0211 18:53:34.833224  123569 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.94072ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56298]
I0211 18:53:34.833523  123569 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:certificates.k8s.io:certificatesigningrequests:selfnodeclient
I0211 18:53:34.834664  123569 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:volume-scheduler: (913.098µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56298]
I0211 18:53:34.837142  123569 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.044666ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56298]
I0211 18:53:34.837481  123569 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:volume-scheduler
I0211 18:53:34.838835  123569 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:csi-external-provisioner: (1.054306ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56298]
I0211 18:53:34.841971  123569 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.69203ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56298]
I0211 18:53:34.842260  123569 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:csi-external-provisioner
I0211 18:53:34.843665  123569 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:attachdetach-controller: (1.170798ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56298]
I0211 18:53:34.847054  123569 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.912032ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56298]
I0211 18:53:34.847343  123569 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:attachdetach-controller
I0211 18:53:34.850045  123569 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:clusterrole-aggregation-controller: (2.332212ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56298]
I0211 18:53:34.853500  123569 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.961197ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56298]
I0211 18:53:34.853899  123569 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:clusterrole-aggregation-controller
I0211 18:53:34.855328  123569 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:cronjob-controller: (1.110982ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56298]
I0211 18:53:34.858943  123569 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.801727ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56298]
I0211 18:53:34.859387  123569 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:cronjob-controller
I0211 18:53:34.860478  123569 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:daemon-set-controller: (884.562µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56298]
I0211 18:53:34.863529  123569 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.635129ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56298]
I0211 18:53:34.863982  123569 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:daemon-set-controller
I0211 18:53:34.870987  123569 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:deployment-controller: (6.466721ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56298]
I0211 18:53:34.874916  123569 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (3.271687ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56298]
I0211 18:53:34.875227  123569 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:deployment-controller
I0211 18:53:34.877004  123569 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:disruption-controller: (1.477332ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56298]
I0211 18:53:34.879795  123569 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.194835ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56298]
I0211 18:53:34.880015  123569 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:disruption-controller
I0211 18:53:34.881301  123569 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:endpoint-controller: (1.052605ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56298]
I0211 18:53:34.883581  123569 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.781681ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56298]
I0211 18:53:34.883830  123569 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:endpoint-controller
I0211 18:53:34.896679  123569 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:expand-controller: (12.585501ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56298]
I0211 18:53:34.910346  123569 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (11.325241ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56298]
I0211 18:53:34.910776  123569 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:expand-controller
I0211 18:53:34.913832  123569 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:generic-garbage-collector: (2.779347ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56298]
I0211 18:53:34.914196  123569 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0211 18:53:34.914829  123569 wrap.go:47] GET /healthz: (1.650845ms) 500
goroutine 28685 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc01c3f2460, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc01c3f2460, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc015853980, 0x1f4)
net/http.Error(0x7f4baaa21718, 0xc000b6de10, 0xc004b71b80, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f4baaa21718, 0xc000b6de10, 0xc01c3b3c00)
net/http.HandlerFunc.ServeHTTP(0xc00e13f6e0, 0x7f4baaa21718, 0xc000b6de10, 0xc01c3b3c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc01940cb40, 0x7f4baaa21718, 0xc000b6de10, 0xc01c3b3c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc0033436c0, 0x7f4baaa21718, 0xc000b6de10, 0xc01c3b3c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x41c1830, 0xe, 0xc018118510, 0xc0033436c0, 0x7f4baaa21718, 0xc000b6de10, 0xc01c3b3c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f4baaa21718, 0xc000b6de10, 0xc01c3b3c00)
net/http.HandlerFunc.ServeHTTP(0xc00d9f09c0, 0x7f4baaa21718, 0xc000b6de10, 0xc01c3b3c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f4baaa21718, 0xc000b6de10, 0xc01c3b3c00)
net/http.HandlerFunc.ServeHTTP(0xc0148aa600, 0x7f4baaa21718, 0xc000b6de10, 0xc01c3b3c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f4baaa21718, 0xc000b6de10, 0xc01c3b3c00)
net/http.HandlerFunc.ServeHTTP(0xc00d9f0a00, 0x7f4baaa21718, 0xc000b6de10, 0xc01c3b3c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f4baaa21718, 0xc000b6de10, 0xc01c3b3b00)
net/http.HandlerFunc.ServeHTTP(0xc0149ed040, 0x7f4baaa21718, 0xc000b6de10, 0xc01c3b3b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc01c3f4420, 0xc00d5b8100, 0x60dec80, 0xc000b6de10, 0xc01c3b3b00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:56304]
I0211 18:53:34.918895  123569 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (3.419224ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56298]
I0211 18:53:34.919466  123569 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:generic-garbage-collector
I0211 18:53:34.920740  123569 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:horizontal-pod-autoscaler: (1.06079ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56298]
I0211 18:53:34.923356  123569 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.201326ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56298]
I0211 18:53:34.923662  123569 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:horizontal-pod-autoscaler
I0211 18:53:34.926237  123569 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:job-controller: (2.280025ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56298]
I0211 18:53:34.936770  123569 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (9.622221ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56298]
I0211 18:53:34.937117  123569 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:job-controller
I0211 18:53:34.940630  123569 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:namespace-controller: (3.172107ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56298]
I0211 18:53:34.943151  123569 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.085604ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56298]
I0211 18:53:34.943372  123569 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:namespace-controller
I0211 18:53:34.944884  123569 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:node-controller: (1.225046ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56298]
I0211 18:53:34.949290  123569 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (3.917485ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56298]
I0211 18:53:34.949646  123569 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:node-controller
I0211 18:53:34.950837  123569 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:persistent-volume-binder: (892.963µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56298]
I0211 18:53:34.953226  123569 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.006087ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56298]
I0211 18:53:34.953455  123569 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:persistent-volume-binder
I0211 18:53:34.955411  123569 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pod-garbage-collector: (1.693416ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56298]
I0211 18:53:34.957891  123569 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.012334ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56298]
I0211 18:53:34.958330  123569 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:pod-garbage-collector
I0211 18:53:34.959414  123569 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:replicaset-controller: (878.647µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56298]
I0211 18:53:34.962046  123569 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.165177ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56298]
I0211 18:53:34.962484  123569 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:replicaset-controller
I0211 18:53:34.963533  123569 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:replication-controller: (828.367µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56298]
I0211 18:53:34.967399  123569 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (3.446238ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56298]
I0211 18:53:34.967733  123569 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:replication-controller
I0211 18:53:34.969233  123569 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:resourcequota-controller: (1.094114ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56298]
I0211 18:53:34.971152  123569 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.551294ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56298]
I0211 18:53:34.971396  123569 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:resourcequota-controller
I0211 18:53:34.972806  123569 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:route-controller: (1.124385ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56298]
I0211 18:53:34.974917  123569 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.690341ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56298]
I0211 18:53:34.975097  123569 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:route-controller
I0211 18:53:34.976193  123569 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:service-account-controller: (846.797µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56298]
I0211 18:53:34.978108  123569 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.56171ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56298]
I0211 18:53:34.978344  123569 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:service-account-controller
I0211 18:53:34.979322  123569 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:service-controller: (779.862µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56298]
I0211 18:53:34.981250  123569 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.583554ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56298]
I0211 18:53:34.981452  123569 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:service-controller
I0211 18:53:34.982476  123569 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:statefulset-controller: (846.586µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56298]
I0211 18:53:34.984404  123569 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.459541ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56298]
I0211 18:53:34.984821  123569 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:statefulset-controller
I0211 18:53:34.985957  123569 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:ttl-controller: (952.35µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56298]
I0211 18:53:34.987924  123569 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.550345ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56298]
I0211 18:53:34.988143  123569 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:ttl-controller
I0211 18:53:34.989154  123569 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:certificate-controller: (800.256µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56298]
I0211 18:53:34.991090  123569 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.513338ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56298]
I0211 18:53:34.991326  123569 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:certificate-controller
I0211 18:53:34.992412  123569 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pvc-protection-controller: (849.683µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56298]
I0211 18:53:34.993967  123569 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.2407ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56298]
I0211 18:53:34.994300  123569 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:pvc-protection-controller
I0211 18:53:34.995380  123569 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pv-protection-controller: (874.504µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56298]
I0211 18:53:34.997480  123569 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.6751ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56298]
I0211 18:53:34.997693  123569 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:pv-protection-controller
I0211 18:53:35.014279  123569 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0211 18:53:35.014390  123569 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/cluster-admin: (1.603883ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56298]
I0211 18:53:35.014458  123569 wrap.go:47] GET /healthz: (1.292064ms) 500
goroutine 28804 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc01c4fb180, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc01c4fb180, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0174d6b20, 0x1f4)
net/http.Error(0x7f4baaa21718, 0xc016080f48, 0xc003b95900, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f4baaa21718, 0xc016080f48, 0xc01c555000)
net/http.HandlerFunc.ServeHTTP(0xc00e13f6e0, 0x7f4baaa21718, 0xc016080f48, 0xc01c555000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc01940cb40, 0x7f4baaa21718, 0xc016080f48, 0xc01c555000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc0033436c0, 0x7f4baaa21718, 0xc016080f48, 0xc01c555000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x41c1830, 0xe, 0xc018118510, 0xc0033436c0, 0x7f4baaa21718, 0xc016080f48, 0xc01c555000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f4baaa21718, 0xc016080f48, 0xc01c555000)
net/http.HandlerFunc.ServeHTTP(0xc00d9f09c0, 0x7f4baaa21718, 0xc016080f48, 0xc01c555000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f4baaa21718, 0xc016080f48, 0xc01c555000)
net/http.HandlerFunc.ServeHTTP(0xc0148aa600, 0x7f4baaa21718, 0xc016080f48, 0xc01c555000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f4baaa21718, 0xc016080f48, 0xc01c555000)
net/http.HandlerFunc.ServeHTTP(0xc00d9f0a00, 0x7f4baaa21718, 0xc016080f48, 0xc01c555000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f4baaa21718, 0xc016080f48, 0xc01c554f00)
net/http.HandlerFunc.ServeHTTP(0xc0149ed040, 0x7f4baaa21718, 0xc016080f48, 0xc01c554f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc01c4d5c80, 0xc00d5b8100, 0x60dec80, 0xc016080f48, 0xc01c554f00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:56304]
I0211 18:53:35.035356  123569 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.624454ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56298]
I0211 18:53:35.035641  123569 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/cluster-admin
I0211 18:53:35.053920  123569 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:discovery: (1.171833ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56298]
I0211 18:53:35.074877  123569 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.237131ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56298]
I0211 18:53:35.075265  123569 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:discovery
I0211 18:53:35.093943  123569 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:basic-user: (1.333075ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56298]
I0211 18:53:35.114002  123569 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0211 18:53:35.114271  123569 wrap.go:47] GET /healthz: (1.131798ms) 500
goroutine 28854 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc01c5fa150, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc01c5fa150, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc017a50940, 0x1f4)
net/http.Error(0x7f4baaa21718, 0xc003175b68, 0xc0074fdcc0, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f4baaa21718, 0xc003175b68, 0xc01c5b3300)
net/http.HandlerFunc.ServeHTTP(0xc00e13f6e0, 0x7f4baaa21718, 0xc003175b68, 0xc01c5b3300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc01940cb40, 0x7f4baaa21718, 0xc003175b68, 0xc01c5b3300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc0033436c0, 0x7f4baaa21718, 0xc003175b68, 0xc01c5b3300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x41c1830, 0xe, 0xc018118510, 0xc0033436c0, 0x7f4baaa21718, 0xc003175b68, 0xc01c5b3300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f4baaa21718, 0xc003175b68, 0xc01c5b3300)
net/http.HandlerFunc.ServeHTTP(0xc00d9f09c0, 0x7f4baaa21718, 0xc003175b68, 0xc01c5b3300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f4baaa21718, 0xc003175b68, 0xc01c5b3300)
net/http.HandlerFunc.ServeHTTP(0xc0148aa600, 0x7f4baaa21718, 0xc003175b68, 0xc01c5b3300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f4baaa21718, 0xc003175b68, 0xc01c5b3300)
net/http.HandlerFunc.ServeHTTP(0xc00d9f0a00, 0x7f4baaa21718, 0xc003175b68, 0xc01c5b3300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f4baaa21718, 0xc003175b68, 0xc01c5b3200)
net/http.HandlerFunc.ServeHTTP(0xc0149ed040, 0x7f4baaa21718, 0xc003175b68, 0xc01c5b3200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc01c5af1a0, 0xc00d5b8100, 0x60dec80, 0xc003175b68, 0xc01c5b3200)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:56304]
I0211 18:53:35.115806  123569 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.141384ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56298]
I0211 18:53:35.116160  123569 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:basic-user
I0211 18:53:35.135280  123569 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:node-proxier: (1.431715ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56298]
I0211 18:53:35.154869  123569 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.184866ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56298]
I0211 18:53:35.155226  123569 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:node-proxier
I0211 18:53:35.174198  123569 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-controller-manager: (1.466388ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56298]
I0211 18:53:35.199453  123569 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.705577ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56298]
I0211 18:53:35.199805  123569 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-controller-manager
I0211 18:53:35.214208  123569 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0211 18:53:35.214362  123569 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-dns: (1.642011ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56298]
I0211 18:53:35.214404  123569 wrap.go:47] GET /healthz: (1.11368ms) 500
goroutine 28875 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc01c646380, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc01c646380, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc017ee46a0, 0x1f4)
net/http.Error(0x7f4baaa21718, 0xc01670c1b8, 0xc0036c0c80, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f4baaa21718, 0xc01670c1b8, 0xc01c631100)
net/http.HandlerFunc.ServeHTTP(0xc00e13f6e0, 0x7f4baaa21718, 0xc01670c1b8, 0xc01c631100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc01940cb40, 0x7f4baaa21718, 0xc01670c1b8, 0xc01c631100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc0033436c0, 0x7f4baaa21718, 0xc01670c1b8, 0xc01c631100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x41c1830, 0xe, 0xc018118510, 0xc0033436c0, 0x7f4baaa21718, 0xc01670c1b8, 0xc01c631100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f4baaa21718, 0xc01670c1b8, 0xc01c631100)
net/http.HandlerFunc.ServeHTTP(0xc00d9f09c0, 0x7f4baaa21718, 0xc01670c1b8, 0xc01c631100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f4baaa21718, 0xc01670c1b8, 0xc01c631100)
net/http.HandlerFunc.ServeHTTP(0xc0148aa600, 0x7f4baaa21718, 0xc01670c1b8, 0xc01c631100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f4baaa21718, 0xc01670c1b8, 0xc01c631100)
net/http.HandlerFunc.ServeHTTP(0xc00d9f0a00, 0x7f4baaa21718, 0xc01670c1b8, 0xc01c631100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f4baaa21718, 0xc01670c1b8, 0xc01c631000)
net/http.HandlerFunc.ServeHTTP(0xc0149ed040, 0x7f4baaa21718, 0xc01670c1b8, 0xc01c631000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc01c634ae0, 0xc00d5b8100, 0x60dec80, 0xc01670c1b8, 0xc01c631000)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:56304]
I0211 18:53:35.244315  123569 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.020723ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56298]
I0211 18:53:35.244597  123569 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-dns
I0211 18:53:35.253778  123569 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-scheduler: (1.122946ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56298]
I0211 18:53:35.274714  123569 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.051776ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56298]
I0211 18:53:35.274979  123569 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-scheduler
I0211 18:53:35.294351  123569 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:aws-cloud-provider: (1.467076ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56298]
I0211 18:53:35.314741  123569 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0211 18:53:35.314909  123569 wrap.go:47] GET /healthz: (1.690985ms) 500
goroutine 28829 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc01c67c000, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc01c67c000, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc018170a80, 0x1f4)
net/http.Error(0x7f4baaa21718, 0xc00c150c50, 0xc0038adcc0, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f4baaa21718, 0xc00c150c50, 0xc01c65a800)
net/http.HandlerFunc.ServeHTTP(0xc00e13f6e0, 0x7f4baaa21718, 0xc00c150c50, 0xc01c65a800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc01940cb40, 0x7f4baaa21718, 0xc00c150c50, 0xc01c65a800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc0033436c0, 0x7f4baaa21718, 0xc00c150c50, 0xc01c65a800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x41c1830, 0xe, 0xc018118510, 0xc0033436c0, 0x7f4baaa21718, 0xc00c150c50, 0xc01c65a800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f4baaa21718, 0xc00c150c50, 0xc01c65a800)
net/http.HandlerFunc.ServeHTTP(0xc00d9f09c0, 0x7f4baaa21718, 0xc00c150c50, 0xc01c65a800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f4baaa21718, 0xc00c150c50, 0xc01c65a800)
net/http.HandlerFunc.ServeHTTP(0xc0148aa600, 0x7f4baaa21718, 0xc00c150c50, 0xc01c65a800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f4baaa21718, 0xc00c150c50, 0xc01c65a800)
net/http.HandlerFunc.ServeHTTP(0xc00d9f0a00, 0x7f4baaa21718, 0xc00c150c50, 0xc01c65a800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f4baaa21718, 0xc00c150c50, 0xc01c65a700)
net/http.HandlerFunc.ServeHTTP(0xc0149ed040, 0x7f4baaa21718, 0xc00c150c50, 0xc01c65a700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc01c5dd080, 0xc00d5b8100, 0x60dec80, 0xc00c150c50, 0xc01c65a700)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:56304]
I0211 18:53:35.314977  123569 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.246331ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56298]
I0211 18:53:35.315374  123569 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:aws-cloud-provider
I0211 18:53:35.333968  123569 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:volume-scheduler: (1.257622ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56298]
I0211 18:53:35.354719  123569 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.099926ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56298]
I0211 18:53:35.355063  123569 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:volume-scheduler
I0211 18:53:35.374390  123569 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:node: (1.567134ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56298]
I0211 18:53:35.394926  123569 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.185691ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56298]
I0211 18:53:35.395219  123569 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:node
I0211 18:53:35.413921  123569 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0211 18:53:35.414019  123569 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:attachdetach-controller: (1.403797ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56298]
I0211 18:53:35.414153  123569 wrap.go:47] GET /healthz: (977.736µs) 500
goroutine 28858 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc01c5fb1f0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc01c5fb1f0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc018d70a40, 0x1f4)
net/http.Error(0x7f4baaa21718, 0xc003175db0, 0xc003a09e00, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f4baaa21718, 0xc003175db0, 0xc01c676f00)
net/http.HandlerFunc.ServeHTTP(0xc00e13f6e0, 0x7f4baaa21718, 0xc003175db0, 0xc01c676f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc01940cb40, 0x7f4baaa21718, 0xc003175db0, 0xc01c676f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc0033436c0, 0x7f4baaa21718, 0xc003175db0, 0xc01c676f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x41c1830, 0xe, 0xc018118510, 0xc0033436c0, 0x7f4baaa21718, 0xc003175db0, 0xc01c676f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f4baaa21718, 0xc003175db0, 0xc01c676f00)
net/http.HandlerFunc.ServeHTTP(0xc00d9f09c0, 0x7f4baaa21718, 0xc003175db0, 0xc01c676f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f4baaa21718, 0xc003175db0, 0xc01c676f00)
net/http.HandlerFunc.ServeHTTP(0xc0148aa600, 0x7f4baaa21718, 0xc003175db0, 0xc01c676f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f4baaa21718, 0xc003175db0, 0xc01c676f00)
net/http.HandlerFunc.ServeHTTP(0xc00d9f0a00, 0x7f4baaa21718, 0xc003175db0, 0xc01c676f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f4baaa21718, 0xc003175db0, 0xc01c676e00)
net/http.HandlerFunc.ServeHTTP(0xc0149ed040, 0x7f4baaa21718, 0xc003175db0, 0xc01c676e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc01c5afc80, 0xc00d5b8100, 0x60dec80, 0xc003175db0, 0xc01c676e00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:56304]
I0211 18:53:35.434847  123569 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.212255ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56304]
I0211 18:53:35.435098  123569 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:attachdetach-controller
I0211 18:53:35.454007  123569 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:clusterrole-aggregation-controller: (1.407457ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56304]
I0211 18:53:35.474861  123569 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.093893ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56304]
I0211 18:53:35.475072  123569 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:clusterrole-aggregation-controller
I0211 18:53:35.494510  123569 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:cronjob-controller: (1.550254ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56304]
I0211 18:53:35.514339  123569 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0211 18:53:35.514545  123569 wrap.go:47] GET /healthz: (1.122629ms) 500
goroutine 28898 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc01c5fbab0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc01c5fbab0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc018e3c000, 0x1f4)
net/http.Error(0x7f4baaa21718, 0xc016c32088, 0xc000028640, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f4baaa21718, 0xc016c32088, 0xc01c70a000)
net/http.HandlerFunc.ServeHTTP(0xc00e13f6e0, 0x7f4baaa21718, 0xc016c32088, 0xc01c70a000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc01940cb40, 0x7f4baaa21718, 0xc016c32088, 0xc01c70a000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc0033436c0, 0x7f4baaa21718, 0xc016c32088, 0xc01c70a000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x41c1830, 0xe, 0xc018118510, 0xc0033436c0, 0x7f4baaa21718, 0xc016c32088, 0xc01c70a000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f4baaa21718, 0xc016c32088, 0xc01c70a000)
net/http.HandlerFunc.ServeHTTP(0xc00d9f09c0, 0x7f4baaa21718, 0xc016c32088, 0xc01c70a000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f4baaa21718, 0xc016c32088, 0xc01c70a000)
net/http.HandlerFunc.ServeHTTP(0xc0148aa600, 0x7f4baaa21718, 0xc016c32088, 0xc01c70a000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f4baaa21718, 0xc016c32088, 0xc01c70a000)
net/http.HandlerFunc.ServeHTTP(0xc00d9f0a00, 0x7f4baaa21718, 0xc016c32088, 0xc01c70a000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f4baaa21718, 0xc016c32088, 0xc01c677f00)
net/http.HandlerFunc.ServeHTTP(0xc0149ed040, 0x7f4baaa21718, 0xc016c32088, 0xc01c677f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc01c6de540, 0xc00d5b8100, 0x60dec80, 0xc016c32088, 0xc01c677f00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:56298]
I0211 18:53:35.514653  123569 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.894308ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56304]
I0211 18:53:35.514906  123569 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:cronjob-controller
I0211 18:53:35.537310  123569 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:daemon-set-controller: (1.170246ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56304]
I0211 18:53:35.555000  123569 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.205107ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56304]
I0211 18:53:35.555310  123569 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:daemon-set-controller
I0211 18:53:35.574164  123569 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:deployment-controller: (1.426185ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56304]
I0211 18:53:35.595639  123569 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.966621ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56304]
I0211 18:53:35.595941  123569 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:deployment-controller
I0211 18:53:35.613892  123569 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:disruption-controller: (1.115941ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56304]
I0211 18:53:35.614052  123569 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0211 18:53:35.614271  123569 wrap.go:47] GET /healthz: (1.168667ms) 500
I0211 18:53:35.634928  123569 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.266589ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56298]
I0211 18:53:35.635229  123569 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:disruption-controller
I0211 18:53:35.653860  123569 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:endpoint-controller: (1.260465ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56298]
I0211 18:53:35.674789  123569 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.098454ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56298]
I0211 18:53:35.675065  123569 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:endpoint-controller
I0211 18:53:35.694002  123569 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:expand-controller: (1.335958ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56298]
I0211 18:53:35.714373  123569 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0211 18:53:35.714540  123569 wrap.go:47] GET /healthz: (1.376ms) 500
I0211 18:53:35.714643  123569 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.042285ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56298]
I0211 18:53:35.715051  123569 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:expand-controller
I0211 18:53:35.734039  123569 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:generic-garbage-collector: (1.395673ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56298]
I0211 18:53:35.754980  123569 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.329524ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56298]
I0211 18:53:35.755353  123569 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:generic-garbage-collector
I0211 18:53:35.774236  123569 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:horizontal-pod-autoscaler: (1.523799ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56298]
I0211 18:53:35.794910  123569 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.093318ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56298]
I0211 18:53:35.795197  123569 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:horizontal-pod-autoscaler
I0211 18:53:35.813853  123569 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:job-controller: (1.151564ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56298]
I0211 18:53:35.813958  123569 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0211 18:53:35.814516  123569 wrap.go:47] GET /healthz: (1.301891ms) 500
I0211 18:53:35.835127  123569 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.441833ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56304]
I0211 18:53:35.835392  123569 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:job-controller
I0211 18:53:35.855783  123569 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:namespace-controller: (1.470519ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56304]
I0211 18:53:35.875164  123569 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.395705ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56304]
I0211 18:53:35.875473  123569 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:namespace-controller
I0211 18:53:35.894464  123569 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:node-controller: (1.758817ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56304]
I0211 18:53:35.914729  123569 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0211 18:53:35.914942  123569 wrap.go:47] GET /healthz: (1.757486ms) 500
I0211 18:53:35.915266  123569 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.661892ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56304]
I0211 18:53:35.915473  123569 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:node-controller
I0211 18:53:35.934066  123569 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:persistent-volume-binder: (1.390771ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56304]
I0211 18:53:35.954728  123569 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.067128ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56304]
I0211 18:53:35.954992  123569 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:persistent-volume-binder
I0211 18:53:35.974102  123569 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pod-garbage-collector: (1.387753ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56304]
I0211 18:53:35.994841  123569 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.206606ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56304]
I0211 18:53:35.995092  123569 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pod-garbage-collector
I0211 18:53:36.013933  123569 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:replicaset-controller: (1.283729ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56304]
I0211 18:53:36.014320  123569 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0211 18:53:36.014747  123569 wrap.go:47] GET /healthz: (1.523116ms) 500
I0211 18:53:36.034854  123569 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.240525ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56298]
I0211 18:53:36.035205  123569 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:replicaset-controller
I0211 18:53:36.053960  123569 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:replication-controller: (1.317281ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56298]
I0211 18:53:36.074749  123569 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.084364ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56298]
I0211 18:53:36.075070  123569 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:replication-controller
I0211 18:53:36.094151  123569 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:resourcequota-controller: (1.461596ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56298]
I0211 18:53:36.114496  123569 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0211 18:53:36.114681  123569 wrap.go:47] GET /healthz: (1.427039ms) 500
I0211 18:53:36.114779  123569 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.119961ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56298]
I0211 18:53:36.115048  123569 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:resourcequota-controller
I0211 18:53:36.134117  123569 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:route-controller: (1.369303ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56298]
I0211 18:53:36.154909  123569 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.235649ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56298]
I0211 18:53:36.155355  123569 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:route-controller
I0211 18:53:36.174140  123569 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:service-account-controller: (1.322253ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56298]
I0211 18:53:36.195074  123569 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.286079ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56298]
I0211 18:53:36.195423  123569 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:service-account-controller
I0211 18:53:36.213762  123569 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:service-controller: (1.09258ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56298]
I0211 18:53:36.213880  123569 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0211 18:53:36.214031  123569 wrap.go:47] GET /healthz: (880.283µs) 500
I0211 18:53:36.234778  123569 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.950789ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56304]
I0211 18:53:36.235026  123569 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:service-controller
I0211 18:53:36.254297  123569 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:statefulset-controller: (1.5039ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56304]
I0211 18:53:36.275025  123569 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.302267ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56304]
I0211 18:53:36.275310  123569 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:statefulset-controller
I0211 18:53:36.294972  123569 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:ttl-controller: (1.486905ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56304]
I0211 18:53:36.314709  123569 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0211 18:53:36.314883  123569 wrap.go:47] GET /healthz: (1.347063ms) 500
I0211 18:53:36.315470  123569 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.840666ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56304]
I0211 18:53:36.315759  123569 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:ttl-controller
I0211 18:53:36.333916  123569 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:certificate-controller: (1.270776ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56298]
I0211 18:53:36.354761  123569 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.119471ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56298]
I0211 18:53:36.355103  123569 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:certificate-controller
I0211 18:53:36.374234  123569 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pvc-protection-controller: (1.593598ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56298]
I0211 18:53:36.395558  123569 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.494712ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56298]
I0211 18:53:36.395934  123569 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pvc-protection-controller
I0211 18:53:36.414059  123569 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0211 18:53:36.414100  123569 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pv-protection-controller: (1.473178ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56298]
I0211 18:53:36.414270  123569 wrap.go:47] GET /healthz: (1.073207ms) 500
goroutine 29055 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc01c827a40, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc01c827a40, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc019180820, 0x1f4)
net/http.Error(0x7f4baaa21718, 0xc01670ca50, 0xc0036c1e00, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f4baaa21718, 0xc01670ca50, 0xc01c95a600)
net/http.HandlerFunc.ServeHTTP(0xc00e13f6e0, 0x7f4baaa21718, 0xc01670ca50, 0xc01c95a600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc01940cb40, 0x7f4baaa21718, 0xc01670ca50, 0xc01c95a600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc0033436c0, 0x7f4baaa21718, 0xc01670ca50, 0xc01c95a600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x41c1830, 0xe, 0xc018118510, 0xc0033436c0, 0x7f4baaa21718, 0xc01670ca50, 0xc01c95a600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f4baaa21718, 0xc01670ca50, 0xc01c95a600)
net/http.HandlerFunc.ServeHTTP(0xc00d9f09c0, 0x7f4baaa21718, 0xc01670ca50, 0xc01c95a600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f4baaa21718, 0xc01670ca50, 0xc01c95a600)
net/http.HandlerFunc.ServeHTTP(0xc0148aa600, 0x7f4baaa21718, 0xc01670ca50, 0xc01c95a600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f4baaa21718, 0xc01670ca50, 0xc01c95a600)
net/http.HandlerFunc.ServeHTTP(0xc00d9f0a00, 0x7f4baaa21718, 0xc01670ca50, 0xc01c95a600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f4baaa21718, 0xc01670ca50, 0xc01c95a500)
net/http.HandlerFunc.ServeHTTP(0xc0149ed040, 0x7f4baaa21718, 0xc01670ca50, 0xc01c95a500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc01c8872c0, 0xc00d5b8100, 0x60dec80, 0xc01670ca50, 0xc01c95a500)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:56304]
I0211 18:53:36.434810  123569 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.125953ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56304]
I0211 18:53:36.435129  123569 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pv-protection-controller
I0211 18:53:36.454413  123569 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/extension-apiserver-authentication-reader: (1.640815ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56304]
I0211 18:53:36.456447  123569 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.529642ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56304]
I0211 18:53:36.476166  123569 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.246245ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56304]
I0211 18:53:36.476447  123569 storage_rbac.go:246] created role.rbac.authorization.k8s.io/extension-apiserver-authentication-reader in kube-system
I0211 18:53:36.494205  123569 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:bootstrap-signer: (1.390138ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56304]
I0211 18:53:36.496004  123569 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.33062ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56304]
I0211 18:53:36.514314  123569 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0211 18:53:36.514552  123569 wrap.go:47] GET /healthz: (1.331322ms) 500
goroutine 29111 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc01c9b2b60, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc01c9b2b60, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0191d7760, 0x1f4)
net/http.Error(0x7f4baaa21718, 0xc016c328a8, 0xc01ca2a140, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f4baaa21718, 0xc016c328a8, 0xc01c999e00)
net/http.HandlerFunc.ServeHTTP(0xc00e13f6e0, 0x7f4baaa21718, 0xc016c328a8, 0xc01c999e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc01940cb40, 0x7f4baaa21718, 0xc016c328a8, 0xc01c999e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc0033436c0, 0x7f4baaa21718, 0xc016c328a8, 0xc01c999e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x41c1830, 0xe, 0xc018118510, 0xc0033436c0, 0x7f4baaa21718, 0xc016c328a8, 0xc01c999e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f4baaa21718, 0xc016c328a8, 0xc01c999e00)
net/http.HandlerFunc.ServeHTTP(0xc00d9f09c0, 0x7f4baaa21718, 0xc016c328a8, 0xc01c999e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f4baaa21718, 0xc016c328a8, 0xc01c999e00)
net/http.HandlerFunc.ServeHTTP(0xc0148aa600, 0x7f4baaa21718, 0xc016c328a8, 0xc01c999e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f4baaa21718, 0xc016c328a8, 0xc01c999e00)
net/http.HandlerFunc.ServeHTTP(0xc00d9f0a00, 0x7f4baaa21718, 0xc016c328a8, 0xc01c999e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f4baaa21718, 0xc016c328a8, 0xc01c999d00)
net/http.HandlerFunc.ServeHTTP(0xc0149ed040, 0x7f4baaa21718, 0xc016c328a8, 0xc01c999d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc01ca08600, 0xc00d5b8100, 0x60dec80, 0xc016c328a8, 0xc01c999d00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:56298]
I0211 18:53:36.514802  123569 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.001119ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56304]
I0211 18:53:36.515045  123569 storage_rbac.go:246] created role.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-system
I0211 18:53:36.534068  123569 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:cloud-provider: (1.358305ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56304]
I0211 18:53:36.536296  123569 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.552143ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56304]
I0211 18:53:36.555357  123569 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.605549ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56304]
I0211 18:53:36.555649  123569 storage_rbac.go:246] created role.rbac.authorization.k8s.io/system:controller:cloud-provider in kube-system
I0211 18:53:36.574074  123569 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:token-cleaner: (1.310278ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56304]
I0211 18:53:36.576022  123569 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.367694ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56304]
I0211 18:53:36.595115  123569 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.395033ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56304]
I0211 18:53:36.595409  123569 storage_rbac.go:246] created role.rbac.authorization.k8s.io/system:controller:token-cleaner in kube-system
I0211 18:53:36.614077  123569 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system::leader-locking-kube-controller-manager: (1.368144ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56304]
I0211 18:53:36.614119  123569 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0211 18:53:36.614382  123569 wrap.go:47] GET /healthz: (1.227749ms) 500
goroutine 29156 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc01ca04a80, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc01ca04a80, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc019294500, 0x1f4)
net/http.Error(0x7f4baaa21718, 0xc00cf9b0e0, 0xc000028c80, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f4baaa21718, 0xc00cf9b0e0, 0xc01c9e5500)
net/http.HandlerFunc.ServeHTTP(0xc00e13f6e0, 0x7f4baaa21718, 0xc00cf9b0e0, 0xc01c9e5500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc01940cb40, 0x7f4baaa21718, 0xc00cf9b0e0, 0xc01c9e5500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc0033436c0, 0x7f4baaa21718, 0xc00cf9b0e0, 0xc01c9e5500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x41c1830, 0xe, 0xc018118510, 0xc0033436c0, 0x7f4baaa21718, 0xc00cf9b0e0, 0xc01c9e5500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f4baaa21718, 0xc00cf9b0e0, 0xc01c9e5500)
net/http.HandlerFunc.ServeHTTP(0xc00d9f09c0, 0x7f4baaa21718, 0xc00cf9b0e0, 0xc01c9e5500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f4baaa21718, 0xc00cf9b0e0, 0xc01c9e5500)
net/http.HandlerFunc.ServeHTTP(0xc0148aa600, 0x7f4baaa21718, 0xc00cf9b0e0, 0xc01c9e5500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f4baaa21718, 0xc00cf9b0e0, 0xc01c9e5500)
net/http.HandlerFunc.ServeHTTP(0xc00d9f0a00, 0x7f4baaa21718, 0xc00cf9b0e0, 0xc01c9e5500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f4baaa21718, 0xc00cf9b0e0, 0xc01c9e5400)
net/http.HandlerFunc.ServeHTTP(0xc0149ed040, 0x7f4baaa21718, 0xc00cf9b0e0, 0xc01c9e5400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc01ca6c420, 0xc00d5b8100, 0x60dec80, 0xc00cf9b0e0, 0xc01c9e5400)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:56298]
I0211 18:53:36.616008  123569 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.320108ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56298]
I0211 18:53:36.634693  123569 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (1.978979ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56298]
I0211 18:53:36.634927  123569 storage_rbac.go:246] created role.rbac.authorization.k8s.io/system::leader-locking-kube-controller-manager in kube-system
I0211 18:53:36.653642  123569 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system::leader-locking-kube-scheduler: (1.06123ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56298]
I0211 18:53:36.655857  123569 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.641566ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56298]
I0211 18:53:36.674844  123569 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.018066ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56298]
I0211 18:53:36.675099  123569 storage_rbac.go:246] created role.rbac.authorization.k8s.io/system::leader-locking-kube-scheduler in kube-system
I0211 18:53:36.693724  123569 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/roles/system:controller:bootstrap-signer: (1.080846ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56298]
I0211 18:53:36.695568  123569 wrap.go:47] GET /api/v1/namespaces/kube-public: (1.268696ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56298]
I0211 18:53:36.714274  123569 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0211 18:53:36.714453  123569 wrap.go:47] GET /healthz: (1.312254ms) 500
goroutine 29173 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc01cab8cb0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc01cab8cb0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0192d6600, 0x1f4)
net/http.Error(0x7f4baaa21718, 0xc016c32be0, 0xc0036d1180, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f4baaa21718, 0xc016c32be0, 0xc01cab5400)
net/http.HandlerFunc.ServeHTTP(0xc00e13f6e0, 0x7f4baaa21718, 0xc016c32be0, 0xc01cab5400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc01940cb40, 0x7f4baaa21718, 0xc016c32be0, 0xc01cab5400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc0033436c0, 0x7f4baaa21718, 0xc016c32be0, 0xc01cab5400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x41c1830, 0xe, 0xc018118510, 0xc0033436c0, 0x7f4baaa21718, 0xc016c32be0, 0xc01cab5400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f4baaa21718, 0xc016c32be0, 0xc01cab5400)
net/http.HandlerFunc.ServeHTTP(0xc00d9f09c0, 0x7f4baaa21718, 0xc016c32be0, 0xc01cab5400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f4baaa21718, 0xc016c32be0, 0xc01cab5400)
net/http.HandlerFunc.ServeHTTP(0xc0148aa600, 0x7f4baaa21718, 0xc016c32be0, 0xc01cab5400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f4baaa21718, 0xc016c32be0, 0xc01cab5400)
net/http.HandlerFunc.ServeHTTP(0xc00d9f0a00, 0x7f4baaa21718, 0xc016c32be0, 0xc01cab5400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f4baaa21718, 0xc016c32be0, 0xc01cab5300)
net/http.HandlerFunc.ServeHTTP(0xc0149ed040, 0x7f4baaa21718, 0xc016c32be0, 0xc01cab5300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc01caee6c0, 0xc00d5b8100, 0x60dec80, 0xc016c32be0, 0xc01cab5300)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:56304]
I0211 18:53:36.715053  123569 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/roles: (2.3158ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56298]
I0211 18:53:36.715335  123569 storage_rbac.go:246] created role.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-public
I0211 18:53:36.734022  123569 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::extension-apiserver-authentication-reader: (1.282707ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56298]
I0211 18:53:36.735813  123569 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.309415ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56298]
I0211 18:53:36.757659  123569 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.504841ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56298]
I0211 18:53:36.758058  123569 storage_rbac.go:276] created rolebinding.rbac.authorization.k8s.io/system::extension-apiserver-authentication-reader in kube-system
I0211 18:53:36.774211  123569 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::leader-locking-kube-controller-manager: (1.379786ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56298]
I0211 18:53:36.776091  123569 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.366533ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56298]
I0211 18:53:36.794858  123569 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.060366ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56298]
I0211 18:53:36.795092  123569 storage_rbac.go:276] created rolebinding.rbac.authorization.k8s.io/system::leader-locking-kube-controller-manager in kube-system
I0211 18:53:36.814199  123569 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::leader-locking-kube-scheduler: (1.377904ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56298]
I0211 18:53:36.814281  123569 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0211 18:53:36.814461  123569 wrap.go:47] GET /healthz: (1.33269ms) 500
goroutine 29190 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc01c9a3d50, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc01c9a3d50, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0192fcd80, 0x1f4)
net/http.Error(0x7f4baaa21718, 0xc016081ea0, 0xc01ca2a500, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f4baaa21718, 0xc016081ea0, 0xc01cb2f100)
net/http.HandlerFunc.ServeHTTP(0xc00e13f6e0, 0x7f4baaa21718, 0xc016081ea0, 0xc01cb2f100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc01940cb40, 0x7f4baaa21718, 0xc016081ea0, 0xc01cb2f100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc0033436c0, 0x7f4baaa21718, 0xc016081ea0, 0xc01cb2f100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x41c1830, 0xe, 0xc018118510, 0xc0033436c0, 0x7f4baaa21718, 0xc016081ea0, 0xc01cb2f100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f4baaa21718, 0xc016081ea0, 0xc01cb2f100)
net/http.HandlerFunc.ServeHTTP(0xc00d9f09c0, 0x7f4baaa21718, 0xc016081ea0, 0xc01cb2f100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f4baaa21718, 0xc016081ea0, 0xc01cb2f100)
net/http.HandlerFunc.ServeHTTP(0xc0148aa600, 0x7f4baaa21718, 0xc016081ea0, 0xc01cb2f100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f4baaa21718, 0xc016081ea0, 0xc01cb2f100)
net/http.HandlerFunc.ServeHTTP(0xc00d9f0a00, 0x7f4baaa21718, 0xc016081ea0, 0xc01cb2f100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f4baaa21718, 0xc016081ea0, 0xc01cb2f000)
net/http.HandlerFunc.ServeHTTP(0xc0149ed040, 0x7f4baaa21718, 0xc016081ea0, 0xc01cb2f000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc01c96d980, 0xc00d5b8100, 0x60dec80, 0xc016081ea0, 0xc01cb2f000)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:56304]
I0211 18:53:36.816085  123569 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.466783ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56298]
I0211 18:53:36.834963  123569 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.271309ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56298]
I0211 18:53:36.835350  123569 storage_rbac.go:276] created rolebinding.rbac.authorization.k8s.io/system::leader-locking-kube-scheduler in kube-system
I0211 18:53:36.854239  123569 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:bootstrap-signer: (1.469856ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56298]
I0211 18:53:36.856259  123569 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.437308ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56298]
I0211 18:53:36.874946  123569 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.174241ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56298]
I0211 18:53:36.875294  123569 storage_rbac.go:276] created rolebinding.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-system
I0211 18:53:36.894345  123569 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:cloud-provider: (1.558597ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56298]
I0211 18:53:36.896495  123569 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.517369ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56298]
I0211 18:53:36.914470  123569 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0211 18:53:36.914668  123569 wrap.go:47] GET /healthz: (1.336643ms) 500
goroutine 29194 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc01cb687e0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc01cb687e0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc019354300, 0x1f4)
net/http.Error(0x7f4baaa21718, 0xc016081f90, 0xc000029180, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f4baaa21718, 0xc016081f90, 0xc01cb2fe00)
net/http.HandlerFunc.ServeHTTP(0xc00e13f6e0, 0x7f4baaa21718, 0xc016081f90, 0xc01cb2fe00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc01940cb40, 0x7f4baaa21718, 0xc016081f90, 0xc01cb2fe00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc0033436c0, 0x7f4baaa21718, 0xc016081f90, 0xc01cb2fe00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x41c1830, 0xe, 0xc018118510, 0xc0033436c0, 0x7f4baaa21718, 0xc016081f90, 0xc01cb2fe00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f4baaa21718, 0xc016081f90, 0xc01cb2fe00)
net/http.HandlerFunc.ServeHTTP(0xc00d9f09c0, 0x7f4baaa21718, 0xc016081f90, 0xc01cb2fe00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f4baaa21718, 0xc016081f90, 0xc01cb2fe00)
net/http.HandlerFunc.ServeHTTP(0xc0148aa600, 0x7f4baaa21718, 0xc016081f90, 0xc01cb2fe00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f4baaa21718, 0xc016081f90, 0xc01cb2fe00)
net/http.HandlerFunc.ServeHTTP(0xc00d9f0a00, 0x7f4baaa21718, 0xc016081f90, 0xc01cb2fe00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f4baaa21718, 0xc016081f90, 0xc01cb2fd00)
net/http.HandlerFunc.ServeHTTP(0xc0149ed040, 0x7f4baaa21718, 0xc016081f90, 0xc01cb2fd00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc01cb960c0, 0xc00d5b8100, 0x60dec80, 0xc016081f90, 0xc01cb2fd00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:56304]
I0211 18:53:36.915270  123569 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.473452ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56298]
I0211 18:53:36.915550  123569 storage_rbac.go:276] created rolebinding.rbac.authorization.k8s.io/system:controller:cloud-provider in kube-system
E0211 18:53:36.923403  123569 event.go:200] Unable to write event: 'Patch http://127.0.0.1:45341/api/v1/namespaces/prebind-plugin3a720506-2e2e-11e9-aa1d-0242ac110002/events/test-pod.158263ff89188a0e: dial tcp 127.0.0.1:45341: connect: connection refused' (may retry after sleeping)
I0211 18:53:36.934284  123569 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:token-cleaner: (1.590877ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56298]
I0211 18:53:36.937372  123569 wrap.go:47] GET /api/v1/namespaces/kube-system: (2.009402ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56298]
I0211 18:53:36.954771  123569 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.090038ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56298]
I0211 18:53:36.955005  123569 storage_rbac.go:276] created rolebinding.rbac.authorization.k8s.io/system:controller:token-cleaner in kube-system
I0211 18:53:36.974232  123569 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/rolebindings/system:controller:bootstrap-signer: (1.549598ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56298]
I0211 18:53:36.976032  123569 wrap.go:47] GET /api/v1/namespaces/kube-public: (1.297067ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56298]
I0211 18:53:36.994502  123569 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/rolebindings: (1.828609ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56298]
I0211 18:53:36.994761  123569 storage_rbac.go:276] created rolebinding.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-public
I0211 18:53:37.014248  123569 wrap.go:47] GET /healthz: (1.013044ms) 200 [Go-http-client/1.1 127.0.0.1:56298]
W0211 18:53:37.014934  123569 mutation_detector.go:48] Mutation detector is enabled, this will result in memory leakage.
W0211 18:53:37.014973  123569 mutation_detector.go:48] Mutation detector is enabled, this will result in memory leakage.
W0211 18:53:37.014999  123569 mutation_detector.go:48] Mutation detector is enabled, this will result in memory leakage.
W0211 18:53:37.015008  123569 mutation_detector.go:48] Mutation detector is enabled, this will result in memory leakage.
W0211 18:53:37.015017  123569 mutation_detector.go:48] Mutation detector is enabled, this will result in memory leakage.
W0211 18:53:37.015026  123569 mutation_detector.go:48] Mutation detector is enabled, this will result in memory leakage.
W0211 18:53:37.015034  123569 mutation_detector.go:48] Mutation detector is enabled, this will result in memory leakage.
W0211 18:53:37.015045  123569 mutation_detector.go:48] Mutation detector is enabled, this will result in memory leakage.
W0211 18:53:37.015063  123569 mutation_detector.go:48] Mutation detector is enabled, this will result in memory leakage.
W0211 18:53:37.015072  123569 mutation_detector.go:48] Mutation detector is enabled, this will result in memory leakage.
I0211 18:53:37.015138  123569 factory.go:331] Creating scheduler from algorithm provider 'DefaultProvider'
I0211 18:53:37.015149  123569 factory.go:412] Creating scheduler with fit predicates 'map[MaxGCEPDVolumeCount:{} MaxEBSVolumeCount:{} MaxAzureDiskVolumeCount:{} NoDiskConflict:{} CheckNodeMemoryPressure:{} CheckVolumeBinding:{} MatchInterPodAffinity:{} GeneralPredicates:{} CheckNodeCondition:{} PodToleratesNodeTaints:{} NoVolumeZoneConflict:{} MaxCSIVolumeCountPred:{} CheckNodeDiskPressure:{} CheckNodePIDPressure:{}]' and priority functions 'map[NodePreferAvoidPodsPriority:{} NodeAffinityPriority:{} TaintTolerationPriority:{} ImageLocalityPriority:{} SelectorSpreadPriority:{} InterPodAffinityPriority:{} LeastRequestedPriority:{} BalancedResourceAllocation:{}]'
I0211 18:53:37.015318  123569 controller_utils.go:1021] Waiting for caches to sync for scheduler controller
I0211 18:53:37.015687  123569 reflector.go:132] Starting reflector *v1.Pod (12h0m0s) from k8s.io/kubernetes/test/integration/scheduler/util.go:210
I0211 18:53:37.015701  123569 reflector.go:170] Listing and watching *v1.Pod from k8s.io/kubernetes/test/integration/scheduler/util.go:210
I0211 18:53:37.016576  123569 wrap.go:47] GET /api/v1/pods?fieldSelector=status.phase%21%3DFailed%2Cstatus.phase%21%3DSucceeded&limit=500&resourceVersion=0: (635.507µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56298]
I0211 18:53:37.017344  123569 get.go:251] Starting watch for /api/v1/pods, rv=19414 labels= fields=status.phase!=Failed,status.phase!=Succeeded timeout=6m52s
I0211 18:53:37.115501  123569 shared_informer.go:123] caches populated
I0211 18:53:37.115540  123569 controller_utils.go:1028] Caches are synced for scheduler controller
I0211 18:53:37.116269  123569 reflector.go:132] Starting reflector *v1.StatefulSet (1s) from k8s.io/client-go/informers/factory.go:132
I0211 18:53:37.116304  123569 reflector.go:170] Listing and watching *v1.StatefulSet from k8s.io/client-go/informers/factory.go:132
I0211 18:53:37.116497  123569 reflector.go:132] Starting reflector *v1.ReplicaSet (1s) from k8s.io/client-go/informers/factory.go:132
I0211 18:53:37.116525  123569 reflector.go:170] Listing and watching *v1.ReplicaSet from k8s.io/client-go/informers/factory.go:132
I0211 18:53:37.116548  123569 reflector.go:132] Starting reflector *v1.PersistentVolume (1s) from k8s.io/client-go/informers/factory.go:132
I0211 18:53:37.116564  123569 reflector.go:170] Listing and watching *v1.PersistentVolume from k8s.io/client-go/informers/factory.go:132
I0211 18:53:37.116709  123569 reflector.go:132] Starting reflector *v1.ReplicationController (1s) from k8s.io/client-go/informers/factory.go:132
I0211 18:53:37.116720  123569 reflector.go:170] Listing and watching *v1.ReplicationController from k8s.io/client-go/informers/factory.go:132
I0211 18:53:37.116890  123569 reflector.go:132] Starting reflector *v1.Node (1s) from k8s.io/client-go/informers/factory.go:132
I0211 18:53:37.116907  123569 reflector.go:170] Listing and watching *v1.Node from k8s.io/client-go/informers/factory.go:132
I0211 18:53:37.117061  123569 reflector.go:132] Starting reflector *v1.StorageClass (1s) from k8s.io/client-go/informers/factory.go:132
I0211 18:53:37.117075  123569 reflector.go:170] Listing and watching *v1.StorageClass from k8s.io/client-go/informers/factory.go:132
I0211 18:53:37.117288  123569 reflector.go:132] Starting reflector *v1.PersistentVolumeClaim (1s) from k8s.io/client-go/informers/factory.go:132
I0211 18:53:37.117302  123569 reflector.go:170] Listing and watching *v1.PersistentVolumeClaim from k8s.io/client-go/informers/factory.go:132
I0211 18:53:37.117322  123569 wrap.go:47] GET /apis/apps/v1/statefulsets?limit=500&resourceVersion=0: (620.166µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56304]
I0211 18:53:37.117330  123569 reflector.go:132] Starting reflector *v1.Service (1s) from k8s.io/client-go/informers/factory.go:132
I0211 18:53:37.117345  123569 reflector.go:170] Listing and watching *v1.Service from k8s.io/client-go/informers/factory.go:132
I0211 18:53:37.117997  123569 wrap.go:47] GET /api/v1/nodes?limit=500&resourceVersion=0: (423.242µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56570]
I0211 18:53:37.117997  123569 wrap.go:47] GET /api/v1/replicationcontrollers?limit=500&resourceVersion=0: (656.575µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56568]
I0211 18:53:37.118131  123569 wrap.go:47] GET /apis/apps/v1/replicasets?limit=500&resourceVersion=0: (473.658µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56564]
I0211 18:53:37.118316  123569 get.go:251] Starting watch for /apis/apps/v1/statefulsets, rv=19417 labels= fields= timeout=8m17s
I0211 18:53:37.118336  123569 wrap.go:47] GET /api/v1/services?limit=500&resourceVersion=0: (553.195µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56576]
I0211 18:53:37.118467  123569 wrap.go:47] GET /api/v1/persistentvolumeclaims?limit=500&resourceVersion=0: (504.143µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56574]
I0211 18:53:37.118495  123569 wrap.go:47] GET /apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0: (373.265µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56572]
I0211 18:53:37.118775  123569 wrap.go:47] GET /api/v1/persistentvolumes?limit=500&resourceVersion=0: (501.769µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56562]
I0211 18:53:37.119046  123569 get.go:251] Starting watch for /apis/apps/v1/replicasets, rv=19418 labels= fields= timeout=7m18s
I0211 18:53:37.119388  123569 get.go:251] Starting watch for /api/v1/services, rv=19425 labels= fields= timeout=7m59s
I0211 18:53:37.119474  123569 get.go:251] Starting watch for /api/v1/replicationcontrollers, rv=19414 labels= fields= timeout=6m7s
I0211 18:53:37.119565  123569 get.go:251] Starting watch for /apis/storage.k8s.io/v1/storageclasses, rv=19416 labels= fields= timeout=6m55s
I0211 18:53:37.119579  123569 get.go:251] Starting watch for /api/v1/persistentvolumeclaims, rv=19414 labels= fields= timeout=7m45s
I0211 18:53:37.119713  123569 get.go:251] Starting watch for /api/v1/persistentvolumes, rv=19414 labels= fields= timeout=9m43s
I0211 18:53:37.119834  123569 get.go:251] Starting watch for /api/v1/nodes, rv=19414 labels= fields= timeout=9m59s
I0211 18:53:37.120286  123569 reflector.go:132] Starting reflector *v1beta1.PodDisruptionBudget (1s) from k8s.io/client-go/informers/factory.go:132
I0211 18:53:37.120309  123569 reflector.go:170] Listing and watching *v1beta1.PodDisruptionBudget from k8s.io/client-go/informers/factory.go:132
I0211 18:53:37.121016  123569 wrap.go:47] GET /apis/policy/v1beta1/poddisruptionbudgets?limit=500&resourceVersion=0: (448.696µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56586]
I0211 18:53:37.121881  123569 get.go:251] Starting watch for /apis/policy/v1beta1/poddisruptionbudgets, rv=19416 labels= fields= timeout=5m44s
I0211 18:53:37.216033  123569 shared_informer.go:123] caches populated
I0211 18:53:37.316263  123569 shared_informer.go:123] caches populated
I0211 18:53:37.416549  123569 shared_informer.go:123] caches populated
I0211 18:53:37.516801  123569 shared_informer.go:123] caches populated
I0211 18:53:37.616995  123569 shared_informer.go:123] caches populated
I0211 18:53:37.717246  123569 shared_informer.go:123] caches populated
I0211 18:53:37.817499  123569 shared_informer.go:123] caches populated
I0211 18:53:37.917797  123569 shared_informer.go:123] caches populated
I0211 18:53:38.018024  123569 shared_informer.go:123] caches populated
I0211 18:53:38.118474  123569 shared_informer.go:123] caches populated
I0211 18:53:38.119010  123569 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 18:53:38.119038  123569 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 18:53:38.119291  123569 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 18:53:38.119435  123569 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 18:53:38.119436  123569 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 18:53:38.146155  123569 wrap.go:47] POST /api/v1/nodes: (27.239769ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56582]
I0211 18:53:38.148797  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods: (2.068502ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56582]
I0211 18:53:38.149135  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/rpod-0
I0211 18:53:38.149160  123569 scheduler.go:453] Attempting to schedule pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/rpod-0
I0211 18:53:38.149340  123569 scheduler_binder.go:269] AssumePodVolumes for pod "preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/rpod-0", node "node1"
I0211 18:53:38.149365  123569 scheduler_binder.go:279] AssumePodVolumes for pod "preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/rpod-0", node "node1": all PVCs bound and nothing to do
I0211 18:53:38.149424  123569 factory.go:733] Attempting to bind rpod-0 to node1
I0211 18:53:38.151526  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/rpod-1
I0211 18:53:38.151549  123569 scheduler.go:453] Attempting to schedule pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/rpod-1
I0211 18:53:38.151571  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/rpod-0/binding: (1.663155ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56594]
I0211 18:53:38.151671  123569 scheduler_binder.go:269] AssumePodVolumes for pod "preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/rpod-1", node "node1"
I0211 18:53:38.151684  123569 scheduler_binder.go:279] AssumePodVolumes for pod "preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/rpod-1", node "node1": all PVCs bound and nothing to do
I0211 18:53:38.151722  123569 factory.go:733] Attempting to bind rpod-1 to node1
I0211 18:53:38.151798  123569 scheduler.go:571] pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/rpod-0 is bound successfully on node node1, 1 nodes evaluated, 1 nodes were found feasible
I0211 18:53:38.151841  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods: (2.476308ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56582]
I0211 18:53:38.153646  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/rpod-1/binding: (1.717935ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56594]
I0211 18:53:38.153830  123569 scheduler.go:571] pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/rpod-1 is bound successfully on node node1, 1 nodes evaluated, 1 nodes were found feasible
I0211 18:53:38.154316  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events: (2.172016ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56582]
I0211 18:53:38.156342  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events: (1.538587ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56582]
I0211 18:53:38.255010  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/rpod-0: (2.221613ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56582]
I0211 18:53:38.357845  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/rpod-1: (1.938964ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56582]
I0211 18:53:38.358260  123569 preemption_test.go:561] Creating the preemptor pod...
I0211 18:53:38.361017  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods: (2.512308ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56582]
I0211 18:53:38.361278  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/preemptor-pod
I0211 18:53:38.361303  123569 preemption_test.go:567] Creating additional pods...
I0211 18:53:38.361304  123569 scheduler.go:453] Attempting to schedule pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/preemptor-pod
I0211 18:53:38.361557  123569 factory.go:647] Unable to schedule preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0211 18:53:38.361634  123569 factory.go:742] Updating pod condition for preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0211 18:53:38.363535  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods: (1.99997ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56582]
I0211 18:53:38.363775  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/preemptor-pod: (1.193751ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56596]
I0211 18:53:38.364354  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events: (1.748552ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56598]
I0211 18:53:38.364425  123569 wrap.go:47] PUT /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/preemptor-pod/status: (2.312554ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56594]
I0211 18:53:38.366430  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods: (2.070168ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56596]
I0211 18:53:38.366527  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/preemptor-pod: (1.684411ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56598]
I0211 18:53:38.366983  123569 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0211 18:53:38.368616  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods: (1.652824ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56596]
I0211 18:53:38.369429  123569 wrap.go:47] PUT /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/preemptor-pod/status: (1.995429ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56582]
I0211 18:53:38.371880  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods: (2.371633ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56596]
I0211 18:53:38.373684  123569 wrap.go:47] DELETE /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/rpod-1: (3.779633ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56582]
I0211 18:53:38.373906  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/preemptor-pod
I0211 18:53:38.373919  123569 scheduler.go:453] Attempting to schedule pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/preemptor-pod
I0211 18:53:38.373986  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods: (1.671921ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56596]
I0211 18:53:38.374080  123569 factory.go:647] Unable to schedule preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0211 18:53:38.374124  123569 factory.go:742] Updating pod condition for preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0211 18:53:38.375475  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events: (1.371771ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56582]
I0211 18:53:38.375782  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/preemptor-pod: (1.427559ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56596]
I0211 18:53:38.376845  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods: (1.924939ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56602]
I0211 18:53:38.377255  123569 wrap.go:47] PUT /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/preemptor-pod/status: (2.433989ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56600]
I0211 18:53:38.379271  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/preemptor-pod: (1.195223ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56600]
I0211 18:53:38.379422  123569 wrap.go:47] PATCH /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events/preemptor-pod.1582640a93d5f65d: (2.510861ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56596]
I0211 18:53:38.379456  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods: (2.129482ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56602]
I0211 18:53:38.379554  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/preemptor-pod
I0211 18:53:38.379568  123569 scheduler.go:453] Attempting to schedule pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/preemptor-pod
I0211 18:53:38.379717  123569 scheduler_binder.go:269] AssumePodVolumes for pod "preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/preemptor-pod", node "node1"
I0211 18:53:38.379734  123569 scheduler_binder.go:279] AssumePodVolumes for pod "preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/preemptor-pod", node "node1": all PVCs bound and nothing to do
I0211 18:53:38.379781  123569 factory.go:733] Attempting to bind preemptor-pod to node1
I0211 18:53:38.379968  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-1
I0211 18:53:38.379990  123569 scheduler.go:453] Attempting to schedule pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-1
I0211 18:53:38.380085  123569 factory.go:647] Unable to schedule preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-1: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0211 18:53:38.380132  123569 factory.go:742] Updating pod condition for preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-1 to (PodScheduled==False, Reason=Unschedulable)
I0211 18:53:38.382146  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods: (2.189157ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56600]
I0211 18:53:38.383719  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-1: (2.811578ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56606]
I0211 18:53:38.384225  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/preemptor-pod/binding: (3.374397ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56582]
I0211 18:53:38.384375  123569 wrap.go:47] PUT /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-1/status: (3.057636ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56604]
I0211 18:53:38.384809  123569 scheduler.go:571] pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/preemptor-pod is bound successfully on node node1, 1 nodes evaluated, 1 nodes were found feasible
I0211 18:53:38.385752  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods: (2.430957ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56600]
I0211 18:53:38.386152  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-1: (1.228706ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56582]
I0211 18:53:38.386540  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events: (5.272886ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56608]
I0211 18:53:38.387143  123569 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0211 18:53:38.388050  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods: (1.593114ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56600]
I0211 18:53:38.388200  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-6
I0211 18:53:38.388248  123569 scheduler.go:453] Attempting to schedule pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-6
I0211 18:53:38.388357  123569 factory.go:647] Unable to schedule preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-6: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0211 18:53:38.388406  123569 factory.go:742] Updating pod condition for preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-6 to (PodScheduled==False, Reason=Unschedulable)
I0211 18:53:38.389583  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events: (2.016917ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56582]
I0211 18:53:38.391201  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods: (2.424213ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56606]
I0211 18:53:38.391296  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-6: (1.505268ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56582]
I0211 18:53:38.391784  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events: (1.544239ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56612]
I0211 18:53:38.391865  123569 wrap.go:47] PUT /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-6/status: (3.166427ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56600]
I0211 18:53:38.394000  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods: (2.244803ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56606]
I0211 18:53:38.394001  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-6: (1.533748ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56582]
I0211 18:53:38.394752  123569 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0211 18:53:38.394921  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-7
I0211 18:53:38.394941  123569 scheduler.go:453] Attempting to schedule pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-7
I0211 18:53:38.395057  123569 factory.go:647] Unable to schedule preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-7: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0211 18:53:38.395132  123569 factory.go:742] Updating pod condition for preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-7 to (PodScheduled==False, Reason=Unschedulable)
I0211 18:53:38.396563  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-7: (1.164451ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56582]
I0211 18:53:38.396844  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods: (2.308692ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56606]
I0211 18:53:38.397519  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events: (1.571867ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56614]
I0211 18:53:38.397589  123569 wrap.go:47] PUT /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-7/status: (2.036138ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56610]
I0211 18:53:38.399585  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-7: (1.166086ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56614]
I0211 18:53:38.399793  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods: (1.935735ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56606]
I0211 18:53:38.399884  123569 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0211 18:53:38.400076  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-11
I0211 18:53:38.400101  123569 scheduler.go:453] Attempting to schedule pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-11
I0211 18:53:38.400233  123569 factory.go:647] Unable to schedule preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-11: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0211 18:53:38.400291  123569 factory.go:742] Updating pod condition for preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-11 to (PodScheduled==False, Reason=Unschedulable)
I0211 18:53:38.402489  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-11: (1.972554ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56582]
I0211 18:53:38.402490  123569 wrap.go:47] PUT /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-11/status: (1.733006ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56616]
I0211 18:53:38.403310  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods: (2.984679ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56614]
I0211 18:53:38.403513  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events: (1.477083ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56618]
I0211 18:53:38.404764  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-11: (1.744728ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56616]
I0211 18:53:38.405070  123569 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0211 18:53:38.405275  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-13
I0211 18:53:38.405296  123569 scheduler.go:453] Attempting to schedule pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-13
I0211 18:53:38.405387  123569 factory.go:647] Unable to schedule preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-13: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0211 18:53:38.405439  123569 factory.go:742] Updating pod condition for preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-13 to (PodScheduled==False, Reason=Unschedulable)
I0211 18:53:38.406080  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods: (2.124686ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56614]
I0211 18:53:38.406992  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-13: (1.263389ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56582]
I0211 18:53:38.408449  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods: (1.864304ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56614]
I0211 18:53:38.408857  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events: (1.54713ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56620]
I0211 18:53:38.410625  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods: (1.659208ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56614]
I0211 18:53:38.411159  123569 wrap.go:47] PUT /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-13/status: (3.108527ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56616]
I0211 18:53:38.413318  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-13: (1.415033ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56582]
I0211 18:53:38.413590  123569 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0211 18:53:38.413756  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-14
I0211 18:53:38.413813  123569 scheduler.go:453] Attempting to schedule pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-14
I0211 18:53:38.413884  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods: (1.981425ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56620]
I0211 18:53:38.413960  123569 factory.go:647] Unable to schedule preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-14: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0211 18:53:38.414035  123569 factory.go:742] Updating pod condition for preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-14 to (PodScheduled==False, Reason=Unschedulable)
I0211 18:53:38.418855  123569 wrap.go:47] PUT /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-14/status: (4.484347ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56620]
I0211 18:53:38.418873  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods: (4.516132ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56582]
I0211 18:53:38.418999  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-14: (3.996581ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56622]
I0211 18:53:38.419810  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events: (3.935189ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56624]
I0211 18:53:38.420822  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-14: (1.261371ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56620]
I0211 18:53:38.421058  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods: (1.571649ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56582]
I0211 18:53:38.421081  123569 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0211 18:53:38.421327  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-17
I0211 18:53:38.421363  123569 scheduler.go:453] Attempting to schedule pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-17
I0211 18:53:38.421496  123569 factory.go:647] Unable to schedule preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-17: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0211 18:53:38.421570  123569 factory.go:742] Updating pod condition for preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-17 to (PodScheduled==False, Reason=Unschedulable)
I0211 18:53:38.422789  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-17: (1.013676ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56624]
I0211 18:53:38.423872  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events: (1.442965ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56628]
I0211 18:53:38.424045  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods: (2.449816ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56620]
I0211 18:53:38.424110  123569 wrap.go:47] PUT /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-17/status: (1.903679ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56626]
I0211 18:53:38.425633  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-17: (1.109941ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56624]
I0211 18:53:38.425880  123569 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0211 18:53:38.426028  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods: (1.501408ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56628]
I0211 18:53:38.426033  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-19
I0211 18:53:38.426204  123569 scheduler.go:453] Attempting to schedule pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-19
I0211 18:53:38.426350  123569 factory.go:647] Unable to schedule preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-19: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0211 18:53:38.426413  123569 factory.go:742] Updating pod condition for preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-19 to (PodScheduled==False, Reason=Unschedulable)
I0211 18:53:38.428211  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-19: (1.162584ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56630]
I0211 18:53:38.428387  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods: (1.835322ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56628]
I0211 18:53:38.428944  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events: (1.648037ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56632]
I0211 18:53:38.429713  123569 wrap.go:47] PUT /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-19/status: (3.011195ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56624]
I0211 18:53:38.430266  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods: (1.421564ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56628]
I0211 18:53:38.431581  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-19: (1.340363ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56632]
I0211 18:53:38.431898  123569 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0211 18:53:38.432331  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods: (1.631653ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56628]
I0211 18:53:38.432383  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-21
I0211 18:53:38.432735  123569 scheduler.go:453] Attempting to schedule pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-21
I0211 18:53:38.432899  123569 factory.go:647] Unable to schedule preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-21: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0211 18:53:38.432991  123569 factory.go:742] Updating pod condition for preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-21 to (PodScheduled==False, Reason=Unschedulable)
I0211 18:53:38.435205  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods: (1.758954ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56632]
I0211 18:53:38.435830  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-21: (1.353038ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56634]
I0211 18:53:38.436503  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events: (1.597732ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56636]
I0211 18:53:38.436740  123569 wrap.go:47] PUT /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-21/status: (3.315281ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56630]
I0211 18:53:38.437822  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods: (1.794257ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56632]
I0211 18:53:38.438363  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-21: (1.142192ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56636]
I0211 18:53:38.438677  123569 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0211 18:53:38.438824  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-24
I0211 18:53:38.438844  123569 scheduler.go:453] Attempting to schedule pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-24
I0211 18:53:38.438971  123569 factory.go:647] Unable to schedule preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-24: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0211 18:53:38.439020  123569 factory.go:742] Updating pod condition for preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-24 to (PodScheduled==False, Reason=Unschedulable)
I0211 18:53:38.440999  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods: (2.252267ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56632]
I0211 18:53:38.441008  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-24: (1.508061ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56638]
I0211 18:53:38.441216  123569 wrap.go:47] PUT /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-24/status: (1.87465ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56634]
I0211 18:53:38.441242  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events: (1.610731ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56640]
I0211 18:53:38.442996  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods: (1.422993ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56638]
I0211 18:53:38.443483  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-24: (1.410494ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56640]
I0211 18:53:38.443841  123569 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0211 18:53:38.444030  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-27
I0211 18:53:38.444049  123569 scheduler.go:453] Attempting to schedule pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-27
I0211 18:53:38.444358  123569 factory.go:647] Unable to schedule preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-27: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0211 18:53:38.444450  123569 factory.go:742] Updating pod condition for preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-27 to (PodScheduled==False, Reason=Unschedulable)
I0211 18:53:38.445422  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods: (1.982695ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56638]
I0211 18:53:38.445951  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-27: (1.162496ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56632]
I0211 18:53:38.446663  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events: (1.579434ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56642]
I0211 18:53:38.447421  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods: (1.501525ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56638]
I0211 18:53:38.447510  123569 wrap.go:47] PUT /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-27/status: (2.661737ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56640]
I0211 18:53:38.449231  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-27: (1.326364ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56632]
I0211 18:53:38.449376  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods: (1.49563ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56642]
I0211 18:53:38.449494  123569 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0211 18:53:38.449672  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-29
I0211 18:53:38.449691  123569 scheduler.go:453] Attempting to schedule pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-29
I0211 18:53:38.449833  123569 factory.go:647] Unable to schedule preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-29: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0211 18:53:38.449881  123569 factory.go:742] Updating pod condition for preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-29 to (PodScheduled==False, Reason=Unschedulable)
I0211 18:53:38.451492  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-29: (1.144962ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56644]
I0211 18:53:38.452374  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events: (2.005563ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56646]
I0211 18:53:38.452448  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods: (2.553825ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56642]
I0211 18:53:38.452761  123569 wrap.go:47] PUT /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-29/status: (2.684482ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56632]
I0211 18:53:38.454678  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-29: (1.307328ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56632]
I0211 18:53:38.454918  123569 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0211 18:53:38.455141  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-31
I0211 18:53:38.455263  123569 scheduler.go:453] Attempting to schedule pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-31
I0211 18:53:38.455369  123569 factory.go:647] Unable to schedule preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-31: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0211 18:53:38.455442  123569 factory.go:742] Updating pod condition for preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-31 to (PodScheduled==False, Reason=Unschedulable)
I0211 18:53:38.455466  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods: (2.391399ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56646]
I0211 18:53:38.456811  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-31: (1.096844ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56632]
I0211 18:53:38.457864  123569 wrap.go:47] PUT /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-31/status: (2.112315ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56644]
I0211 18:53:38.458346  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods: (2.06975ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56650]
I0211 18:53:38.458359  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events: (2.089725ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56648]
I0211 18:53:38.460339  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-31: (1.244565ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56644]
I0211 18:53:38.460681  123569 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0211 18:53:38.460836  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-33
I0211 18:53:38.460857  123569 scheduler.go:453] Attempting to schedule pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-33
I0211 18:53:38.460934  123569 factory.go:647] Unable to schedule preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-33: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0211 18:53:38.460994  123569 factory.go:742] Updating pod condition for preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-33 to (PodScheduled==False, Reason=Unschedulable)
I0211 18:53:38.462262  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-33: (1.036488ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56644]
I0211 18:53:38.462528  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods: (2.253843ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56632]
I0211 18:53:38.463073  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events: (1.500643ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56652]
I0211 18:53:38.463647  123569 wrap.go:47] PUT /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-33/status: (2.033952ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56654]
I0211 18:53:38.464665  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods: (1.712781ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56632]
I0211 18:53:38.465726  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-33: (1.615853ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56652]
I0211 18:53:38.465965  123569 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0211 18:53:38.467229  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods: (1.645197ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56632]
I0211 18:53:38.468267  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-35
I0211 18:53:38.468289  123569 scheduler.go:453] Attempting to schedule pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-35
I0211 18:53:38.468398  123569 factory.go:647] Unable to schedule preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-35: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0211 18:53:38.468447  123569 factory.go:742] Updating pod condition for preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-35 to (PodScheduled==False, Reason=Unschedulable)
I0211 18:53:38.469552  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods: (1.703501ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56652]
I0211 18:53:38.470785  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-35: (1.560043ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56656]
I0211 18:53:38.470812  123569 wrap.go:47] PUT /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-35/status: (2.091001ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56644]
I0211 18:53:38.471362  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events: (1.872715ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56658]
I0211 18:53:38.471789  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods: (1.72751ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56652]
I0211 18:53:38.472959  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-35: (1.628185ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56644]
I0211 18:53:38.473294  123569 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0211 18:53:38.473521  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-38
I0211 18:53:38.473540  123569 scheduler.go:453] Attempting to schedule pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-38
I0211 18:53:38.473698  123569 factory.go:647] Unable to schedule preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-38: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0211 18:53:38.473756  123569 factory.go:742] Updating pod condition for preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-38 to (PodScheduled==False, Reason=Unschedulable)
I0211 18:53:38.474020  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods: (1.58059ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56658]
I0211 18:53:38.475223  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-38: (1.178799ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56656]
I0211 18:53:38.475749  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events: (1.42295ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56658]
I0211 18:53:38.476311  123569 wrap.go:47] PUT /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-38/status: (2.287191ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56644]
I0211 18:53:38.476733  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods: (2.222336ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56660]
I0211 18:53:38.478480  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-38: (1.460617ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56658]
I0211 18:53:38.478707  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods: (1.541279ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56660]
I0211 18:53:38.478807  123569 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0211 18:53:38.478969  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-40
I0211 18:53:38.478994  123569 scheduler.go:453] Attempting to schedule pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-40
I0211 18:53:38.479158  123569 factory.go:647] Unable to schedule preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-40: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0211 18:53:38.479248  123569 factory.go:742] Updating pod condition for preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-40 to (PodScheduled==False, Reason=Unschedulable)
I0211 18:53:38.480871  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods: (1.633938ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56658]
I0211 18:53:38.481515  123569 wrap.go:47] PUT /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-40/status: (2.021941ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56656]
I0211 18:53:38.481676  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-40: (1.900904ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56662]
I0211 18:53:38.482266  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events: (2.346283ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56664]
I0211 18:53:38.483010  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods: (1.674437ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56658]
I0211 18:53:38.483483  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-40: (1.388662ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56662]
I0211 18:53:38.483744  123569 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0211 18:53:38.483912  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-43
I0211 18:53:38.483932  123569 scheduler.go:453] Attempting to schedule pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-43
I0211 18:53:38.484047  123569 factory.go:647] Unable to schedule preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-43: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0211 18:53:38.484095  123569 factory.go:742] Updating pod condition for preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-43 to (PodScheduled==False, Reason=Unschedulable)
I0211 18:53:38.485139  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods: (1.671398ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56664]
I0211 18:53:38.485430  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-43: (1.080646ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56656]
I0211 18:53:38.486296  123569 wrap.go:47] PUT /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-43/status: (1.939367ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56662]
I0211 18:53:38.486793  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods: (1.289366ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56664]
I0211 18:53:38.487325  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events: (1.985018ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56666]
I0211 18:53:38.488047  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-43: (1.201081ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56662]
I0211 18:53:38.488325  123569 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0211 18:53:38.488524  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-45
I0211 18:53:38.488546  123569 scheduler.go:453] Attempting to schedule pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-45
I0211 18:53:38.488711  123569 factory.go:647] Unable to schedule preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-45: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0211 18:53:38.488777  123569 factory.go:742] Updating pod condition for preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-45 to (PodScheduled==False, Reason=Unschedulable)
I0211 18:53:38.489710  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods: (2.344916ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56664]
I0211 18:53:38.490292  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-45: (1.1767ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56656]
I0211 18:53:38.491018  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events: (1.611177ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56668]
I0211 18:53:38.491670  123569 wrap.go:47] PUT /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-45/status: (2.444606ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56666]
I0211 18:53:38.491931  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods: (1.756464ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56664]
I0211 18:53:38.493474  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-45: (1.370365ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56668]
I0211 18:53:38.493908  123569 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0211 18:53:38.494083  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-47
I0211 18:53:38.494102  123569 scheduler.go:453] Attempting to schedule pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-47
I0211 18:53:38.494293  123569 factory.go:647] Unable to schedule preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-47: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0211 18:53:38.494343  123569 factory.go:742] Updating pod condition for preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-47 to (PodScheduled==False, Reason=Unschedulable)
I0211 18:53:38.495781  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-47: (1.083005ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56656]
I0211 18:53:38.496468  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events: (1.526264ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56670]
I0211 18:53:38.496785  123569 wrap.go:47] PUT /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-47/status: (2.232034ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56664]
I0211 18:53:38.498537  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-47: (1.239548ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56670]
I0211 18:53:38.498900  123569 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0211 18:53:38.499101  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-49
I0211 18:53:38.499121  123569 scheduler.go:453] Attempting to schedule pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-49
I0211 18:53:38.499238  123569 factory.go:647] Unable to schedule preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-49: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0211 18:53:38.499302  123569 factory.go:742] Updating pod condition for preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-49 to (PodScheduled==False, Reason=Unschedulable)
I0211 18:53:38.500975  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-49: (1.410141ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56656]
I0211 18:53:38.501540  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events: (1.498345ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56672]
I0211 18:53:38.502099  123569 wrap.go:47] PUT /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-49/status: (2.570769ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56670]
I0211 18:53:38.503852  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-49: (1.2072ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56672]
I0211 18:53:38.504246  123569 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0211 18:53:38.504456  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-47
I0211 18:53:38.504515  123569 scheduler.go:453] Attempting to schedule pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-47
I0211 18:53:38.504700  123569 factory.go:647] Unable to schedule preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-47: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0211 18:53:38.504815  123569 factory.go:742] Updating pod condition for preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-47 to (PodScheduled==False, Reason=Unschedulable)
I0211 18:53:38.517957  123569 wrap.go:47] PUT /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-47/status: (2.13733ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56672]
I0211 18:53:38.521333  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-47: (5.5345ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56656]
I0211 18:53:38.523008  123569 wrap.go:47] PATCH /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events/ppod-47.1582640a9bbf60a7: (7.162535ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56674]
I0211 18:53:38.523276  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-47: (1.368911ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56656]
I0211 18:53:38.524655  123569 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0211 18:53:38.524898  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-49
I0211 18:53:38.524917  123569 scheduler.go:453] Attempting to schedule pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-49
I0211 18:53:38.525083  123569 factory.go:647] Unable to schedule preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-49: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0211 18:53:38.525144  123569 factory.go:742] Updating pod condition for preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-49 to (PodScheduled==False, Reason=Unschedulable)
I0211 18:53:38.527929  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-49: (1.79378ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56672]
I0211 18:53:38.528279  123569 wrap.go:47] PUT /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-49/status: (1.669881ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56674]
I0211 18:53:38.528346  123569 wrap.go:47] PATCH /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events/ppod-49.1582640a9c0af1cd: (2.053341ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56676]
I0211 18:53:38.529940  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-49: (1.245471ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56674]
I0211 18:53:38.530247  123569 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0211 18:53:38.530431  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-45
I0211 18:53:38.530449  123569 scheduler.go:453] Attempting to schedule pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-45
I0211 18:53:38.530595  123569 factory.go:647] Unable to schedule preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-45: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0211 18:53:38.530680  123569 factory.go:742] Updating pod condition for preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-45 to (PodScheduled==False, Reason=Unschedulable)
I0211 18:53:38.533128  123569 wrap.go:47] PUT /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-45/status: (1.865628ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56674]
I0211 18:53:38.533360  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-45: (1.771394ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56672]
I0211 18:53:38.534098  123569 wrap.go:47] PATCH /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events/ppod-45.1582640a9b6a5186: (2.535009ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56678]
I0211 18:53:38.535127  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-45: (1.222844ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56672]
I0211 18:53:38.535477  123569 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0211 18:53:38.535663  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-48
I0211 18:53:38.535683  123569 scheduler.go:453] Attempting to schedule pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-48
I0211 18:53:38.535804  123569 factory.go:647] Unable to schedule preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-48: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0211 18:53:38.535863  123569 factory.go:742] Updating pod condition for preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-48 to (PodScheduled==False, Reason=Unschedulable)
I0211 18:53:38.537439  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-48: (1.324562ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56674]
I0211 18:53:38.538447  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events: (1.806339ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56680]
I0211 18:53:38.538475  123569 wrap.go:47] PUT /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-48/status: (2.400041ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56678]
I0211 18:53:38.540210  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-48: (1.227617ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56680]
I0211 18:53:38.540541  123569 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0211 18:53:38.540865  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-43
I0211 18:53:38.540885  123569 scheduler.go:453] Attempting to schedule pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-43
I0211 18:53:38.541010  123569 factory.go:647] Unable to schedule preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-43: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0211 18:53:38.541085  123569 factory.go:742] Updating pod condition for preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-43 to (PodScheduled==False, Reason=Unschedulable)
I0211 18:53:38.542814  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-43: (1.402098ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56674]
I0211 18:53:38.543807  123569 wrap.go:47] PUT /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-43/status: (2.397039ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56680]
I0211 18:53:38.545142  123569 wrap.go:47] PATCH /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events/ppod-43.1582640a9b22f376: (2.781779ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56682]
I0211 18:53:38.546257  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-43: (1.305812ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56680]
I0211 18:53:38.546598  123569 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0211 18:53:38.546819  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-48
I0211 18:53:38.546861  123569 scheduler.go:453] Attempting to schedule pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-48
I0211 18:53:38.547015  123569 factory.go:647] Unable to schedule preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-48: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0211 18:53:38.547071  123569 factory.go:742] Updating pod condition for preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-48 to (PodScheduled==False, Reason=Unschedulable)
I0211 18:53:38.548851  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-48: (1.276486ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56674]
I0211 18:53:38.549337  123569 wrap.go:47] PUT /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-48/status: (1.868197ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56682]
I0211 18:53:38.550941  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-48: (1.072248ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56682]
I0211 18:53:38.551167  123569 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0211 18:53:38.551510  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-46
I0211 18:53:38.551532  123569 scheduler.go:453] Attempting to schedule pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-46
I0211 18:53:38.551641  123569 wrap.go:47] PATCH /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events/ppod-48.1582640a9e38d9e5: (2.63223ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56684]
I0211 18:53:38.551653  123569 factory.go:647] Unable to schedule preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-46: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0211 18:53:38.551693  123569 factory.go:742] Updating pod condition for preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-46 to (PodScheduled==False, Reason=Unschedulable)
I0211 18:53:38.553063  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-46: (1.127034ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56682]
I0211 18:53:38.553866  123569 wrap.go:47] PUT /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-46/status: (1.902795ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56674]
I0211 18:53:38.554532  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events: (2.220879ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56686]
I0211 18:53:38.555899  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-46: (1.289224ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56674]
I0211 18:53:38.556258  123569 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0211 18:53:38.556570  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-40
I0211 18:53:38.556588  123569 scheduler.go:453] Attempting to schedule pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-40
I0211 18:53:38.556842  123569 factory.go:647] Unable to schedule preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-40: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0211 18:53:38.556904  123569 factory.go:742] Updating pod condition for preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-40 to (PodScheduled==False, Reason=Unschedulable)
I0211 18:53:38.558412  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-40: (1.278758ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56682]
I0211 18:53:38.559074  123569 wrap.go:47] PUT /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-40/status: (1.944878ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56686]
I0211 18:53:38.561014  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-40: (1.492856ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56686]
I0211 18:53:38.561014  123569 wrap.go:47] PATCH /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events/ppod-40.1582640a9ad8f825: (2.881636ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56688]
I0211 18:53:38.561416  123569 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0211 18:53:38.561695  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-46
I0211 18:53:38.561714  123569 scheduler.go:453] Attempting to schedule pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-46
I0211 18:53:38.561789  123569 factory.go:647] Unable to schedule preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-46: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0211 18:53:38.561839  123569 factory.go:742] Updating pod condition for preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-46 to (PodScheduled==False, Reason=Unschedulable)
I0211 18:53:38.563694  123569 wrap.go:47] PUT /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-46/status: (1.654457ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56686]
I0211 18:53:38.564417  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-46: (1.542552ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56682]
I0211 18:53:38.567086  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-46: (1.146255ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56682]
I0211 18:53:38.567430  123569 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0211 18:53:38.567630  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-44
I0211 18:53:38.567656  123569 scheduler.go:453] Attempting to schedule pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-44
I0211 18:53:38.567794  123569 factory.go:647] Unable to schedule preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-44: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0211 18:53:38.567845  123569 factory.go:742] Updating pod condition for preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-44 to (PodScheduled==False, Reason=Unschedulable)
I0211 18:53:38.568206  123569 wrap.go:47] PATCH /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events/ppod-46.1582640a9f2a7173: (3.677039ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56686]
I0211 18:53:38.570091  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events: (1.46297ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56686]
I0211 18:53:38.570259  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-44: (1.276558ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56692]
I0211 18:53:38.571116  123569 wrap.go:47] PUT /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-44/status: (2.996075ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56690]
I0211 18:53:38.572943  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-44: (1.214345ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56692]
I0211 18:53:38.573286  123569 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0211 18:53:38.573482  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-42
I0211 18:53:38.573529  123569 scheduler.go:453] Attempting to schedule pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-42
I0211 18:53:38.573735  123569 factory.go:647] Unable to schedule preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-42: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0211 18:53:38.573797  123569 factory.go:742] Updating pod condition for preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-42 to (PodScheduled==False, Reason=Unschedulable)
I0211 18:53:38.575351  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-42: (1.241173ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56686]
I0211 18:53:38.576025  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events: (1.609938ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56694]
I0211 18:53:38.576372  123569 wrap.go:47] PUT /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-42/status: (2.253929ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56692]
I0211 18:53:38.578004  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-42: (1.176643ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56694]
I0211 18:53:38.578294  123569 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0211 18:53:38.578460  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-38
I0211 18:53:38.578480  123569 scheduler.go:453] Attempting to schedule pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-38
I0211 18:53:38.578585  123569 factory.go:647] Unable to schedule preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-38: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0211 18:53:38.578664  123569 factory.go:742] Updating pod condition for preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-38 to (PodScheduled==False, Reason=Unschedulable)
I0211 18:53:38.580055  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-38: (1.128676ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56686]
I0211 18:53:38.580729  123569 wrap.go:47] PUT /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-38/status: (1.815837ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56694]
I0211 18:53:38.582305  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-38: (1.128486ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56694]
I0211 18:53:38.582385  123569 wrap.go:47] PATCH /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events/ppod-38.1582640a9a85346d: (2.596563ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56696]
I0211 18:53:38.582709  123569 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0211 18:53:38.582960  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-42
I0211 18:53:38.583014  123569 scheduler.go:453] Attempting to schedule pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-42
I0211 18:53:38.583199  123569 factory.go:647] Unable to schedule preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-42: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0211 18:53:38.583259  123569 factory.go:742] Updating pod condition for preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-42 to (PodScheduled==False, Reason=Unschedulable)
I0211 18:53:38.585003  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-42: (1.153988ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56686]
I0211 18:53:38.585448  123569 wrap.go:47] PUT /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-42/status: (1.961598ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56694]
I0211 18:53:38.586923  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-42: (1.035526ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56694]
I0211 18:53:38.587088  123569 wrap.go:47] PATCH /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events/ppod-42.1582640aa07baca8: (2.316843ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56698]
I0211 18:53:38.587220  123569 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0211 18:53:38.587363  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-41
I0211 18:53:38.587381  123569 scheduler.go:453] Attempting to schedule pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-41
I0211 18:53:38.587498  123569 factory.go:647] Unable to schedule preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-41: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0211 18:53:38.587552  123569 factory.go:742] Updating pod condition for preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-41 to (PodScheduled==False, Reason=Unschedulable)
I0211 18:53:38.589096  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-41: (1.266585ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56686]
I0211 18:53:38.589888  123569 wrap.go:47] PUT /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-41/status: (2.071278ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56694]
I0211 18:53:38.608162  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events: (19.926679ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56700]
I0211 18:53:38.608714  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/preemptor-pod: (16.122459ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56686]
I0211 18:53:38.608768  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-41: (18.416481ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56694]
I0211 18:53:38.609123  123569 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0211 18:53:38.609383  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-39
I0211 18:53:38.609437  123569 scheduler.go:453] Attempting to schedule pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-39
I0211 18:53:38.609629  123569 factory.go:647] Unable to schedule preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-39: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0211 18:53:38.609741  123569 factory.go:742] Updating pod condition for preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-39 to (PodScheduled==False, Reason=Unschedulable)
I0211 18:53:38.611927  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events: (1.801144ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56694]
I0211 18:53:38.612042  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-39: (1.88955ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56686]
I0211 18:53:38.612581  123569 wrap.go:47] PUT /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-39/status: (2.193619ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56702]
I0211 18:53:38.614615  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-39: (1.431917ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56686]
I0211 18:53:38.615013  123569 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0211 18:53:38.615251  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-41
I0211 18:53:38.615268  123569 scheduler.go:453] Attempting to schedule pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-41
I0211 18:53:38.615398  123569 factory.go:647] Unable to schedule preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-41: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0211 18:53:38.615453  123569 factory.go:742] Updating pod condition for preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-41 to (PodScheduled==False, Reason=Unschedulable)
I0211 18:53:38.617251  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-41: (1.486238ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56694]
I0211 18:53:38.617843  123569 wrap.go:47] PUT /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-41/status: (2.112928ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56686]
I0211 18:53:38.619755  123569 wrap.go:47] PATCH /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events/ppod-41.1582640aa14d99e7: (3.074228ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56704]
I0211 18:53:38.619824  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-41: (1.517861ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56686]
I0211 18:53:38.620167  123569 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0211 18:53:38.620375  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-39
I0211 18:53:38.620399  123569 scheduler.go:453] Attempting to schedule pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-39
I0211 18:53:38.620489  123569 factory.go:647] Unable to schedule preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-39: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0211 18:53:38.620655  123569 factory.go:742] Updating pod condition for preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-39 to (PodScheduled==False, Reason=Unschedulable)
I0211 18:53:38.622700  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-39: (1.417539ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56694]
I0211 18:53:38.623539  123569 wrap.go:47] PUT /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-39/status: (2.657865ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56704]
I0211 18:53:38.624905  123569 wrap.go:47] PATCH /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events/ppod-39.1582640aa2a01a3b: (3.187655ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56706]
I0211 18:53:38.625488  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-39: (1.510059ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56704]
I0211 18:53:38.625823  123569 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0211 18:53:38.626055  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-37
I0211 18:53:38.626102  123569 scheduler.go:453] Attempting to schedule pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-37
I0211 18:53:38.626276  123569 factory.go:647] Unable to schedule preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-37: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0211 18:53:38.626338  123569 factory.go:742] Updating pod condition for preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-37 to (PodScheduled==False, Reason=Unschedulable)
I0211 18:53:38.628141  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-37: (1.516288ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56694]
I0211 18:53:38.628623  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events: (1.470284ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56708]
I0211 18:53:38.640053  123569 wrap.go:47] PUT /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-37/status: (13.378777ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56706]
I0211 18:53:38.642523  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-37: (1.799723ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56708]
I0211 18:53:38.642851  123569 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0211 18:53:38.643080  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-33
I0211 18:53:38.643095  123569 scheduler.go:453] Attempting to schedule pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-33
I0211 18:53:38.643238  123569 factory.go:647] Unable to schedule preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-33: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0211 18:53:38.643306  123569 factory.go:742] Updating pod condition for preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-33 to (PodScheduled==False, Reason=Unschedulable)
I0211 18:53:38.645140  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-33: (1.430762ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56694]
I0211 18:53:38.645739  123569 wrap.go:47] PUT /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-33/status: (2.143649ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56708]
I0211 18:53:38.647457  123569 wrap.go:47] PATCH /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events/ppod-33.1582640a99c27515: (2.773261ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56710]
I0211 18:53:38.647656  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-33: (1.475149ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56708]
I0211 18:53:38.648031  123569 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0211 18:53:38.648220  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-37
I0211 18:53:38.648241  123569 scheduler.go:453] Attempting to schedule pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-37
I0211 18:53:38.648338  123569 factory.go:647] Unable to schedule preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-37: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0211 18:53:38.648387  123569 factory.go:742] Updating pod condition for preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-37 to (PodScheduled==False, Reason=Unschedulable)
I0211 18:53:38.650021  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-37: (1.227635ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56694]
I0211 18:53:38.650423  123569 wrap.go:47] PUT /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-37/status: (1.742553ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56710]
I0211 18:53:38.652251  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-37: (1.24415ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56710]
I0211 18:53:38.652758  123569 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0211 18:53:38.652939  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-36
I0211 18:53:38.652963  123569 scheduler.go:453] Attempting to schedule pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-36
I0211 18:53:38.653061  123569 factory.go:647] Unable to schedule preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-36: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0211 18:53:38.653094  123569 wrap.go:47] PATCH /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events/ppod-37.1582640aa39d695a: (3.41086ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56712]
I0211 18:53:38.653117  123569 factory.go:742] Updating pod condition for preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-36 to (PodScheduled==False, Reason=Unschedulable)
I0211 18:53:38.654693  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-36: (1.289827ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56694]
I0211 18:53:38.655451  123569 wrap.go:47] PUT /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-36/status: (2.0706ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56710]
I0211 18:53:38.655939  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events: (1.932988ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56714]
I0211 18:53:38.657998  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-36: (1.507484ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56710]
I0211 18:53:38.658266  123569 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0211 18:53:38.658438  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-31
I0211 18:53:38.658457  123569 scheduler.go:453] Attempting to schedule pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-31
I0211 18:53:38.658538  123569 factory.go:647] Unable to schedule preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-31: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0211 18:53:38.658592  123569 factory.go:742] Updating pod condition for preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-31 to (PodScheduled==False, Reason=Unschedulable)
I0211 18:53:38.660428  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-31: (1.507794ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56694]
I0211 18:53:38.660836  123569 wrap.go:47] PUT /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-31/status: (1.902322ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56710]
I0211 18:53:38.669385  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-31: (1.53648ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56710]
I0211 18:53:38.669695  123569 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0211 18:53:38.670052  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-36
I0211 18:53:38.670065  123569 scheduler.go:453] Attempting to schedule pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-36
I0211 18:53:38.670157  123569 factory.go:647] Unable to schedule preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-36: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0211 18:53:38.670229  123569 factory.go:742] Updating pod condition for preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-36 to (PodScheduled==False, Reason=Unschedulable)
I0211 18:53:38.671137  123569 wrap.go:47] PATCH /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events/ppod-31.1582640a996dbd0a: (11.614538ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56716]
I0211 18:53:38.671981  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-36: (1.375334ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56694]
I0211 18:53:38.672691  123569 wrap.go:47] PUT /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-36/status: (2.031134ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56710]
I0211 18:53:38.674439  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-36: (1.127121ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56710]
I0211 18:53:38.674747  123569 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0211 18:53:38.674791  123569 wrap.go:47] PATCH /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events/ppod-36.1582640aa535efd4: (3.021835ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56716]
I0211 18:53:38.674900  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-34
I0211 18:53:38.674922  123569 scheduler.go:453] Attempting to schedule pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-34
I0211 18:53:38.675038  123569 factory.go:647] Unable to schedule preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-34: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0211 18:53:38.675089  123569 factory.go:742] Updating pod condition for preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-34 to (PodScheduled==False, Reason=Unschedulable)
I0211 18:53:38.677932  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events: (2.088275ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56718]
I0211 18:53:38.678569  123569 wrap.go:47] PUT /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-34/status: (3.240974ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56710]
I0211 18:53:38.678920  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-34: (3.09706ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56694]
I0211 18:53:38.689887  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-34: (1.991367ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56694]
I0211 18:53:38.690212  123569 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0211 18:53:38.690392  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-29
I0211 18:53:38.690409  123569 scheduler.go:453] Attempting to schedule pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-29
I0211 18:53:38.690505  123569 factory.go:647] Unable to schedule preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-29: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0211 18:53:38.690544  123569 factory.go:742] Updating pod condition for preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-29 to (PodScheduled==False, Reason=Unschedulable)
I0211 18:53:38.693341  123569 wrap.go:47] PUT /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-29/status: (1.801324ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56694]
I0211 18:53:38.693806  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-29: (2.334837ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56718]
I0211 18:53:38.695911  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-29: (1.59801ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56694]
I0211 18:53:38.696351  123569 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0211 18:53:38.696818  123569 wrap.go:47] PATCH /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events/ppod-29.1582640a9918ecf9: (2.60975ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56718]
I0211 18:53:38.697353  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-32
I0211 18:53:38.697372  123569 scheduler.go:453] Attempting to schedule pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-32
I0211 18:53:38.697498  123569 factory.go:647] Unable to schedule preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-32: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0211 18:53:38.697559  123569 factory.go:742] Updating pod condition for preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-32 to (PodScheduled==False, Reason=Unschedulable)
I0211 18:53:38.700523  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events: (2.138952ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56722]
I0211 18:53:38.700946  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-32: (2.278041ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56720]
I0211 18:53:38.701435  123569 wrap.go:47] PUT /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-32/status: (3.576728ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56694]
I0211 18:53:38.707869  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-32: (2.095037ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56720]
I0211 18:53:38.708162  123569 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0211 18:53:38.708405  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-30
I0211 18:53:38.708418  123569 scheduler.go:453] Attempting to schedule pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-30
I0211 18:53:38.708510  123569 factory.go:647] Unable to schedule preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-30: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0211 18:53:38.708557  123569 factory.go:742] Updating pod condition for preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-30 to (PodScheduled==False, Reason=Unschedulable)
I0211 18:53:38.711194  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-30: (1.453854ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56722]
I0211 18:53:38.712973  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events: (2.040007ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56724]
I0211 18:53:38.713694  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/preemptor-pod: (2.210916ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56722]
I0211 18:53:38.713914  123569 preemption_test.go:583] Check unschedulable pods still exists and were never scheduled...
I0211 18:53:38.713980  123569 wrap.go:47] PUT /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-30/status: (4.114276ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56720]
I0211 18:53:38.716090  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-30: (1.234221ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56720]
I0211 18:53:38.716127  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-0: (2.005058ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56722]
I0211 18:53:38.716316  123569 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0211 18:53:38.716835  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-32
I0211 18:53:38.716849  123569 scheduler.go:453] Attempting to schedule pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-32
I0211 18:53:38.716932  123569 factory.go:647] Unable to schedule preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-32: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0211 18:53:38.716968  123569 factory.go:742] Updating pod condition for preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-32 to (PodScheduled==False, Reason=Unschedulable)
I0211 18:53:38.718110  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-1: (1.627989ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56720]
I0211 18:53:38.720312  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-2: (1.861042ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56720]
I0211 18:53:38.720837  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-32: (2.604173ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56724]
I0211 18:53:38.724615  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-3: (2.02296ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56720]
I0211 18:53:38.728203  123569 wrap.go:47] PATCH /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events/ppod-32.1582640aa7dc0871: (6.433784ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56728]
I0211 18:53:38.728757  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-4: (3.779453ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56720]
I0211 18:53:38.731343  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-5: (1.317688ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56720]
I0211 18:53:38.733454  123569 wrap.go:47] PUT /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-32/status: (12.510821ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56726]
I0211 18:53:38.735398  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-6: (3.575547ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56728]
I0211 18:53:38.736711  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-32: (2.880182ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56726]
I0211 18:53:38.736916  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-7: (1.088296ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56728]
I0211 18:53:38.737025  123569 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0211 18:53:38.737595  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-30
I0211 18:53:38.737626  123569 scheduler.go:453] Attempting to schedule pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-30
I0211 18:53:38.737761  123569 factory.go:647] Unable to schedule preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-30: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0211 18:53:38.737806  123569 factory.go:742] Updating pod condition for preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-30 to (PodScheduled==False, Reason=Unschedulable)
I0211 18:53:38.741313  123569 wrap.go:47] PUT /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-30/status: (2.419099ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56724]
I0211 18:53:38.741542  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-8: (2.642717ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56726]
I0211 18:53:38.743242  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-30: (1.349254ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56726]
I0211 18:53:38.743499  123569 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0211 18:53:38.743979  123569 wrap.go:47] PATCH /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events/ppod-30.1582640aa883e1f7: (4.373765ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56732]
I0211 18:53:38.744030  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-30: (1.649046ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56724]
I0211 18:53:38.744465  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-27
I0211 18:53:38.744480  123569 scheduler.go:453] Attempting to schedule pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-27
I0211 18:53:38.744571  123569 factory.go:647] Unable to schedule preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-27: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0211 18:53:38.744622  123569 factory.go:742] Updating pod condition for preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-27 to (PodScheduled==False, Reason=Unschedulable)
I0211 18:53:38.745308  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-9: (1.974325ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56734]
I0211 18:53:38.748162  123569 wrap.go:47] PUT /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-27/status: (2.400293ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56724]
I0211 18:53:38.748710  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-27: (3.213795ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56726]
I0211 18:53:38.749065  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-10: (2.685853ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56738]
I0211 18:53:38.750292  123569 wrap.go:47] PATCH /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events/ppod-27.1582640a98c5feed: (4.504281ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56734]
I0211 18:53:38.751440  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-27: (1.697668ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56730]
I0211 18:53:38.751990  123569 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0211 18:53:38.752417  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-11: (1.962286ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56738]
I0211 18:53:38.753497  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-28
I0211 18:53:38.753547  123569 scheduler.go:453] Attempting to schedule pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-28
I0211 18:53:38.753725  123569 factory.go:647] Unable to schedule preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-28: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0211 18:53:38.753820  123569 factory.go:742] Updating pod condition for preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-28 to (PodScheduled==False, Reason=Unschedulable)
I0211 18:53:38.756490  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-28: (2.241221ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56736]
I0211 18:53:38.757261  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-12: (4.146935ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56726]
I0211 18:53:38.759745  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events: (1.949497ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56742]
I0211 18:53:38.760146  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-13: (2.454453ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56726]
I0211 18:53:38.760628  123569 wrap.go:47] PUT /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-28/status: (4.027291ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56740]
I0211 18:53:38.763841  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-14: (3.090606ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56742]
I0211 18:53:38.837341  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-15: (72.808813ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56742]
I0211 18:53:38.837880  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-28: (76.884999ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56740]
I0211 18:53:38.838229  123569 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0211 18:53:38.838430  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-24
I0211 18:53:38.838467  123569 scheduler.go:453] Attempting to schedule pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-24
I0211 18:53:38.838641  123569 factory.go:647] Unable to schedule preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-24: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0211 18:53:38.838706  123569 factory.go:742] Updating pod condition for preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-24 to (PodScheduled==False, Reason=Unschedulable)
I0211 18:53:38.840093  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-16: (1.536223ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56740]
I0211 18:53:38.840527  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-24: (1.208352ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56744]
I0211 18:53:38.841580  123569 wrap.go:47] PUT /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-24/status: (2.629008ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56736]
I0211 18:53:38.841847  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-17: (1.150365ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56740]
I0211 18:53:38.842384  123569 wrap.go:47] PATCH /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events/ppod-24.1582640a98732da0: (2.672329ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56746]
I0211 18:53:38.843008  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-24: (974.479µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56736]
I0211 18:53:38.843328  123569 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0211 18:53:38.843481  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-28
I0211 18:53:38.843516  123569 scheduler.go:453] Attempting to schedule pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-28
I0211 18:53:38.843491  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-18: (1.240032ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56740]
I0211 18:53:38.843660  123569 factory.go:647] Unable to schedule preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-28: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0211 18:53:38.843704  123569 factory.go:742] Updating pod condition for preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-28 to (PodScheduled==False, Reason=Unschedulable)
I0211 18:53:38.845121  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-28: (1.232763ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56746]
I0211 18:53:38.845267  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-19: (1.149887ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56740]
I0211 18:53:38.846712  123569 wrap.go:47] PUT /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-28/status: (2.505342ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56744]
I0211 18:53:38.847382  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-20: (1.341937ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56740]
I0211 18:53:38.848268  123569 wrap.go:47] PATCH /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events/ppod-28.1582640aab36678e: (3.75876ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56748]
I0211 18:53:38.848926  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-28: (1.041169ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56744]
I0211 18:53:38.849487  123569 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0211 18:53:38.849833  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-21
I0211 18:53:38.849855  123569 scheduler.go:453] Attempting to schedule pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-21
I0211 18:53:38.849986  123569 factory.go:647] Unable to schedule preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-21: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0211 18:53:38.850041  123569 factory.go:742] Updating pod condition for preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-21 to (PodScheduled==False, Reason=Unschedulable)
I0211 18:53:38.850071  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-21: (1.997334ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56740]
I0211 18:53:38.851879  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-21: (1.40174ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56740]
I0211 18:53:38.852336  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-22: (1.048467ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56746]
I0211 18:53:38.852951  123569 wrap.go:47] PUT /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-21/status: (2.180942ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56748]
I0211 18:53:38.854228  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-23: (1.458212ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56746]
I0211 18:53:38.854562  123569 wrap.go:47] PATCH /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events/ppod-21.1582640a98170406: (3.439262ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56750]
I0211 18:53:38.855922  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-21: (1.859021ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56748]
I0211 18:53:38.856331  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-24: (1.686641ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56746]
I0211 18:53:38.856799  123569 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0211 18:53:38.856965  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-26
I0211 18:53:38.856983  123569 scheduler.go:453] Attempting to schedule pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-26
I0211 18:53:38.857069  123569 factory.go:647] Unable to schedule preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-26: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0211 18:53:38.857126  123569 factory.go:742] Updating pod condition for preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-26 to (PodScheduled==False, Reason=Unschedulable)
I0211 18:53:38.858117  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-25: (1.356218ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56750]
I0211 18:53:38.859713  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events: (1.656353ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56754]
I0211 18:53:38.860048  123569 wrap.go:47] PUT /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-26/status: (2.004893ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56752]
I0211 18:53:38.860069  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-26: (2.623188ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56740]
I0211 18:53:38.862011  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-26: (1.091346ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56740]
I0211 18:53:38.862480  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-26: (1.54273ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56754]
I0211 18:53:38.862486  123569 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0211 18:53:38.862664  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-25
I0211 18:53:38.862685  123569 scheduler.go:453] Attempting to schedule pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-25
I0211 18:53:38.862787  123569 factory.go:647] Unable to schedule preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-25: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0211 18:53:38.862883  123569 factory.go:742] Updating pod condition for preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-25 to (PodScheduled==False, Reason=Unschedulable)
I0211 18:53:38.863976  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-27: (1.146248ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56754]
I0211 18:53:38.865540  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-25: (1.814643ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56756]
I0211 18:53:38.866095  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-28: (1.6943ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56754]
I0211 18:53:38.866477  123569 wrap.go:47] PUT /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-25/status: (3.321003ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56740]
I0211 18:53:38.868330  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-29: (1.626372ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56754]
I0211 18:53:38.868336  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-25: (1.284881ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56740]
I0211 18:53:38.868344  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events: (4.63922ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56758]
I0211 18:53:38.868785  123569 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0211 18:53:38.868944  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-26
I0211 18:53:38.868958  123569 scheduler.go:453] Attempting to schedule pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-26
I0211 18:53:38.869155  123569 factory.go:647] Unable to schedule preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-26: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0211 18:53:38.869259  123569 factory.go:742] Updating pod condition for preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-26 to (PodScheduled==False, Reason=Unschedulable)
I0211 18:53:38.870549  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-30: (1.696777ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56754]
I0211 18:53:38.872325  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-31: (1.39922ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56754]
I0211 18:53:38.873496  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-26: (1.503018ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56762]
I0211 18:53:38.874426  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-32: (1.173638ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56754]
I0211 18:53:38.875041  123569 wrap.go:47] PATCH /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events/ppod-26.1582640ab15ef464: (5.009535ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56760]
I0211 18:53:38.876077  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-33: (1.213321ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56754]
I0211 18:53:38.877402  123569 wrap.go:47] PUT /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-26/status: (7.489283ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56756]
I0211 18:53:38.877495  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-34: (1.009692ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56760]
I0211 18:53:38.879212  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-35: (1.250846ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56762]
I0211 18:53:38.879572  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-26: (1.798318ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56756]
I0211 18:53:38.879798  123569 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0211 18:53:38.879952  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-25
I0211 18:53:38.879990  123569 scheduler.go:453] Attempting to schedule pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-25
I0211 18:53:38.880085  123569 factory.go:647] Unable to schedule preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-25: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0211 18:53:38.880149  123569 factory.go:742] Updating pod condition for preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-25 to (PodScheduled==False, Reason=Unschedulable)
I0211 18:53:38.881340  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-36: (1.711277ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56762]
I0211 18:53:38.883070  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-25: (1.322318ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56764]
I0211 18:53:38.883070  123569 wrap.go:47] PUT /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-25/status: (2.408737ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56756]
I0211 18:53:38.884048  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-37: (1.793182ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56762]
I0211 18:53:38.885527  123569 wrap.go:47] PATCH /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events/ppod-25.1582640ab1b6ceea: (3.807821ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56766]
I0211 18:53:38.885994  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-38: (1.437114ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56756]
I0211 18:53:38.886384  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-25: (1.465604ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56764]
I0211 18:53:38.886693  123569 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0211 18:53:38.886849  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-19
I0211 18:53:38.886868  123569 scheduler.go:453] Attempting to schedule pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-19
I0211 18:53:38.886939  123569 factory.go:647] Unable to schedule preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-19: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0211 18:53:38.886989  123569 factory.go:742] Updating pod condition for preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-19 to (PodScheduled==False, Reason=Unschedulable)
I0211 18:53:38.888247  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-39: (1.735746ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56756]
I0211 18:53:38.889046  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-19: (1.452893ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56766]
I0211 18:53:38.890472  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-40: (1.688909ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56756]
I0211 18:53:38.891509  123569 wrap.go:47] PUT /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-19/status: (3.91648ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56764]
I0211 18:53:38.893637  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-19: (1.818177ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56764]
I0211 18:53:38.893642  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-41: (2.627117ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56756]
I0211 18:53:38.893921  123569 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0211 18:53:38.894085  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-23
I0211 18:53:38.894099  123569 scheduler.go:453] Attempting to schedule pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-23
I0211 18:53:38.894198  123569 factory.go:647] Unable to schedule preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-23: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0211 18:53:38.894240  123569 factory.go:742] Updating pod condition for preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-23 to (PodScheduled==False, Reason=Unschedulable)
I0211 18:53:38.895615  123569 wrap.go:47] PATCH /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events/ppod-19.1582640a97b2c888: (7.228487ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56768]
I0211 18:53:38.895617  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-23: (1.189537ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56766]
I0211 18:53:38.896038  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-42: (2.016081ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56756]
I0211 18:53:38.898165  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-43: (1.29212ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56766]
I0211 18:53:38.898464  123569 wrap.go:47] PUT /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-23/status: (3.568581ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56770]
I0211 18:53:38.899140  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events: (2.285797ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56756]
I0211 18:53:38.900046  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-44: (1.375231ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56766]
I0211 18:53:38.900465  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-23: (1.567252ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56770]
I0211 18:53:38.900753  123569 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0211 18:53:38.910276  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-22
I0211 18:53:38.910348  123569 scheduler.go:453] Attempting to schedule pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-22
I0211 18:53:38.910723  123569 factory.go:647] Unable to schedule preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-22: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0211 18:53:38.910825  123569 factory.go:742] Updating pod condition for preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-22 to (PodScheduled==False, Reason=Unschedulable)
I0211 18:53:38.913641  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-45: (12.703848ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56756]
I0211 18:53:38.914705  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-22: (2.213957ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56768]
I0211 18:53:38.915809  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events: (2.661674ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56774]
I0211 18:53:38.915909  123569 wrap.go:47] PUT /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-22/status: (3.435769ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56772]
I0211 18:53:38.916446  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-46: (1.544674ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56756]
I0211 18:53:38.917653  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-22: (1.167197ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56774]
I0211 18:53:38.917944  123569 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0211 18:53:38.918009  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-47: (1.136825ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56756]
I0211 18:53:38.918221  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-23
I0211 18:53:38.918240  123569 scheduler.go:453] Attempting to schedule pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-23
I0211 18:53:38.918310  123569 factory.go:647] Unable to schedule preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-23: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0211 18:53:38.918346  123569 factory.go:742] Updating pod condition for preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-23 to (PodScheduled==False, Reason=Unschedulable)
I0211 18:53:38.920654  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-48: (2.052764ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56774]
I0211 18:53:38.920784  123569 wrap.go:47] PUT /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-23/status: (2.077347ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56768]
I0211 18:53:38.921746  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-23: (2.494899ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56776]
I0211 18:53:38.922778  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-49: (1.664634ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56774]
I0211 18:53:38.922954  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-23: (1.546679ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56768]
I0211 18:53:38.923322  123569 preemption_test.go:598] Cleaning up all pods...
I0211 18:53:38.923431  123569 wrap.go:47] PATCH /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events/ppod-23.1582640ab3954a43: (2.729987ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56778]
I0211 18:53:38.923696  123569 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0211 18:53:38.923969  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-22
I0211 18:53:38.923981  123569 scheduler.go:453] Attempting to schedule pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-22
I0211 18:53:38.924052  123569 factory.go:647] Unable to schedule preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-22: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0211 18:53:38.924094  123569 factory.go:742] Updating pod condition for preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-22 to (PodScheduled==False, Reason=Unschedulable)
I0211 18:53:38.928451  123569 wrap.go:47] PUT /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-22/status: (4.039453ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56778]
I0211 18:53:38.930845  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-22: (1.790807ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56778]
I0211 18:53:38.931146  123569 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0211 18:53:38.931290  123569 wrap.go:47] PATCH /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events/ppod-22.1582640ab4925255: (6.313809ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56776]
I0211 18:53:38.931409  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-17
I0211 18:53:38.931477  123569 scheduler.go:453] Attempting to schedule pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-17
I0211 18:53:38.931683  123569 factory.go:647] Unable to schedule preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-17: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0211 18:53:38.931776  123569 factory.go:742] Updating pod condition for preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-17 to (PodScheduled==False, Reason=Unschedulable)
I0211 18:53:38.931807  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-22: (3.404014ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56780]
I0211 18:53:38.934636  123569 wrap.go:47] DELETE /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-0: (8.634558ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56768]
I0211 18:53:38.934810  123569 wrap.go:47] PUT /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-17/status: (2.442348ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56778]
I0211 18:53:38.935992  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-17: (4.041481ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56776]
I0211 18:53:38.937945  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-17: (1.515949ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56778]
I0211 18:53:38.938315  123569 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0211 18:53:38.938643  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-20
I0211 18:53:38.938676  123569 scheduler.go:453] Attempting to schedule pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-20
I0211 18:53:38.938848  123569 factory.go:647] Unable to schedule preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-20: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0211 18:53:38.938953  123569 factory.go:742] Updating pod condition for preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-20 to (PodScheduled==False, Reason=Unschedulable)
I0211 18:53:38.939412  123569 wrap.go:47] PATCH /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events/ppod-17.1582640a9768b74d: (6.605663ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56780]
I0211 18:53:38.940660  123569 wrap.go:47] DELETE /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-1: (5.324441ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56768]
I0211 18:53:38.942052  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events: (1.862972ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56780]
I0211 18:53:38.942447  123569 wrap.go:47] PUT /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-20/status: (3.180815ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56782]
I0211 18:53:38.943153  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-20: (3.997982ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56776]
I0211 18:53:38.944413  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-20: (1.062047ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56782]
I0211 18:53:38.945038  123569 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0211 18:53:38.945247  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-16
I0211 18:53:38.945268  123569 scheduler.go:453] Attempting to schedule pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-16
I0211 18:53:38.945441  123569 factory.go:647] Unable to schedule preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-16: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0211 18:53:38.945560  123569 factory.go:742] Updating pod condition for preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-16 to (PodScheduled==False, Reason=Unschedulable)
I0211 18:53:38.946950  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-16: (1.165391ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56776]
I0211 18:53:38.948260  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events: (1.754996ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56784]
I0211 18:53:38.948260  123569 wrap.go:47] PUT /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-16/status: (2.148096ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56780]
I0211 18:53:38.948864  123569 wrap.go:47] DELETE /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-2: (7.36174ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56768]
I0211 18:53:38.949957  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-16: (1.264664ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56784]
I0211 18:53:38.950226  123569 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0211 18:53:38.950467  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-20
I0211 18:53:38.950500  123569 scheduler.go:453] Attempting to schedule pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-20
I0211 18:53:38.950679  123569 factory.go:647] Unable to schedule preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-20: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0211 18:53:38.950755  123569 factory.go:742] Updating pod condition for preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-20 to (PodScheduled==False, Reason=Unschedulable)
I0211 18:53:38.953021  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-20: (1.577413ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56784]
I0211 18:53:38.955254  123569 wrap.go:47] PATCH /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events/ppod-20.1582640ab63f8916: (3.394724ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56776]
I0211 18:53:38.955497  123569 wrap.go:47] DELETE /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-3: (5.426374ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56768]
I0211 18:53:38.957487  123569 wrap.go:47] PUT /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-20/status: (3.890507ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56786]
I0211 18:53:38.959454  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-20: (1.256259ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56786]
I0211 18:53:38.959756  123569 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0211 18:53:38.960463  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-16
I0211 18:53:38.960525  123569 scheduler.go:453] Attempting to schedule pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-16
I0211 18:53:38.960915  123569 wrap.go:47] DELETE /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-4: (5.047828ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56776]
I0211 18:53:38.961852  123569 factory.go:647] Unable to schedule preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-16: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0211 18:53:38.963158  123569 factory.go:742] Updating pod condition for preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-16 to (PodScheduled==False, Reason=Unschedulable)
I0211 18:53:38.963770  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-16: (1.501073ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56784]
I0211 18:53:38.965894  123569 wrap.go:47] PATCH /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events/ppod-16.1582640ab6a45be4: (3.11502ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56788]
I0211 18:53:38.967501  123569 wrap.go:47] PUT /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-16/status: (2.146662ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56784]
I0211 18:53:38.968491  123569 wrap.go:47] DELETE /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-5: (7.157633ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56786]
I0211 18:53:38.969460  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-16: (1.24018ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56784]
I0211 18:53:38.969775  123569 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0211 18:53:38.970046  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-11
I0211 18:53:38.970098  123569 scheduler.go:453] Attempting to schedule pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-11
I0211 18:53:38.970249  123569 factory.go:647] Unable to schedule preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-11: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0211 18:53:38.970338  123569 factory.go:742] Updating pod condition for preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-11 to (PodScheduled==False, Reason=Unschedulable)
I0211 18:53:38.972156  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-11: (1.447207ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56784]
I0211 18:53:38.974647  123569 wrap.go:47] DELETE /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-6: (5.699739ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56786]
I0211 18:53:38.974950  123569 wrap.go:47] PUT /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-11/status: (3.150708ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56788]
I0211 18:53:38.976649  123569 wrap.go:47] PATCH /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events/ppod-11.1582640a96243b30: (4.308381ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56790]
I0211 18:53:38.976834  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-11: (1.465242ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56788]
I0211 18:53:38.977247  123569 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0211 18:53:38.977443  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-10
I0211 18:53:38.977463  123569 scheduler.go:453] Attempting to schedule pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-10
I0211 18:53:38.977543  123569 factory.go:647] Unable to schedule preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-10: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0211 18:53:38.977596  123569 factory.go:742] Updating pod condition for preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-10 to (PodScheduled==False, Reason=Unschedulable)
I0211 18:53:38.980462  123569 wrap.go:47] PUT /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-10/status: (2.57764ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56790]
I0211 18:53:38.980545  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events: (1.442743ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56792]
I0211 18:53:38.980619  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-10: (2.059976ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56784]
I0211 18:53:38.981781  123569 wrap.go:47] DELETE /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-7: (6.791863ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56786]
I0211 18:53:38.982360  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-10: (1.26481ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56790]
I0211 18:53:38.982684  123569 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0211 18:53:38.982904  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-9
I0211 18:53:38.982960  123569 scheduler.go:453] Attempting to schedule pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-9
I0211 18:53:38.983117  123569 factory.go:647] Unable to schedule preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-9: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0211 18:53:38.983226  123569 factory.go:742] Updating pod condition for preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-9 to (PodScheduled==False, Reason=Unschedulable)
I0211 18:53:38.985029  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-9: (1.582304ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56790]
I0211 18:53:38.986362  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events: (1.603473ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56794]
I0211 18:53:38.986645  123569 wrap.go:47] DELETE /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-8: (4.489011ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56786]
I0211 18:53:38.987732  123569 wrap.go:47] PUT /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-9/status: (3.958426ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56792]
I0211 18:53:38.989239  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-9: (1.15099ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56792]
I0211 18:53:38.989723  123569 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0211 18:53:38.989896  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-9
I0211 18:53:38.989948  123569 scheduler.go:453] Attempting to schedule pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-9
I0211 18:53:38.990143  123569 factory.go:647] Unable to schedule preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-9: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0211 18:53:38.990235  123569 factory.go:742] Updating pod condition for preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-9 to (PodScheduled==False, Reason=Unschedulable)
I0211 18:53:38.991369  123569 wrap.go:47] DELETE /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-9: (4.000307ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56794]
I0211 18:53:38.991938  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-9: (1.395967ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56790]
W0211 18:53:38.992186  123569 factory.go:696] A pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-9 no longer exists
I0211 18:53:38.992443  123569 wrap.go:47] PUT /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-9/status: (1.293947ms) 409 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56792]
I0211 18:53:38.995533  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-9: (2.094793ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56792]
E0211 18:53:38.995819  123569 scheduler.go:294] Error getting the updated preemptor pod object: pods "ppod-9" not found
I0211 18:53:38.996233  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-10
I0211 18:53:38.996375  123569 scheduler.go:449] Skip schedule deleting pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-10
I0211 18:53:38.996530  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-12
I0211 18:53:38.996564  123569 scheduler.go:453] Attempting to schedule pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-12
I0211 18:53:38.996546  123569 wrap.go:47] DELETE /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-10: (4.26095ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56790]
I0211 18:53:38.996649  123569 wrap.go:47] PATCH /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events/ppod-9.1582640ab8e3199e: (4.884907ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56796]
I0211 18:53:38.996798  123569 factory.go:647] Unable to schedule preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-12: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0211 18:53:38.997008  123569 factory.go:742] Updating pod condition for preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-12 to (PodScheduled==False, Reason=Unschedulable)
I0211 18:53:38.998427  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-12: (1.092359ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56794]
I0211 18:53:39.000430  123569 wrap.go:47] PUT /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-12/status: (2.920709ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56798]
I0211 18:53:39.001578  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events: (4.004431ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56800]
I0211 18:53:39.002835  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-12: (1.857236ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56798]
I0211 18:53:39.003127  123569 wrap.go:47] DELETE /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-11: (6.202177ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56792]
I0211 18:53:39.003510  123569 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0211 18:53:39.003727  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-13
I0211 18:53:39.003774  123569 scheduler.go:453] Attempting to schedule pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-13
I0211 18:53:39.003897  123569 factory.go:647] Unable to schedule preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-13: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0211 18:53:39.003960  123569 factory.go:742] Updating pod condition for preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-13 to (PodScheduled==False, Reason=Unschedulable)
I0211 18:53:39.004409  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events: (2.118645ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56800]
I0211 18:53:39.007086  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-13: (2.829387ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56794]
I0211 18:53:39.007985  123569 wrap.go:47] PUT /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-13/status: (3.750382ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56798]
I0211 18:53:39.008482  123569 wrap.go:47] PATCH /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events/ppod-13.1582640a9672c76e: (3.037327ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56800]
I0211 18:53:39.009208  123569 wrap.go:47] DELETE /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-12: (5.397154ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56792]
I0211 18:53:39.009927  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-13: (1.3747ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56798]
I0211 18:53:39.010238  123569 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0211 18:53:39.010487  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-15
I0211 18:53:39.010509  123569 scheduler.go:453] Attempting to schedule pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-15
I0211 18:53:39.010994  123569 factory.go:647] Unable to schedule preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-15: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0211 18:53:39.011087  123569 factory.go:742] Updating pod condition for preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-15 to (PodScheduled==False, Reason=Unschedulable)
I0211 18:53:39.013409  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events: (1.456404ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56802]
I0211 18:53:39.014477  123569 wrap.go:47] PUT /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-15/status: (3.045551ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56798]
I0211 18:53:39.014572  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-15: (3.048298ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56794]
I0211 18:53:39.016081  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-15: (1.106576ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56798]
I0211 18:53:39.016684  123569 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0211 18:53:39.016905  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-18
I0211 18:53:39.016928  123569 scheduler.go:453] Attempting to schedule pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-18
I0211 18:53:39.017008  123569 factory.go:647] Unable to schedule preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-18: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0211 18:53:39.017055  123569 factory.go:742] Updating pod condition for preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-18 to (PodScheduled==False, Reason=Unschedulable)
I0211 18:53:39.017091  123569 wrap.go:47] DELETE /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-13: (7.359169ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56800]
I0211 18:53:39.019716  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events: (2.276844ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56802]
I0211 18:53:39.019740  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-18: (2.400849ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56800]
I0211 18:53:39.021638  123569 wrap.go:47] PUT /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-18/status: (4.0797ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56794]
I0211 18:53:39.024848  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-18: (2.746604ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56802]
I0211 18:53:39.025329  123569 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0211 18:53:39.025464  123569 wrap.go:47] DELETE /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-14: (7.832249ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56804]
I0211 18:53:39.025646  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-18
I0211 18:53:39.025693  123569 scheduler.go:453] Attempting to schedule pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-18
I0211 18:53:39.025790  123569 factory.go:647] Unable to schedule preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-18: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0211 18:53:39.025946  123569 factory.go:742] Updating pod condition for preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-18 to (PodScheduled==False, Reason=Unschedulable)
I0211 18:53:39.030057  123569 wrap.go:47] PUT /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-18/status: (3.740793ms) 409 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56800]
I0211 18:53:39.030057  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-18: (3.420714ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56806]
I0211 18:53:39.031136  123569 wrap.go:47] PATCH /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events/ppod-18.1582640abae75063: (4.250501ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56808]
I0211 18:53:39.031944  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-18: (1.185634ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56800]
I0211 18:53:39.031987  123569 wrap.go:47] DELETE /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-15: (5.889172ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56802]
I0211 18:53:39.032422  123569 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0211 18:53:39.032560  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-18
I0211 18:53:39.032581  123569 scheduler.go:453] Attempting to schedule pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-18
I0211 18:53:39.032675  123569 factory.go:647] Unable to schedule preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-18: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0211 18:53:39.032726  123569 factory.go:742] Updating pod condition for preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-18 to (PodScheduled==False, Reason=Unschedulable)
I0211 18:53:39.034793  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-18: (1.813001ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56806]
I0211 18:53:39.035356  123569 wrap.go:47] PUT /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-18/status: (2.148392ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56810]
I0211 18:53:39.036760  123569 wrap.go:47] PATCH /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events/ppod-18.1582640abae75063: (2.535652ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56812]
I0211 18:53:39.037497  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-18: (1.697538ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56810]
I0211 18:53:39.037771  123569 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0211 18:53:39.037955  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-16
I0211 18:53:39.038116  123569 scheduler.go:449] Skip schedule deleting pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-16
I0211 18:53:39.038427  123569 wrap.go:47] DELETE /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-16: (5.967541ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56808]
I0211 18:53:39.040071  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events: (1.63061ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56812]
I0211 18:53:39.041499  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-17
I0211 18:53:39.041536  123569 scheduler.go:449] Skip schedule deleting pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-17
I0211 18:53:39.043053  123569 wrap.go:47] DELETE /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-17: (4.21499ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56808]
I0211 18:53:39.043509  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events: (1.551949ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56812]
I0211 18:53:39.046325  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-18
I0211 18:53:39.046380  123569 scheduler.go:449] Skip schedule deleting pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-18
I0211 18:53:39.048066  123569 wrap.go:47] DELETE /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-18: (4.659708ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56808]
I0211 18:53:39.048768  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events: (1.941753ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56812]
I0211 18:53:39.051761  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-19
I0211 18:53:39.051837  123569 scheduler.go:449] Skip schedule deleting pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-19
I0211 18:53:39.052985  123569 wrap.go:47] DELETE /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-19: (4.541017ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56808]
I0211 18:53:39.053949  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events: (1.767291ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56812]
I0211 18:53:39.056421  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-20
I0211 18:53:39.056471  123569 scheduler.go:449] Skip schedule deleting pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-20
I0211 18:53:39.057966  123569 wrap.go:47] DELETE /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-20: (4.526047ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56808]
I0211 18:53:39.058191  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events: (1.434146ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56812]
I0211 18:53:39.061645  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-21
I0211 18:53:39.061686  123569 scheduler.go:449] Skip schedule deleting pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-21
I0211 18:53:39.062882  123569 wrap.go:47] DELETE /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-21: (4.466496ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56808]
I0211 18:53:39.063670  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events: (1.697082ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56812]
I0211 18:53:39.066407  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-22
I0211 18:53:39.066490  123569 scheduler.go:449] Skip schedule deleting pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-22
I0211 18:53:39.067968  123569 wrap.go:47] DELETE /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-22: (4.503148ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56808]
I0211 18:53:39.068405  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events: (1.544956ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56812]
I0211 18:53:39.071443  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-23
I0211 18:53:39.071519  123569 scheduler.go:449] Skip schedule deleting pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-23
I0211 18:53:39.072915  123569 wrap.go:47] DELETE /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-23: (4.500691ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56808]
I0211 18:53:39.073578  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events: (1.745423ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56812]
I0211 18:53:39.075940  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-24
I0211 18:53:39.075979  123569 scheduler.go:449] Skip schedule deleting pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-24
I0211 18:53:39.077154  123569 wrap.go:47] DELETE /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-24: (3.830552ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56808]
I0211 18:53:39.077953  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events: (1.653525ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56812]
I0211 18:53:39.080312  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-25
I0211 18:53:39.080351  123569 scheduler.go:449] Skip schedule deleting pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-25
I0211 18:53:39.082007  123569 wrap.go:47] DELETE /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-25: (4.518246ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56808]
I0211 18:53:39.082252  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events: (1.696054ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56812]
I0211 18:53:39.085875  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-26
I0211 18:53:39.085944  123569 scheduler.go:449] Skip schedule deleting pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-26
I0211 18:53:39.086208  123569 wrap.go:47] DELETE /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-26: (3.795823ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56808]
I0211 18:53:39.088118  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events: (1.694169ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56812]
I0211 18:53:39.089126  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-27
I0211 18:53:39.089163  123569 scheduler.go:449] Skip schedule deleting pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-27
I0211 18:53:39.090466  123569 wrap.go:47] DELETE /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-27: (3.841143ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56808]
I0211 18:53:39.091129  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events: (1.555137ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56812]
I0211 18:53:39.094146  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-28
I0211 18:53:39.094217  123569 scheduler.go:449] Skip schedule deleting pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-28
I0211 18:53:39.095206  123569 wrap.go:47] DELETE /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-28: (4.345508ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56808]
I0211 18:53:39.095926  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events: (1.418572ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56812]
I0211 18:53:39.098463  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-29
I0211 18:53:39.098540  123569 scheduler.go:449] Skip schedule deleting pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-29
I0211 18:53:39.099761  123569 wrap.go:47] DELETE /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-29: (4.09643ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56808]
I0211 18:53:39.100304  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events: (1.458442ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56812]
I0211 18:53:39.102542  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-30
I0211 18:53:39.102578  123569 scheduler.go:449] Skip schedule deleting pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-30
I0211 18:53:39.103884  123569 wrap.go:47] DELETE /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-30: (3.792643ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56808]
I0211 18:53:39.104087  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events: (1.218216ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56812]
I0211 18:53:39.106743  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-31
I0211 18:53:39.106786  123569 scheduler.go:449] Skip schedule deleting pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-31
I0211 18:53:39.108140  123569 wrap.go:47] DELETE /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-31: (3.910974ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56808]
I0211 18:53:39.108349  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events: (1.338923ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56812]
I0211 18:53:39.111880  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-32
I0211 18:53:39.111923  123569 scheduler.go:449] Skip schedule deleting pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-32
I0211 18:53:39.112416  123569 wrap.go:47] DELETE /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-32: (3.884527ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56808]
I0211 18:53:39.113852  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events: (1.54422ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56812]
I0211 18:53:39.115430  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-33
I0211 18:53:39.115505  123569 scheduler.go:449] Skip schedule deleting pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-33
I0211 18:53:39.116739  123569 wrap.go:47] DELETE /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-33: (3.98092ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56808]
I0211 18:53:39.117745  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events: (1.909956ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56812]
I0211 18:53:39.119200  123569 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 18:53:39.119200  123569 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 18:53:39.119415  123569 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 18:53:39.119588  123569 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 18:53:39.119618  123569 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 18:53:39.120010  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-34
I0211 18:53:39.120065  123569 scheduler.go:449] Skip schedule deleting pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-34
I0211 18:53:39.121593  123569 wrap.go:47] DELETE /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-34: (4.422927ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56808]
I0211 18:53:39.121711  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events: (1.310894ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56812]
I0211 18:53:39.124747  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-35
I0211 18:53:39.124820  123569 scheduler.go:449] Skip schedule deleting pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-35
I0211 18:53:39.126758  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events: (1.683369ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56812]
I0211 18:53:39.126836  123569 wrap.go:47] DELETE /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-35: (4.954758ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56808]
I0211 18:53:39.129766  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-36
I0211 18:53:39.129806  123569 scheduler.go:449] Skip schedule deleting pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-36
I0211 18:53:39.131292  123569 wrap.go:47] DELETE /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-36: (4.050525ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56812]
I0211 18:53:39.131381  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events: (1.294629ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56806]
I0211 18:53:39.134401  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-37
I0211 18:53:39.134439  123569 scheduler.go:449] Skip schedule deleting pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-37
I0211 18:53:39.135703  123569 wrap.go:47] DELETE /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-37: (4.039772ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56806]
I0211 18:53:39.136043  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events: (1.371262ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56812]
I0211 18:53:39.138947  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-38
I0211 18:53:39.139018  123569 scheduler.go:449] Skip schedule deleting pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-38
I0211 18:53:39.140209  123569 wrap.go:47] DELETE /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-38: (4.139666ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56806]
I0211 18:53:39.140940  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events: (1.581779ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56812]
I0211 18:53:39.142923  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-39
I0211 18:53:39.142961  123569 scheduler.go:449] Skip schedule deleting pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-39
I0211 18:53:39.144794  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events: (1.489807ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56812]
I0211 18:53:39.145564  123569 wrap.go:47] DELETE /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-39: (5.033201ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56806]
I0211 18:53:39.148932  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-40
I0211 18:53:39.148973  123569 scheduler.go:449] Skip schedule deleting pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-40
I0211 18:53:39.150847  123569 wrap.go:47] DELETE /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-40: (4.899838ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56806]
I0211 18:53:39.151239  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events: (1.879309ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56812]
I0211 18:53:39.154292  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-41
I0211 18:53:39.154340  123569 scheduler.go:449] Skip schedule deleting pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-41
I0211 18:53:39.155834  123569 wrap.go:47] DELETE /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-41: (4.535231ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56806]
I0211 18:53:39.156357  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events: (1.617248ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56812]
I0211 18:53:39.158966  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-42
I0211 18:53:39.159006  123569 scheduler.go:449] Skip schedule deleting pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-42
I0211 18:53:39.160740  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events: (1.367958ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56812]
I0211 18:53:39.160850  123569 wrap.go:47] DELETE /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-42: (4.618942ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56806]
I0211 18:53:39.164000  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-43
I0211 18:53:39.164043  123569 scheduler.go:449] Skip schedule deleting pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-43
I0211 18:53:39.166010  123569 wrap.go:47] DELETE /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-43: (4.587712ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56806]
I0211 18:53:39.166249  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events: (1.680018ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56812]
I0211 18:53:39.169416  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-44
I0211 18:53:39.169475  123569 scheduler.go:449] Skip schedule deleting pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-44
I0211 18:53:39.170907  123569 wrap.go:47] DELETE /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-44: (4.535162ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56806]
I0211 18:53:39.171391  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events: (1.576995ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56812]
I0211 18:53:39.174102  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-45
I0211 18:53:39.174146  123569 scheduler.go:449] Skip schedule deleting pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-45
I0211 18:53:39.175331  123569 wrap.go:47] DELETE /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-45: (4.112804ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56806]
I0211 18:53:39.175947  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events: (1.533505ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56812]
I0211 18:53:39.178488  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-46
I0211 18:53:39.178535  123569 scheduler.go:449] Skip schedule deleting pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-46
I0211 18:53:39.179901  123569 wrap.go:47] DELETE /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-46: (4.23224ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56806]
I0211 18:53:39.180294  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events: (1.423541ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56812]
I0211 18:53:39.182937  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-47
I0211 18:53:39.183022  123569 scheduler.go:449] Skip schedule deleting pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-47
I0211 18:53:39.184525  123569 wrap.go:47] DELETE /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-47: (4.216901ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56806]
I0211 18:53:39.184994  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events: (1.573273ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56812]
I0211 18:53:39.187977  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-48
I0211 18:53:39.188073  123569 scheduler.go:449] Skip schedule deleting pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-48
I0211 18:53:39.189044  123569 wrap.go:47] DELETE /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-48: (4.124134ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56806]
I0211 18:53:39.190362  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events: (1.948448ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56812]
I0211 18:53:39.191905  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-49
I0211 18:53:39.192038  123569 scheduler.go:449] Skip schedule deleting pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-49
I0211 18:53:39.193440  123569 wrap.go:47] DELETE /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-49: (4.049589ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56806]
I0211 18:53:39.193909  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events: (1.486293ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56812]
I0211 18:53:39.197835  123569 wrap.go:47] DELETE /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/rpod-0: (4.034435ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56806]
I0211 18:53:39.199399  123569 wrap.go:47] DELETE /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/rpod-1: (1.191121ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56806]
I0211 18:53:39.204079  123569 wrap.go:47] DELETE /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/preemptor-pod: (4.176251ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56806]
I0211 18:53:39.207067  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-0: (1.079872ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56806]
I0211 18:53:39.209983  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-1: (1.058342ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56806]
I0211 18:53:39.212576  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-2: (1.003835ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56806]
I0211 18:53:39.215323  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-3: (1.076339ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56806]
I0211 18:53:39.218654  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-4: (1.716466ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56806]
I0211 18:53:39.221378  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-5: (999.268µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56806]
I0211 18:53:39.224292  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-6: (1.07572ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56806]
I0211 18:53:39.226889  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-7: (999.767µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56806]
I0211 18:53:39.229635  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-8: (1.101458ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56806]
I0211 18:53:39.232199  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-9: (941.962µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56806]
I0211 18:53:39.234701  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-10: (917.198µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56806]
I0211 18:53:39.237446  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-11: (1.063505ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56806]
I0211 18:53:39.240020  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-12: (975.748µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56806]
I0211 18:53:39.242548  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-13: (923.802µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56806]
I0211 18:53:39.256499  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-14: (2.159169ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56806]
I0211 18:53:39.261271  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-15: (2.878534ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56806]
I0211 18:53:39.264678  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-16: (1.410522ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56806]
I0211 18:53:39.279589  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-17: (13.239622ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56806]
I0211 18:53:39.284529  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-18: (1.68751ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56806]
I0211 18:53:39.287564  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-19: (1.319126ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56806]
I0211 18:53:39.290586  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-20: (1.183741ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56806]
I0211 18:53:39.293746  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-21: (1.505122ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56806]
I0211 18:53:39.296534  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-22: (1.156251ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56806]
I0211 18:53:39.299292  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-23: (1.169986ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56806]
I0211 18:53:39.301949  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-24: (1.071159ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56806]
I0211 18:53:39.304922  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-25: (1.329149ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56806]
I0211 18:53:39.307749  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-26: (1.096315ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56806]
I0211 18:53:39.310878  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-27: (1.398707ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56806]
I0211 18:53:39.315721  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-28: (1.188776ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56806]
I0211 18:53:39.323696  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-29: (1.209566ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56806]
I0211 18:53:39.327153  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-30: (1.814502ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56806]
I0211 18:53:39.330224  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-31: (1.225319ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56806]
I0211 18:53:39.333206  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-32: (1.199463ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56806]
I0211 18:53:39.335905  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-33: (1.078545ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56806]
I0211 18:53:39.338732  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-34: (942.246µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56806]
I0211 18:53:39.341354  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-35: (1.033445ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56806]
I0211 18:53:39.347218  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-36: (1.207744ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56806]
I0211 18:53:39.350060  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-37: (1.173871ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56806]
I0211 18:53:39.353622  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-38: (1.756387ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56806]
I0211 18:53:39.356432  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-39: (1.123412ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56806]
I0211 18:53:39.359450  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-40: (1.303856ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56806]
I0211 18:53:39.362239  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-41: (1.242594ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56806]
I0211 18:53:39.365091  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-42: (1.184914ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56806]
I0211 18:53:39.368113  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-43: (1.346645ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56806]
I0211 18:53:39.370761  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-44: (981.513µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56806]
I0211 18:53:39.373498  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-45: (1.131208ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56806]
I0211 18:53:39.376633  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-46: (1.3283ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56806]
I0211 18:53:39.380243  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-47: (1.890892ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56806]
I0211 18:53:39.383446  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-48: (1.362773ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56806]
I0211 18:53:39.386333  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-49: (1.049437ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56806]
I0211 18:53:39.388887  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/rpod-0: (991.265µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56806]
I0211 18:53:39.391982  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/rpod-1: (1.521301ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56806]
I0211 18:53:39.395300  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/preemptor-pod: (1.527695ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56806]
I0211 18:53:39.398266  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods: (2.162376ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56806]
I0211 18:53:39.400002  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/rpod-0
I0211 18:53:39.400083  123569 scheduler.go:453] Attempting to schedule pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/rpod-0
I0211 18:53:39.400582  123569 scheduler_binder.go:269] AssumePodVolumes for pod "preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/rpod-0", node "node1"
I0211 18:53:39.400696  123569 scheduler_binder.go:279] AssumePodVolumes for pod "preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/rpod-0", node "node1": all PVCs bound and nothing to do
I0211 18:53:39.400815  123569 factory.go:733] Attempting to bind rpod-0 to node1
I0211 18:53:39.402906  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods: (3.96566ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56806]
I0211 18:53:39.402911  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/rpod-1
I0211 18:53:39.403010  123569 scheduler.go:453] Attempting to schedule pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/rpod-1
I0211 18:53:39.403108  123569 scheduler_binder.go:269] AssumePodVolumes for pod "preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/rpod-1", node "node1"
I0211 18:53:39.403131  123569 scheduler_binder.go:279] AssumePodVolumes for pod "preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/rpod-1", node "node1": all PVCs bound and nothing to do
I0211 18:53:39.403163  123569 factory.go:733] Attempting to bind rpod-1 to node1
I0211 18:53:39.404710  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/rpod-0/binding: (2.788335ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56812]
I0211 18:53:39.404870  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/rpod-1/binding: (1.469317ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56806]
I0211 18:53:39.405049  123569 scheduler.go:571] pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/rpod-0 is bound successfully on node node1, 1 nodes evaluated, 1 nodes were found feasible
I0211 18:53:39.405287  123569 scheduler.go:571] pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/rpod-1 is bound successfully on node node1, 1 nodes evaluated, 1 nodes were found feasible
I0211 18:53:39.406879  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events: (1.516083ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56806]
I0211 18:53:39.409296  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events: (1.845502ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56806]
I0211 18:53:39.506506  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/rpod-0: (2.762586ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56806]
I0211 18:53:39.609356  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/rpod-1: (1.95439ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56806]
I0211 18:53:39.609741  123569 preemption_test.go:561] Creating the preemptor pod...
I0211 18:53:39.612308  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods: (2.335194ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56806]
I0211 18:53:39.612380  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/preemptor-pod
I0211 18:53:39.612401  123569 scheduler.go:453] Attempting to schedule pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/preemptor-pod
I0211 18:53:39.612507  123569 factory.go:647] Unable to schedule preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0211 18:53:39.612539  123569 preemption_test.go:567] Creating additional pods...
I0211 18:53:39.612544  123569 factory.go:742] Updating pod condition for preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0211 18:53:39.615663  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/preemptor-pod: (2.104612ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56820]
I0211 18:53:39.615710  123569 wrap.go:47] PUT /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/preemptor-pod/status: (2.335657ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56812]
I0211 18:53:39.616196  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods: (3.40794ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56806]
I0211 18:53:39.616503  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events: (3.08134ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56822]
I0211 18:53:39.618910  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/preemptor-pod: (1.873159ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56820]
I0211 18:53:39.619389  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods: (2.371435ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56822]
I0211 18:53:39.619743  123569 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0211 18:53:39.622000  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods: (1.9487ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56820]
I0211 18:53:39.622324  123569 wrap.go:47] PUT /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/preemptor-pod/status: (2.208748ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56822]
I0211 18:53:39.624399  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods: (1.797495ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56820]
I0211 18:53:39.626514  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods: (1.698121ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56820]
I0211 18:53:39.627726  123569 wrap.go:47] DELETE /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/rpod-1: (4.754714ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56822]
I0211 18:53:39.628458  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/preemptor-pod
I0211 18:53:39.628519  123569 scheduler.go:453] Attempting to schedule pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/preemptor-pod
I0211 18:53:39.628987  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods: (2.005054ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56820]
I0211 18:53:39.629247  123569 scheduler_binder.go:269] AssumePodVolumes for pod "preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/preemptor-pod", node "node1"
I0211 18:53:39.629290  123569 scheduler_binder.go:279] AssumePodVolumes for pod "preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/preemptor-pod", node "node1": all PVCs bound and nothing to do
I0211 18:53:39.629346  123569 factory.go:733] Attempting to bind preemptor-pod to node1
I0211 18:53:39.629501  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-4
I0211 18:53:39.629522  123569 scheduler.go:453] Attempting to schedule pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-4
I0211 18:53:39.629660  123569 factory.go:647] Unable to schedule preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-4: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0211 18:53:39.629722  123569 factory.go:742] Updating pod condition for preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-4 to (PodScheduled==False, Reason=Unschedulable)
I0211 18:53:39.629881  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events: (1.600943ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56822]
I0211 18:53:39.632474  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/preemptor-pod/binding: (2.394926ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56820]
I0211 18:53:39.632541  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-4: (1.813275ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56826]
I0211 18:53:39.632676  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events: (1.912997ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56830]
I0211 18:53:39.632842  123569 scheduler.go:571] pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/preemptor-pod is bound successfully on node node1, 1 nodes evaluated, 1 nodes were found feasible
I0211 18:53:39.633024  123569 wrap.go:47] PUT /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-4/status: (2.816783ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56822]
I0211 18:53:39.633296  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods: (3.196271ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56824]
I0211 18:53:39.635625  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-4: (2.200901ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56826]
I0211 18:53:39.635663  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods: (1.947776ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56824]
I0211 18:53:39.635872  123569 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0211 18:53:39.636008  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-3
I0211 18:53:39.636021  123569 scheduler.go:453] Attempting to schedule pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-3
I0211 18:53:39.636237  123569 factory.go:647] Unable to schedule preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-3: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0211 18:53:39.636288  123569 factory.go:742] Updating pod condition for preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-3 to (PodScheduled==False, Reason=Unschedulable)
I0211 18:53:39.637288  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events: (2.021314ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56830]
I0211 18:53:39.639562  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods: (3.496295ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56826]
I0211 18:53:39.640009  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-3: (3.201095ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56832]
I0211 18:53:39.640403  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events: (1.672703ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56830]
I0211 18:53:39.641593  123569 wrap.go:47] PUT /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-3/status: (3.930691ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56828]
I0211 18:53:39.642163  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods: (2.025083ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56826]
I0211 18:53:39.643568  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-3: (1.340777ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56830]
I0211 18:53:39.644016  123569 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0211 18:53:39.644463  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-7
I0211 18:53:39.644478  123569 scheduler.go:453] Attempting to schedule pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-7
I0211 18:53:39.644593  123569 factory.go:647] Unable to schedule preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-7: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0211 18:53:39.644661  123569 factory.go:742] Updating pod condition for preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-7 to (PodScheduled==False, Reason=Unschedulable)
I0211 18:53:39.645679  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods: (2.952996ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56826]
I0211 18:53:39.646477  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-7: (1.533296ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56832]
I0211 18:53:39.647033  123569 wrap.go:47] PUT /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-7/status: (2.133846ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56830]
I0211 18:53:39.647155  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events: (1.515486ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56834]
I0211 18:53:39.647857  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods: (1.593708ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56826]
I0211 18:53:39.648484  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-7: (1.026175ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56830]
I0211 18:53:39.648716  123569 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0211 18:53:39.648891  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-9
I0211 18:53:39.648908  123569 scheduler.go:453] Attempting to schedule pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-9
I0211 18:53:39.649027  123569 factory.go:647] Unable to schedule preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-9: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0211 18:53:39.649080  123569 factory.go:742] Updating pod condition for preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-9 to (PodScheduled==False, Reason=Unschedulable)
I0211 18:53:39.650324  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods: (1.995287ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56826]
I0211 18:53:39.651289  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events: (1.575492ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56836]
I0211 18:53:39.651985  123569 wrap.go:47] PUT /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-9/status: (2.613367ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56830]
I0211 18:53:39.651988  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-9: (2.598933ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56832]
I0211 18:53:39.652674  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods: (1.745573ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56826]
I0211 18:53:39.653711  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-9: (1.028151ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56832]
I0211 18:53:39.654151  123569 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0211 18:53:39.654338  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-11
I0211 18:53:39.654417  123569 scheduler.go:453] Attempting to schedule pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-11
I0211 18:53:39.654568  123569 factory.go:647] Unable to schedule preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-11: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0211 18:53:39.654664  123569 factory.go:742] Updating pod condition for preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-11 to (PodScheduled==False, Reason=Unschedulable)
I0211 18:53:39.655844  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods: (2.499109ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56826]
I0211 18:53:39.657434  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events: (1.486362ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56838]
I0211 18:53:39.657452  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-11: (2.394069ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56836]
I0211 18:53:39.657470  123569 wrap.go:47] PUT /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-11/status: (2.381592ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56832]
I0211 18:53:39.658938  123569 cacher.go:633] cacher (*core.Pod): 1 objects queued in incoming channel.
I0211 18:53:39.659074  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods: (2.60461ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56826]
I0211 18:53:39.659233  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-11: (1.385554ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56836]
I0211 18:53:39.659521  123569 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0211 18:53:39.659761  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-12
I0211 18:53:39.659781  123569 scheduler.go:453] Attempting to schedule pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-12
I0211 18:53:39.659902  123569 factory.go:647] Unable to schedule preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-12: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0211 18:53:39.660124  123569 factory.go:742] Updating pod condition for preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-12 to (PodScheduled==False, Reason=Unschedulable)
I0211 18:53:39.661789  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods: (2.311153ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56826]
I0211 18:53:39.662306  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-12: (1.354731ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56832]
I0211 18:53:39.663022  123569 wrap.go:47] PUT /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-12/status: (1.917688ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56842]
I0211 18:53:39.664675  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events: (3.603618ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56840]
I0211 18:53:39.665400  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods: (3.175888ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56826]
I0211 18:53:39.665404  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-12: (1.113316ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56842]
I0211 18:53:39.665749  123569 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0211 18:53:39.665890  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-13
I0211 18:53:39.665911  123569 scheduler.go:453] Attempting to schedule pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-13
I0211 18:53:39.665991  123569 factory.go:647] Unable to schedule preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-13: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0211 18:53:39.666040  123569 factory.go:742] Updating pod condition for preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-13 to (PodScheduled==False, Reason=Unschedulable)
I0211 18:53:39.667971  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods: (2.014884ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56840]
I0211 18:53:39.668628  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events: (1.775599ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56848]
I0211 18:53:39.670668  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods: (2.245008ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56840]
I0211 18:53:39.670992  123569 wrap.go:47] PUT /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-13/status: (3.840941ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56832]
I0211 18:53:39.671583  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-13: (4.595833ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56846]
I0211 18:53:39.673298  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods: (1.841997ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56840]
I0211 18:53:39.673462  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-13: (1.421768ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56832]
I0211 18:53:39.673759  123569 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0211 18:53:39.673935  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-12
I0211 18:53:39.673954  123569 scheduler.go:453] Attempting to schedule pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-12
I0211 18:53:39.674070  123569 factory.go:647] Unable to schedule preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-12: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0211 18:53:39.674123  123569 factory.go:742] Updating pod condition for preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-12 to (PodScheduled==False, Reason=Unschedulable)
I0211 18:53:39.677122  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods: (3.285103ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56846]
I0211 18:53:39.677269  123569 wrap.go:47] PUT /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-12/status: (2.732989ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56848]
I0211 18:53:39.677458  123569 wrap.go:47] PATCH /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events/ppod-12.1582640ae13ba1be: (2.43112ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56852]
I0211 18:53:39.679057  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-12: (1.236063ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56848]
I0211 18:53:39.679329  123569 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0211 18:53:39.679506  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-20
I0211 18:53:39.679519  123569 scheduler.go:453] Attempting to schedule pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-20
I0211 18:53:39.679578  123569 factory.go:647] Unable to schedule preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-20: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0211 18:53:39.679667  123569 factory.go:742] Updating pod condition for preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-20 to (PodScheduled==False, Reason=Unschedulable)
I0211 18:53:39.679769  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods: (1.940029ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56852]
I0211 18:53:39.681563  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-20: (1.358263ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56852]
I0211 18:53:39.682014  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events: (1.275566ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56854]
I0211 18:53:39.682748  123569 wrap.go:47] PUT /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-20/status: (2.849998ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56848]
I0211 18:53:39.683017  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods: (2.422163ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56856]
I0211 18:53:39.682751  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-12: (1.927352ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56846]
I0211 18:53:39.685582  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods: (1.779297ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56854]
I0211 18:53:39.685613  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-20: (1.863215ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56850]
I0211 18:53:39.686035  123569 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0211 18:53:39.686253  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-19
I0211 18:53:39.686274  123569 scheduler.go:453] Attempting to schedule pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-19
I0211 18:53:39.686438  123569 factory.go:647] Unable to schedule preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-19: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0211 18:53:39.686508  123569 factory.go:742] Updating pod condition for preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-19 to (PodScheduled==False, Reason=Unschedulable)
I0211 18:53:39.688065  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods: (1.778507ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56854]
I0211 18:53:39.689299  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-19: (1.930355ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56858]
I0211 18:53:39.689306  123569 wrap.go:47] PUT /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-19/status: (2.00997ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56850]
I0211 18:53:39.690876  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods: (2.118105ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56854]
I0211 18:53:39.692420  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-19: (2.363022ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56850]
I0211 18:53:39.693484  123569 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0211 18:53:39.693774  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods: (1.840123ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56854]
I0211 18:53:39.693833  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-24
I0211 18:53:39.693865  123569 scheduler.go:453] Attempting to schedule pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-24
I0211 18:53:39.694021  123569 factory.go:647] Unable to schedule preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-24: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0211 18:53:39.694117  123569 factory.go:742] Updating pod condition for preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-24 to (PodScheduled==False, Reason=Unschedulable)
I0211 18:53:39.696821  123569 wrap.go:47] PUT /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-24/status: (2.431137ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56858]
I0211 18:53:39.698023  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods: (2.240344ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56862]
I0211 18:53:39.699656  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-24: (5.265005ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56850]
I0211 18:53:39.699871  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-24: (1.440097ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56858]
I0211 18:53:39.700284  123569 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0211 18:53:39.700457  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-27
I0211 18:53:39.700483  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events: (12.731563ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56860]
I0211 18:53:39.700499  123569 scheduler.go:453] Attempting to schedule pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-27
I0211 18:53:39.700737  123569 factory.go:647] Unable to schedule preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-27: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0211 18:53:39.700845  123569 factory.go:742] Updating pod condition for preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-27 to (PodScheduled==False, Reason=Unschedulable)
I0211 18:53:39.702150  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-27: (1.106451ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56850]
I0211 18:53:39.702927  123569 wrap.go:47] PUT /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-27/status: (1.804111ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56858]
I0211 18:53:39.703717  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events: (2.147117ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56864]
I0211 18:53:39.705246  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods: (5.952901ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56862]
I0211 18:53:39.705623  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-27: (1.256414ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56858]
I0211 18:53:39.705820  123569 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0211 18:53:39.705958  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-28
I0211 18:53:39.705972  123569 scheduler.go:453] Attempting to schedule pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-28
I0211 18:53:39.706037  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events: (1.370092ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56864]
I0211 18:53:39.706038  123569 factory.go:647] Unable to schedule preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-28: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0211 18:53:39.706082  123569 factory.go:742] Updating pod condition for preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-28 to (PodScheduled==False, Reason=Unschedulable)
I0211 18:53:39.707930  123569 wrap.go:47] PUT /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-28/status: (1.608727ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56858]
I0211 18:53:39.708494  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events: (1.433549ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56866]
I0211 18:53:39.709762  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-28: (1.152571ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56858]
I0211 18:53:39.710073  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-28: (3.294613ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56850]
I0211 18:53:39.710083  123569 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0211 18:53:39.710246  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-29
I0211 18:53:39.710267  123569 scheduler.go:453] Attempting to schedule pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-29
I0211 18:53:39.710430  123569 factory.go:647] Unable to schedule preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-29: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0211 18:53:39.710526  123569 factory.go:742] Updating pod condition for preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-29 to (PodScheduled==False, Reason=Unschedulable)
I0211 18:53:39.711698  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods: (1.935794ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56866]
I0211 18:53:39.713402  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events: (1.256664ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56868]
I0211 18:53:39.714976  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-29: (3.680792ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56850]
I0211 18:53:39.715207  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods: (3.068985ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56866]
I0211 18:53:39.716404  123569 wrap.go:47] PUT /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-29/status: (5.115677ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56858]
I0211 18:53:39.717753  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods: (2.064577ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56850]
I0211 18:53:39.717866  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-29: (1.055085ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56858]
I0211 18:53:39.718134  123569 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0211 18:53:39.718395  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-28
I0211 18:53:39.718435  123569 scheduler.go:453] Attempting to schedule pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-28
I0211 18:53:39.718566  123569 factory.go:647] Unable to schedule preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-28: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0211 18:53:39.718646  123569 factory.go:742] Updating pod condition for preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-28 to (PodScheduled==False, Reason=Unschedulable)
I0211 18:53:39.720477  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods: (2.242272ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56850]
I0211 18:53:39.721101  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-28: (1.193079ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56870]
I0211 18:53:39.722418  123569 wrap.go:47] PATCH /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events/ppod-28.1582640ae3f8ec19: (2.838775ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56872]
I0211 18:53:39.724080  123569 wrap.go:47] PUT /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-28/status: (4.987373ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56868]
I0211 18:53:39.725049  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods: (2.642755ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56850]
I0211 18:53:39.725899  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-28: (1.40063ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56872]
I0211 18:53:39.726280  123569 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0211 18:53:39.726413  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-32
I0211 18:53:39.726423  123569 scheduler.go:453] Attempting to schedule pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-32
I0211 18:53:39.726486  123569 factory.go:647] Unable to schedule preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-32: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0211 18:53:39.726552  123569 factory.go:742] Updating pod condition for preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-32 to (PodScheduled==False, Reason=Unschedulable)
I0211 18:53:39.728932  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events: (1.496697ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56874]
I0211 18:53:39.729886  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods: (3.743219ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56850]
I0211 18:53:39.730705  123569 wrap.go:47] PUT /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-32/status: (2.6244ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56872]
I0211 18:53:39.730715  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-32: (2.990921ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56870]
I0211 18:53:39.733310  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-32: (1.223414ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56870]
I0211 18:53:39.734003  123569 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0211 18:53:39.734271  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-34
I0211 18:53:39.734319  123569 scheduler.go:453] Attempting to schedule pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-34
I0211 18:53:39.734356  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods: (1.726946ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56874]
I0211 18:53:39.734480  123569 factory.go:647] Unable to schedule preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-34: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0211 18:53:39.734555  123569 factory.go:742] Updating pod condition for preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-34 to (PodScheduled==False, Reason=Unschedulable)
I0211 18:53:39.737113  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events: (1.717873ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56878]
I0211 18:53:39.737768  123569 wrap.go:47] PUT /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-34/status: (2.928925ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56870]
I0211 18:53:39.737839  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-34: (2.201719ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56876]
I0211 18:53:39.738363  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods: (3.354942ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56874]
I0211 18:53:39.740311  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-34: (2.074332ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56870]
I0211 18:53:39.741002  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods: (2.133572ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56876]
I0211 18:53:39.741336  123569 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0211 18:53:39.741472  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-32
I0211 18:53:39.741559  123569 scheduler.go:453] Attempting to schedule pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-32
I0211 18:53:39.741682  123569 factory.go:647] Unable to schedule preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-32: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0211 18:53:39.741734  123569 factory.go:742] Updating pod condition for preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-32 to (PodScheduled==False, Reason=Unschedulable)
I0211 18:53:39.744648  123569 wrap.go:47] PUT /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-32/status: (2.539675ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56878]
I0211 18:53:39.744711  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods: (2.260928ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56870]
I0211 18:53:39.745518  123569 wrap.go:47] PATCH /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events/ppod-32.1582640ae530e14b: (2.435185ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56882]
I0211 18:53:39.747439  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-32: (1.304862ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56884]
I0211 18:53:39.747450  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-32: (1.603775ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56870]
I0211 18:53:39.747880  123569 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0211 18:53:39.748153  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-35
I0211 18:53:39.748221  123569 scheduler.go:453] Attempting to schedule pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-35
I0211 18:53:39.749327  123569 factory.go:647] Unable to schedule preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-35: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0211 18:53:39.749364  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods: (2.947182ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56878]
I0211 18:53:39.749378  123569 factory.go:742] Updating pod condition for preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-35 to (PodScheduled==False, Reason=Unschedulable)
I0211 18:53:39.751539  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-35: (1.562127ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56878]
I0211 18:53:39.751889  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events: (1.453252ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56888]
I0211 18:53:39.751907  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods: (2.02919ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56882]
I0211 18:53:39.753104  123569 wrap.go:47] PUT /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-35/status: (2.997863ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56886]
I0211 18:53:39.755646  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods: (1.844085ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56882]
I0211 18:53:39.755661  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-35: (1.763224ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56886]
I0211 18:53:39.756221  123569 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0211 18:53:39.756569  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-33
I0211 18:53:39.756597  123569 scheduler.go:453] Attempting to schedule pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-33
I0211 18:53:39.756768  123569 factory.go:647] Unable to schedule preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-33: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0211 18:53:39.756827  123569 factory.go:742] Updating pod condition for preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-33 to (PodScheduled==False, Reason=Unschedulable)
I0211 18:53:39.758535  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods: (2.202ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56882]
I0211 18:53:39.760074  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events: (2.308776ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56892]
I0211 18:53:39.760086  123569 wrap.go:47] PUT /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-33/status: (2.741141ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56878]
I0211 18:53:39.760503  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-33: (3.052766ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56890]
I0211 18:53:39.761579  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods: (1.895772ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56882]
I0211 18:53:39.763914  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-33: (2.459889ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56878]
I0211 18:53:39.764049  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods: (1.96837ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56882]
I0211 18:53:39.764339  123569 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0211 18:53:39.764640  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-42
I0211 18:53:39.764666  123569 scheduler.go:453] Attempting to schedule pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-42
I0211 18:53:39.764774  123569 factory.go:647] Unable to schedule preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-42: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0211 18:53:39.764841  123569 factory.go:742] Updating pod condition for preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-42 to (PodScheduled==False, Reason=Unschedulable)
I0211 18:53:39.766591  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods: (1.788107ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56878]
I0211 18:53:39.768424  123569 wrap.go:47] PUT /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-42/status: (3.036633ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56892]
I0211 18:53:39.768427  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-42: (1.216327ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56878]
I0211 18:53:39.769969  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events: (3.484011ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56896]
I0211 18:53:39.770402  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-42: (1.269769ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56878]
I0211 18:53:39.771023  123569 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0211 18:53:39.771031  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods: (1.444362ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56898]
I0211 18:53:39.771295  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-45
I0211 18:53:39.771334  123569 scheduler.go:453] Attempting to schedule pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-45
I0211 18:53:39.771445  123569 factory.go:647] Unable to schedule preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-45: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0211 18:53:39.771542  123569 factory.go:742] Updating pod condition for preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-45 to (PodScheduled==False, Reason=Unschedulable)
I0211 18:53:39.773269  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-45: (1.471708ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56896]
I0211 18:53:39.773842  123569 wrap.go:47] PUT /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-45/status: (1.979814ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56892]
I0211 18:53:39.774028  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events: (1.683623ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56900]
I0211 18:53:39.775463  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods: (3.177659ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56902]
I0211 18:53:39.776424  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-45: (1.556246ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56892]
I0211 18:53:39.776732  123569 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0211 18:53:39.776886  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-47
I0211 18:53:39.776926  123569 scheduler.go:453] Attempting to schedule pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-47
I0211 18:53:39.777072  123569 factory.go:647] Unable to schedule preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-47: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0211 18:53:39.777203  123569 factory.go:742] Updating pod condition for preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-47 to (PodScheduled==False, Reason=Unschedulable)
I0211 18:53:39.778031  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods: (1.88289ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56900]
I0211 18:53:39.778707  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-47: (1.184535ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56892]
I0211 18:53:39.780384  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events: (1.467595ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56900]
I0211 18:53:39.781706  123569 wrap.go:47] PUT /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-47/status: (2.021375ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56892]
I0211 18:53:39.783760  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-47: (1.449018ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56900]
I0211 18:53:39.784066  123569 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0211 18:53:39.784285  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-48
I0211 18:53:39.784305  123569 scheduler.go:453] Attempting to schedule pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-48
I0211 18:53:39.784439  123569 factory.go:647] Unable to schedule preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-48: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0211 18:53:39.784497  123569 factory.go:742] Updating pod condition for preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-48 to (PodScheduled==False, Reason=Unschedulable)
I0211 18:53:39.785795  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-48: (1.031131ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56896]
I0211 18:53:39.787858  123569 wrap.go:47] PUT /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-48/status: (3.043867ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56900]
I0211 18:53:39.788655  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events: (3.577143ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56906]
I0211 18:53:39.789835  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-48: (1.487124ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56900]
I0211 18:53:39.790227  123569 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0211 18:53:39.790466  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-47
I0211 18:53:39.790486  123569 scheduler.go:453] Attempting to schedule pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-47
I0211 18:53:39.790619  123569 factory.go:647] Unable to schedule preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-47: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0211 18:53:39.790680  123569 factory.go:742] Updating pod condition for preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-47 to (PodScheduled==False, Reason=Unschedulable)
I0211 18:53:39.792572  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-47: (1.145492ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56896]
I0211 18:53:39.793070  123569 wrap.go:47] PUT /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-47/status: (1.522397ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56906]
I0211 18:53:39.795378  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-47: (1.726932ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56906]
I0211 18:53:39.795855  123569 wrap.go:47] PATCH /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events/ppod-47.1582640ae835ce14: (2.613192ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56908]
I0211 18:53:39.796285  123569 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0211 18:53:39.796553  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-48
I0211 18:53:39.796593  123569 scheduler.go:453] Attempting to schedule pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-48
I0211 18:53:39.796737  123569 factory.go:647] Unable to schedule preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-48: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0211 18:53:39.796820  123569 factory.go:742] Updating pod condition for preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-48 to (PodScheduled==False, Reason=Unschedulable)
I0211 18:53:39.798815  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-48: (1.170428ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56906]
I0211 18:53:39.799698  123569 wrap.go:47] PUT /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-48/status: (2.111436ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56896]
I0211 18:53:39.800935  123569 wrap.go:47] PATCH /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events/ppod-48.1582640ae8a58344: (2.771383ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56910]
I0211 18:53:39.801361  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-48: (1.076246ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56896]
I0211 18:53:39.801725  123569 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0211 18:53:39.801909  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-49
I0211 18:53:39.801925  123569 scheduler.go:453] Attempting to schedule pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-49
I0211 18:53:39.802020  123569 factory.go:647] Unable to schedule preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-49: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0211 18:53:39.802072  123569 factory.go:742] Updating pod condition for preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-49 to (PodScheduled==False, Reason=Unschedulable)
I0211 18:53:39.803566  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-49: (1.150606ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56910]
I0211 18:53:39.804405  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events: (1.633612ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56912]
I0211 18:53:39.805006  123569 wrap.go:47] PUT /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-49/status: (2.66957ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56906]
I0211 18:53:39.806735  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-49: (1.273771ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56912]
I0211 18:53:39.807140  123569 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0211 18:53:39.807292  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-45
I0211 18:53:39.807303  123569 scheduler.go:453] Attempting to schedule pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-45
I0211 18:53:39.807380  123569 factory.go:647] Unable to schedule preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-45: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0211 18:53:39.807418  123569 factory.go:742] Updating pod condition for preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-45 to (PodScheduled==False, Reason=Unschedulable)
I0211 18:53:39.809969  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-45: (1.969284ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56910]
I0211 18:53:39.810344  123569 wrap.go:47] PUT /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-45/status: (2.705478ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56912]
I0211 18:53:39.811450  123569 wrap.go:47] PATCH /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events/ppod-45.1582640ae7dfa1c8: (2.848884ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56914]
I0211 18:53:39.835015  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-45: (24.119625ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56912]
I0211 18:53:39.835426  123569 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0211 18:53:39.835619  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-49
I0211 18:53:39.835651  123569 scheduler.go:453] Attempting to schedule pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-49
I0211 18:53:39.835794  123569 factory.go:647] Unable to schedule preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-49: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0211 18:53:39.835846  123569 factory.go:742] Updating pod condition for preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-49 to (PodScheduled==False, Reason=Unschedulable)
I0211 18:53:39.842361  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-49: (1.65271ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56910]
I0211 18:53:39.844202  123569 wrap.go:47] PUT /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-49/status: (3.45509ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56914]
I0211 18:53:39.845491  123569 wrap.go:47] PATCH /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events/ppod-49.1582640ae9b1b68c: (2.546042ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56910]
I0211 18:53:39.846433  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-49: (1.698509ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56914]
I0211 18:53:39.847428  123569 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0211 18:53:39.847631  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-42
I0211 18:53:39.847652  123569 scheduler.go:453] Attempting to schedule pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-42
I0211 18:53:39.847743  123569 factory.go:647] Unable to schedule preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-42: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0211 18:53:39.847879  123569 factory.go:742] Updating pod condition for preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-42 to (PodScheduled==False, Reason=Unschedulable)
I0211 18:53:39.849880  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-42: (1.663016ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56910]
I0211 18:53:39.850088  123569 wrap.go:47] PUT /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-42/status: (1.834582ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56954]
I0211 18:53:39.851840  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-42: (1.158287ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56954]
I0211 18:53:39.852206  123569 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0211 18:53:39.852366  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-46
I0211 18:53:39.852385  123569 scheduler.go:453] Attempting to schedule pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-46
I0211 18:53:39.852454  123569 factory.go:647] Unable to schedule preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-46: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0211 18:53:39.852514  123569 factory.go:742] Updating pod condition for preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-46 to (PodScheduled==False, Reason=Unschedulable)
I0211 18:53:39.853429  123569 wrap.go:47] PATCH /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events/ppod-42.1582640ae7796668: (4.464963ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56956]
I0211 18:53:39.855027  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-46: (2.194447ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56910]
I0211 18:53:39.855084  123569 wrap.go:47] PUT /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-46/status: (2.353843ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56954]
I0211 18:53:39.856833  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-46: (1.321555ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56910]
I0211 18:53:39.857062  123569 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0211 18:53:39.857208  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events: (1.471335ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56956]
I0211 18:53:39.857212  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-44
I0211 18:53:39.857413  123569 scheduler.go:453] Attempting to schedule pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-44
I0211 18:53:39.857519  123569 factory.go:647] Unable to schedule preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-44: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0211 18:53:39.857576  123569 factory.go:742] Updating pod condition for preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-44 to (PodScheduled==False, Reason=Unschedulable)
I0211 18:53:39.858876  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-44: (1.095895ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56956]
I0211 18:53:39.859688  123569 wrap.go:47] PUT /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-44/status: (1.779713ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56910]
I0211 18:53:39.861792  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-44: (1.713789ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56910]
I0211 18:53:39.862088  123569 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0211 18:53:39.862390  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-46
I0211 18:53:39.862403  123569 scheduler.go:453] Attempting to schedule pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-46
I0211 18:53:39.862481  123569 factory.go:647] Unable to schedule preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-46: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0211 18:53:39.862519  123569 factory.go:742] Updating pod condition for preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-46 to (PodScheduled==False, Reason=Unschedulable)
I0211 18:53:39.863944  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events: (4.912727ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56958]
I0211 18:53:39.864617  123569 wrap.go:47] PUT /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-46/status: (1.836347ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56910]
I0211 18:53:39.864539  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-46: (1.697587ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56956]
I0211 18:53:39.866526  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-46: (1.258632ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56910]
I0211 18:53:39.866781  123569 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0211 18:53:39.866918  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-44
I0211 18:53:39.866934  123569 scheduler.go:453] Attempting to schedule pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-44
I0211 18:53:39.866981  123569 factory.go:647] Unable to schedule preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-44: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0211 18:53:39.867021  123569 factory.go:742] Updating pod condition for preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-44 to (PodScheduled==False, Reason=Unschedulable)
I0211 18:53:39.867939  123569 wrap.go:47] PATCH /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events/ppod-46.1582640aecb355f4: (2.678594ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56958]
I0211 18:53:39.869557  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-44: (1.53516ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56960]
I0211 18:53:39.869697  123569 wrap.go:47] PUT /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-44/status: (2.215317ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56910]
I0211 18:53:39.870878  123569 wrap.go:47] PATCH /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events/ppod-44.1582640aed009f52: (2.343369ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56958]
I0211 18:53:39.871899  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-44: (1.577332ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56910]
I0211 18:53:39.872307  123569 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0211 18:53:39.872498  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-43
I0211 18:53:39.872518  123569 scheduler.go:453] Attempting to schedule pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-43
I0211 18:53:39.872627  123569 factory.go:647] Unable to schedule preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-43: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0211 18:53:39.872689  123569 factory.go:742] Updating pod condition for preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-43 to (PodScheduled==False, Reason=Unschedulable)
I0211 18:53:39.875202  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events: (1.821497ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56962]
I0211 18:53:39.875306  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-43: (2.35052ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56960]
I0211 18:53:39.876222  123569 wrap.go:47] PUT /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-43/status: (3.295005ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56958]
I0211 18:53:39.880043  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/preemptor-pod: (1.383157ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56960]
I0211 18:53:39.880804  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-43: (3.485953ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56962]
I0211 18:53:39.881143  123569 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0211 18:53:39.882244  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-41
I0211 18:53:39.882264  123569 scheduler.go:453] Attempting to schedule pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-41
I0211 18:53:39.882377  123569 factory.go:647] Unable to schedule preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-41: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0211 18:53:39.882462  123569 factory.go:742] Updating pod condition for preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-41 to (PodScheduled==False, Reason=Unschedulable)
I0211 18:53:39.884255  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-41: (1.5066ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56960]
I0211 18:53:39.884915  123569 wrap.go:47] PUT /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-41/status: (2.140396ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56962]
I0211 18:53:39.890793  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events: (7.645331ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56964]
I0211 18:53:39.891718  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-41: (4.398144ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56962]
I0211 18:53:39.892044  123569 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0211 18:53:39.892282  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-35
I0211 18:53:39.892302  123569 scheduler.go:453] Attempting to schedule pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-35
I0211 18:53:39.892411  123569 factory.go:647] Unable to schedule preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-35: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0211 18:53:39.892485  123569 factory.go:742] Updating pod condition for preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-35 to (PodScheduled==False, Reason=Unschedulable)
I0211 18:53:39.896834  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-35: (3.912238ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56960]
I0211 18:53:39.896951  123569 wrap.go:47] PUT /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-35/status: (4.198872ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56964]
I0211 18:53:39.898981  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-35: (1.536412ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56964]
I0211 18:53:39.899331  123569 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0211 18:53:39.899500  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-41
I0211 18:53:39.899520  123569 scheduler.go:453] Attempting to schedule pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-41
I0211 18:53:39.899597  123569 factory.go:647] Unable to schedule preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-41: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0211 18:53:39.899661  123569 factory.go:742] Updating pod condition for preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-41 to (PodScheduled==False, Reason=Unschedulable)
I0211 18:53:39.899776  123569 wrap.go:47] PATCH /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events/ppod-35.1582640ae68dab53: (6.268661ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56966]
I0211 18:53:39.901214  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-41: (1.174877ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56960]
I0211 18:53:39.901896  123569 wrap.go:47] PUT /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-41/status: (2.010838ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56964]
I0211 18:53:39.902827  123569 wrap.go:47] PATCH /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events/ppod-41.1582640aee7c0149: (2.351106ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56966]
I0211 18:53:39.904038  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-41: (1.242085ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56964]
I0211 18:53:39.904353  123569 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0211 18:53:39.904572  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-40
I0211 18:53:39.904696  123569 scheduler.go:453] Attempting to schedule pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-40
I0211 18:53:39.904845  123569 factory.go:647] Unable to schedule preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-40: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0211 18:53:39.904895  123569 factory.go:742] Updating pod condition for preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-40 to (PodScheduled==False, Reason=Unschedulable)
I0211 18:53:39.907810  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events: (2.520608ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56960]
I0211 18:53:39.907878  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-40: (1.469383ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56968]
I0211 18:53:39.907925  123569 wrap.go:47] PUT /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-40/status: (2.60076ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56966]
I0211 18:53:39.909535  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-40: (1.14807ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56968]
I0211 18:53:39.909828  123569 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0211 18:53:39.910067  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-39
I0211 18:53:39.910080  123569 scheduler.go:453] Attempting to schedule pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-39
I0211 18:53:39.910206  123569 factory.go:647] Unable to schedule preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-39: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0211 18:53:39.910263  123569 factory.go:742] Updating pod condition for preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-39 to (PodScheduled==False, Reason=Unschedulable)
I0211 18:53:39.912425  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-39: (1.358088ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56960]
I0211 18:53:39.912830  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events: (1.544232ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56968]
I0211 18:53:39.915465  123569 wrap.go:47] PUT /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-39/status: (4.384901ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56970]
I0211 18:53:39.917157  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-39: (1.293632ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56968]
I0211 18:53:39.917397  123569 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0211 18:53:39.917685  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-38
I0211 18:53:39.917713  123569 scheduler.go:453] Attempting to schedule pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-38
I0211 18:53:39.917822  123569 factory.go:647] Unable to schedule preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-38: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0211 18:53:39.917899  123569 factory.go:742] Updating pod condition for preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-38 to (PodScheduled==False, Reason=Unschedulable)
I0211 18:53:39.919701  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-38: (1.23398ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56960]
I0211 18:53:39.920716  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events: (1.725481ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56972]
I0211 18:53:39.921672  123569 wrap.go:47] PUT /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-38/status: (2.964525ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56968]
I0211 18:53:39.924224  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-38: (1.988149ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56972]
I0211 18:53:39.924705  123569 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0211 18:53:39.924956  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-39
I0211 18:53:39.925023  123569 scheduler.go:453] Attempting to schedule pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-39
I0211 18:53:39.925223  123569 factory.go:647] Unable to schedule preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-39: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0211 18:53:39.925325  123569 factory.go:742] Updating pod condition for preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-39 to (PodScheduled==False, Reason=Unschedulable)
I0211 18:53:39.927742  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-39: (1.42186ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56960]
I0211 18:53:39.929239  123569 wrap.go:47] PUT /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-39/status: (3.419093ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56972]
I0211 18:53:39.930238  123569 wrap.go:47] PATCH /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events/ppod-39.1582640af02491e2: (3.397809ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56974]
I0211 18:53:39.930809  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-39: (1.160527ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56972]
I0211 18:53:39.932534  123569 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0211 18:53:39.933054  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-38
I0211 18:53:39.933373  123569 scheduler.go:453] Attempting to schedule pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-38
I0211 18:53:39.933557  123569 factory.go:647] Unable to schedule preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-38: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0211 18:53:39.934409  123569 factory.go:742] Updating pod condition for preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-38 to (PodScheduled==False, Reason=Unschedulable)
I0211 18:53:39.937044  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-38: (3.072114ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56974]
I0211 18:53:39.940516  123569 wrap.go:47] PATCH /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events/ppod-38.1582640af0990570: (6.174622ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56960]
I0211 18:53:39.941452  123569 wrap.go:47] PUT /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-38/status: (4.111565ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56976]
I0211 18:53:39.943470  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-38: (1.335893ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56960]
I0211 18:53:39.943721  123569 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0211 18:53:39.943887  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-37
I0211 18:53:39.943908  123569 scheduler.go:453] Attempting to schedule pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-37
I0211 18:53:39.944036  123569 factory.go:647] Unable to schedule preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-37: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0211 18:53:39.944093  123569 factory.go:742] Updating pod condition for preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-37 to (PodScheduled==False, Reason=Unschedulable)
I0211 18:53:39.945470  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-37: (1.076123ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56974]
I0211 18:53:39.947103  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events: (2.388442ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56978]
I0211 18:53:39.947638  123569 wrap.go:47] PUT /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-37/status: (3.243641ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56960]
I0211 18:53:39.953446  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-37: (4.644681ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56978]
I0211 18:53:39.953762  123569 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0211 18:53:39.953992  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-36
I0211 18:53:39.954015  123569 scheduler.go:453] Attempting to schedule pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-36
I0211 18:53:39.954157  123569 factory.go:647] Unable to schedule preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-36: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0211 18:53:39.954244  123569 factory.go:742] Updating pod condition for preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-36 to (PodScheduled==False, Reason=Unschedulable)
I0211 18:53:39.956434  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events: (1.518533ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56980]
I0211 18:53:39.956830  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-36: (2.037256ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56974]
I0211 18:53:39.957017  123569 wrap.go:47] PUT /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-36/status: (2.518983ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56978]
I0211 18:53:39.958775  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-36: (1.222927ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56974]
I0211 18:53:39.959028  123569 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0211 18:53:39.959217  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-37
I0211 18:53:39.959238  123569 scheduler.go:453] Attempting to schedule pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-37
I0211 18:53:39.959341  123569 factory.go:647] Unable to schedule preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-37: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0211 18:53:39.959395  123569 factory.go:742] Updating pod condition for preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-37 to (PodScheduled==False, Reason=Unschedulable)
I0211 18:53:39.960785  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-37: (1.036306ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56980]
I0211 18:53:39.961250  123569 wrap.go:47] PUT /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-37/status: (1.592582ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56974]
I0211 18:53:39.962742  123569 wrap.go:47] PATCH /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events/ppod-37.1582640af228be4e: (2.417604ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56982]
I0211 18:53:39.962941  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-37: (1.179175ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56974]
I0211 18:53:39.963332  123569 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0211 18:53:39.963537  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-36
I0211 18:53:39.963557  123569 scheduler.go:453] Attempting to schedule pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-36
I0211 18:53:39.963684  123569 factory.go:647] Unable to schedule preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-36: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0211 18:53:39.963746  123569 factory.go:742] Updating pod condition for preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-36 to (PodScheduled==False, Reason=Unschedulable)
I0211 18:53:39.965515  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-36: (1.429571ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56980]
I0211 18:53:39.965521  123569 wrap.go:47] PUT /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-36/status: (1.529009ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56982]
I0211 18:53:39.967048  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-36: (1.079425ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56982]
I0211 18:53:39.967490  123569 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0211 18:53:39.967591  123569 wrap.go:47] PATCH /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events/ppod-36.1582640af2c3a1aa: (3.038954ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56984]
I0211 18:53:39.967704  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-29
I0211 18:53:39.967726  123569 scheduler.go:453] Attempting to schedule pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-29
I0211 18:53:39.967799  123569 factory.go:647] Unable to schedule preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-29: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0211 18:53:39.967857  123569 factory.go:742] Updating pod condition for preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-29 to (PodScheduled==False, Reason=Unschedulable)
I0211 18:53:39.969505  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-29: (1.381916ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56980]
I0211 18:53:39.970995  123569 wrap.go:47] PUT /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-29/status: (2.948466ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56982]
I0211 18:53:39.971453  123569 wrap.go:47] PATCH /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events/ppod-29.1582640ae43cd3f0: (2.724801ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56986]
I0211 18:53:39.972776  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-29: (1.366977ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56982]
I0211 18:53:39.973218  123569 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0211 18:53:39.973471  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-31
I0211 18:53:39.973491  123569 scheduler.go:453] Attempting to schedule pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-31
I0211 18:53:39.973626  123569 factory.go:647] Unable to schedule preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-31: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0211 18:53:39.973693  123569 factory.go:742] Updating pod condition for preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-31 to (PodScheduled==False, Reason=Unschedulable)
I0211 18:53:39.975237  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-31: (1.200129ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56980]
I0211 18:53:39.975879  123569 wrap.go:47] PUT /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-31/status: (1.940111ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56986]
I0211 18:53:39.976284  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events: (1.904605ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56988]
I0211 18:53:39.977568  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-31: (1.162204ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56986]
I0211 18:53:39.977872  123569 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0211 18:53:39.978080  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-30
I0211 18:53:39.978099  123569 scheduler.go:453] Attempting to schedule pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-30
I0211 18:53:39.978347  123569 factory.go:647] Unable to schedule preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-30: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0211 18:53:39.978404  123569 factory.go:742] Updating pod condition for preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-30 to (PodScheduled==False, Reason=Unschedulable)
I0211 18:53:39.979842  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-30: (1.157192ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56980]
I0211 18:53:39.980428  123569 wrap.go:47] PUT /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-30/status: (1.765693ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56988]
I0211 18:53:39.981167  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events: (1.998139ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56990]
I0211 18:53:39.982257  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-30: (1.383382ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56988]
I0211 18:53:39.982457  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/preemptor-pod: (1.106993ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56980]
I0211 18:53:39.982510  123569 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0211 18:53:39.982666  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-31
I0211 18:53:39.982689  123569 scheduler.go:453] Attempting to schedule pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-31
I0211 18:53:39.982726  123569 preemption_test.go:583] Check unschedulable pods still exists and were never scheduled...
I0211 18:53:39.982769  123569 factory.go:647] Unable to schedule preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-31: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0211 18:53:39.982839  123569 factory.go:742] Updating pod condition for preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-31 to (PodScheduled==False, Reason=Unschedulable)
I0211 18:53:39.984092  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-0: (1.175105ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56988]
I0211 18:53:39.984701  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-31: (1.308019ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56992]
I0211 18:53:39.985006  123569 wrap.go:47] PUT /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-31/status: (1.763147ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56990]
I0211 18:53:39.985821  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-1: (1.22972ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56988]
I0211 18:53:39.986502  123569 wrap.go:47] PATCH /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events/ppod-31.1582640af3ec676d: (2.937652ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56994]
I0211 18:53:39.987097  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-31: (1.650851ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56990]
I0211 18:53:39.987236  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-2: (1.068608ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56988]
I0211 18:53:39.987543  123569 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0211 18:53:39.987701  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-30
I0211 18:53:39.987723  123569 scheduler.go:453] Attempting to schedule pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-30
I0211 18:53:39.987867  123569 factory.go:647] Unable to schedule preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-30: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0211 18:53:39.987915  123569 factory.go:742] Updating pod condition for preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-30 to (PodScheduled==False, Reason=Unschedulable)
I0211 18:53:39.989109  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-3: (1.438735ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56994]
I0211 18:53:39.989588  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-30: (1.075455ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56992]
I0211 18:53:39.990590  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-4: (1.016139ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56994]
I0211 18:53:39.991827  123569 wrap.go:47] PUT /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-30/status: (3.032164ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56996]
I0211 18:53:39.992529  123569 wrap.go:47] PATCH /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events/ppod-30.1582640af4344a86: (3.831132ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56998]
I0211 18:53:39.992767  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-5: (1.374986ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56994]
I0211 18:53:39.993588  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-30: (1.305124ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56996]
I0211 18:53:39.993897  123569 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0211 18:53:39.994212  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-27
I0211 18:53:39.994235  123569 scheduler.go:453] Attempting to schedule pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-27
I0211 18:53:39.994338  123569 factory.go:647] Unable to schedule preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-27: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0211 18:53:39.994386  123569 factory.go:742] Updating pod condition for preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-27 to (PodScheduled==False, Reason=Unschedulable)
I0211 18:53:39.994844  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-6: (1.244001ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56998]
I0211 18:53:39.995828  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-27: (1.228639ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56996]
I0211 18:53:39.996504  123569 wrap.go:47] PUT /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-27/status: (1.849559ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56992]
I0211 18:53:39.996773  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-7: (1.324108ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57002]
I0211 18:53:39.997837  123569 wrap.go:47] PATCH /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events/ppod-27.1582640ae3a901aa: (2.698112ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56998]
I0211 18:53:39.998567  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-27: (1.398664ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56996]
I0211 18:53:39.998623  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-8: (1.342281ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57000]
I0211 18:53:39.998855  123569 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0211 18:53:39.999140  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-24
I0211 18:53:39.999164  123569 scheduler.go:453] Attempting to schedule pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-24
I0211 18:53:39.999297  123569 factory.go:647] Unable to schedule preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-24: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0211 18:53:39.999378  123569 factory.go:742] Updating pod condition for preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-24 to (PodScheduled==False, Reason=Unschedulable)
I0211 18:53:40.000208  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-9: (1.19228ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56996]
I0211 18:53:40.001205  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-24: (1.624403ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56998]
I0211 18:53:40.002407  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-10: (1.620128ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57008]
I0211 18:53:40.002782  123569 wrap.go:47] PUT /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-24/status: (2.22983ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56996]
I0211 18:53:40.002806  123569 wrap.go:47] PATCH /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events/ppod-24.1582640ae3426c25: (2.662112ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57006]
I0211 18:53:40.004150  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-11: (1.338475ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56998]
I0211 18:53:40.004155  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-24: (1.012703ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57006]
I0211 18:53:40.004487  123569 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0211 18:53:40.004643  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-26
I0211 18:53:40.004684  123569 scheduler.go:453] Attempting to schedule pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-26
I0211 18:53:40.004787  123569 factory.go:647] Unable to schedule preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-26: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0211 18:53:40.004832  123569 factory.go:742] Updating pod condition for preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-26 to (PodScheduled==False, Reason=Unschedulable)
I0211 18:53:40.005853  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-12: (1.19547ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56998]
I0211 18:53:40.007331  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events: (1.598389ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57010]
I0211 18:53:40.007776  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-26: (1.536349ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56998]
I0211 18:53:40.007794  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-13: (1.469768ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57012]
I0211 18:53:40.007777  123569 wrap.go:47] PUT /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-26/status: (2.62838ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57004]
I0211 18:53:40.009663  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-14: (1.443269ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57012]
I0211 18:53:40.009926  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-26: (1.376076ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57004]
I0211 18:53:40.010395  123569 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0211 18:53:40.010679  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-25
I0211 18:53:40.010705  123569 scheduler.go:453] Attempting to schedule pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-25
I0211 18:53:40.010810  123569 factory.go:647] Unable to schedule preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-25: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0211 18:53:40.010859  123569 factory.go:742] Updating pod condition for preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-25 to (PodScheduled==False, Reason=Unschedulable)
I0211 18:53:40.011322  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-15: (1.212646ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57012]
I0211 18:53:40.013104  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-16: (1.349667ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57012]
I0211 18:53:40.013405  123569 wrap.go:47] PUT /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-25/status: (2.312912ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57004]
I0211 18:53:40.013444  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-25: (2.208649ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57010]
I0211 18:53:40.013680  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events: (2.178883ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57014]
I0211 18:53:40.015270  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-25: (1.115643ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57004]
I0211 18:53:40.015272  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-17: (1.682776ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57012]
I0211 18:53:40.015546  123569 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0211 18:53:40.015744  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-0
I0211 18:53:40.015763  123569 scheduler.go:453] Attempting to schedule pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-0
I0211 18:53:40.015878  123569 factory.go:647] Unable to schedule preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-0: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0211 18:53:40.015942  123569 factory.go:742] Updating pod condition for preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-0 to (PodScheduled==False, Reason=Unschedulable)
I0211 18:53:40.016913  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-18: (1.142633ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57004]
I0211 18:53:40.018531  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-0: (1.830579ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57010]
I0211 18:53:40.019054  123569 wrap.go:47] PUT /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-0/status: (2.356434ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57016]
I0211 18:53:40.019374  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events: (2.051982ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57018]
I0211 18:53:40.019390  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-19: (2.071131ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57004]
I0211 18:53:40.020854  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-0: (1.321463ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57016]
I0211 18:53:40.021237  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-20: (1.221128ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57018]
I0211 18:53:40.021250  123569 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0211 18:53:40.021435  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-2
I0211 18:53:40.021449  123569 scheduler.go:453] Attempting to schedule pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-2
I0211 18:53:40.021629  123569 factory.go:647] Unable to schedule preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-2: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0211 18:53:40.021681  123569 factory.go:742] Updating pod condition for preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-2 to (PodScheduled==False, Reason=Unschedulable)
I0211 18:53:40.024689  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events: (2.424604ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57022]
I0211 18:53:40.024776  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-21: (3.034449ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57016]
I0211 18:53:40.024822  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-2: (2.967087ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57010]
I0211 18:53:40.025336  123569 wrap.go:47] PUT /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-2/status: (3.071571ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57020]
I0211 18:53:40.026920  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-2: (1.045145ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57022]
I0211 18:53:40.026933  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-22: (1.426271ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57016]
I0211 18:53:40.027226  123569 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0211 18:53:40.027374  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-8
I0211 18:53:40.027397  123569 scheduler.go:453] Attempting to schedule pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-8
I0211 18:53:40.027698  123569 factory.go:647] Unable to schedule preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-8: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0211 18:53:40.027740  123569 factory.go:742] Updating pod condition for preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-8 to (PodScheduled==False, Reason=Unschedulable)
I0211 18:53:40.029145  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-23: (1.63383ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57022]
I0211 18:53:40.029979  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-8: (1.557997ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57024]
I0211 18:53:40.030858  123569 wrap.go:47] PUT /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-8/status: (2.442501ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57026]
I0211 18:53:40.031037  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-24: (1.390638ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57022]
I0211 18:53:40.031897  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events: (3.899009ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57010]
I0211 18:53:40.032399  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-8: (1.135792ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57026]
I0211 18:53:40.032675  123569 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0211 18:53:40.032837  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-18
I0211 18:53:40.032855  123569 scheduler.go:453] Attempting to schedule pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-18
I0211 18:53:40.032958  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-25: (1.407363ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57024]
I0211 18:53:40.032960  123569 factory.go:647] Unable to schedule preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-18: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0211 18:53:40.033003  123569 factory.go:742] Updating pod condition for preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-18 to (PodScheduled==False, Reason=Unschedulable)
I0211 18:53:40.035268  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events: (1.514208ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57030]
I0211 18:53:40.035748  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-18: (2.474186ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57026]
I0211 18:53:40.035857  123569 wrap.go:47] PUT /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-18/status: (2.324323ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57010]
I0211 18:53:40.035748  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-26: (2.008336ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57028]
I0211 18:53:40.037835  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-18: (1.396782ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57010]
I0211 18:53:40.038059  123569 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0211 18:53:40.038192  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-27: (1.736292ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57030]
I0211 18:53:40.038279  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-8
I0211 18:53:40.038302  123569 scheduler.go:453] Attempting to schedule pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-8
I0211 18:53:40.038391  123569 factory.go:647] Unable to schedule preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-8: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0211 18:53:40.038444  123569 factory.go:742] Updating pod condition for preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-8 to (PodScheduled==False, Reason=Unschedulable)
I0211 18:53:40.040234  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-8: (1.586371ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57010]
I0211 18:53:40.040668  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-28: (1.812397ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57026]
I0211 18:53:40.041940  123569 wrap.go:47] PUT /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-8/status: (2.823615ms) 409 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57032]
I0211 18:53:40.042383  123569 wrap.go:47] PATCH /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events/ppod-8.1582640af72527ce: (2.943512ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57034]
I0211 18:53:40.042389  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-29: (1.21182ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57026]
I0211 18:53:40.043890  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-8: (1.474564ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57032]
I0211 18:53:40.044126  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-30: (1.164662ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57034]
I0211 18:53:40.044144  123569 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0211 18:53:40.044325  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-18
I0211 18:53:40.044344  123569 scheduler.go:453] Attempting to schedule pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-18
I0211 18:53:40.044460  123569 factory.go:647] Unable to schedule preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-18: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0211 18:53:40.044510  123569 factory.go:742] Updating pod condition for preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-18 to (PodScheduled==False, Reason=Unschedulable)
I0211 18:53:40.045985  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-31: (1.399119ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57032]
I0211 18:53:40.046378  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-18: (1.232114ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57036]
I0211 18:53:40.047158  123569 wrap.go:47] PUT /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-18/status: (2.221029ms) 409 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57010]
I0211 18:53:40.047950  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-32: (1.449125ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57032]
I0211 18:53:40.048824  123569 wrap.go:47] PATCH /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events/ppod-18.1582640af77574e0: (3.394909ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57038]
I0211 18:53:40.048955  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-18: (1.375118ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57010]
I0211 18:53:40.049279  123569 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0211 18:53:40.049491  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-23
I0211 18:53:40.049509  123569 scheduler.go:453] Attempting to schedule pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-23
I0211 18:53:40.049648  123569 factory.go:647] Unable to schedule preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-23: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0211 18:53:40.049736  123569 factory.go:742] Updating pod condition for preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-23 to (PodScheduled==False, Reason=Unschedulable)
I0211 18:53:40.050788  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-33: (2.342692ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57032]
I0211 18:53:40.052099  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-23: (2.130882ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57036]
I0211 18:53:40.052201  123569 wrap.go:47] PUT /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-23/status: (2.227709ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57038]
I0211 18:53:40.052593  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-34: (1.279897ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57032]
I0211 18:53:40.053864  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-23: (1.203606ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57036]
I0211 18:53:40.054230  123569 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0211 18:53:40.054368  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-35: (1.104052ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57032]
I0211 18:53:40.054344  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events: (4.020386ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57040]
I0211 18:53:40.054810  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-20
I0211 18:53:40.054830  123569 scheduler.go:453] Attempting to schedule pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-20
I0211 18:53:40.054967  123569 factory.go:647] Unable to schedule preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-20: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0211 18:53:40.055028  123569 factory.go:742] Updating pod condition for preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-20 to (PodScheduled==False, Reason=Unschedulable)
I0211 18:53:40.055847  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-36: (1.054869ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57036]
I0211 18:53:40.056679  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-20: (1.206049ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57042]
I0211 18:53:40.058521  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-37: (1.497464ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57036]
I0211 18:53:40.058702  123569 wrap.go:47] PUT /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-20/status: (2.321512ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57038]
I0211 18:53:40.059040  123569 wrap.go:47] PATCH /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events/ppod-20.1582640ae2657c08: (2.949564ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57044]
I0211 18:53:40.060349  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-38: (1.19243ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57042]
I0211 18:53:40.060354  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-20: (1.278267ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57036]
I0211 18:53:40.060711  123569 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0211 18:53:40.060950  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-13
I0211 18:53:40.060996  123569 scheduler.go:453] Attempting to schedule pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-13
I0211 18:53:40.061293  123569 factory.go:647] Unable to schedule preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-13: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0211 18:53:40.061362  123569 factory.go:742] Updating pod condition for preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-13 to (PodScheduled==False, Reason=Unschedulable)
I0211 18:53:40.062300  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-39: (1.481342ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57042]
I0211 18:53:40.062818  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-13: (1.099566ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57044]
I0211 18:53:40.064386  123569 wrap.go:47] PATCH /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events/ppod-13.1582640ae195f72b: (2.239984ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57048]
I0211 18:53:40.065122  123569 wrap.go:47] PUT /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-13/status: (2.468253ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57046]
I0211 18:53:40.066095  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-40: (3.304965ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57042]
I0211 18:53:40.067487  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-13: (1.80556ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57048]
I0211 18:53:40.067873  123569 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0211 18:53:40.068021  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-3
I0211 18:53:40.068041  123569 scheduler.go:453] Attempting to schedule pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-3
I0211 18:53:40.068201  123569 factory.go:647] Unable to schedule preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-3: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0211 18:53:40.068272  123569 factory.go:742] Updating pod condition for preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-3 to (PodScheduled==False, Reason=Unschedulable)
I0211 18:53:40.069127  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-41: (2.401085ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57042]
I0211 18:53:40.070807  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-42: (1.330071ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57042]
I0211 18:53:40.072021  123569 wrap.go:47] PATCH /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events/ppod-3.1582640adfd00c4e: (2.822805ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57050]
I0211 18:53:40.072562  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-43: (1.065327ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57042]
I0211 18:53:40.073040  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-3: (4.516552ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57044]
I0211 18:53:40.074724  123569 wrap.go:47] PUT /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-3/status: (6.229313ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57048]
I0211 18:53:40.076750  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-3: (1.468152ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57048]
I0211 18:53:40.076967  123569 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0211 18:53:40.077039  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-44: (1.40331ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57042]
I0211 18:53:40.077144  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-21
I0211 18:53:40.077163  123569 scheduler.go:453] Attempting to schedule pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-21
I0211 18:53:40.077273  123569 factory.go:647] Unable to schedule preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-21: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0211 18:53:40.077376  123569 factory.go:742] Updating pod condition for preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-21 to (PodScheduled==False, Reason=Unschedulable)
I0211 18:53:40.079520  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events: (1.409817ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57052]
I0211 18:53:40.080251  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-45: (2.795347ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57048]
I0211 18:53:40.080280  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-21: (2.68012ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57050]
I0211 18:53:40.082543  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-46: (1.088185ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57048]
I0211 18:53:40.083995  123569 wrap.go:47] PUT /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-21/status: (2.695287ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57052]
I0211 18:53:40.085657  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-21: (1.177188ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57052]
I0211 18:53:40.085937  123569 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0211 18:53:40.086375  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-22
I0211 18:53:40.086403  123569 scheduler.go:453] Attempting to schedule pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-22
I0211 18:53:40.086508  123569 factory.go:647] Unable to schedule preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-22: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0211 18:53:40.086382  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-47: (3.033462ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57048]
I0211 18:53:40.086560  123569 factory.go:742] Updating pod condition for preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-22 to (PodScheduled==False, Reason=Unschedulable)
I0211 18:53:40.088460  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-22: (1.244522ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57054]
I0211 18:53:40.088919  123569 wrap.go:47] PUT /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-22/status: (1.971518ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57052]
I0211 18:53:40.089717  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events: (2.344395ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57056]
I0211 18:53:40.090640  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-22: (1.330818ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57052]
I0211 18:53:40.090922  123569 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0211 18:53:40.091084  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-5
I0211 18:53:40.091123  123569 scheduler.go:453] Attempting to schedule pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-5
I0211 18:53:40.091220  123569 factory.go:647] Unable to schedule preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-5: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0211 18:53:40.091278  123569 factory.go:742] Updating pod condition for preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-5 to (PodScheduled==False, Reason=Unschedulable)
I0211 18:53:40.093125  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-5: (1.346604ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57054]
I0211 18:53:40.093823  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events: (1.670392ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57058]
I0211 18:53:40.094103  123569 wrap.go:47] PUT /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-5/status: (2.596302ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57056]
I0211 18:53:40.094855  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-48: (7.944213ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57050]
I0211 18:53:40.095868  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-5: (1.320291ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57058]
I0211 18:53:40.096302  123569 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0211 18:53:40.096478  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-10
I0211 18:53:40.096503  123569 scheduler.go:453] Attempting to schedule pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-10
I0211 18:53:40.096638  123569 factory.go:647] Unable to schedule preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-10: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0211 18:53:40.096695  123569 factory.go:742] Updating pod condition for preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-10 to (PodScheduled==False, Reason=Unschedulable)
I0211 18:53:40.097987  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-49: (2.759051ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57050]
I0211 18:53:40.099036  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-10: (1.813239ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57054]
I0211 18:53:40.099466  123569 wrap.go:47] PUT /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-10/status: (2.173575ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57058]
I0211 18:53:40.099923  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events: (1.489287ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57060]
I0211 18:53:40.100627  123569 preemption_test.go:598] Cleaning up all pods...
I0211 18:53:40.101300  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-10: (1.257867ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57058]
I0211 18:53:40.101762  123569 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0211 18:53:40.101953  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-7
I0211 18:53:40.102001  123569 scheduler.go:453] Attempting to schedule pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-7
I0211 18:53:40.102139  123569 factory.go:647] Unable to schedule preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-7: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0211 18:53:40.102233  123569 factory.go:742] Updating pod condition for preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-7 to (PodScheduled==False, Reason=Unschedulable)
I0211 18:53:40.104384  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-7: (1.440466ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57054]
I0211 18:53:40.105660  123569 wrap.go:47] PATCH /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events/ppod-7.1582640ae04fb56e: (2.572032ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57062]
I0211 18:53:40.106134  123569 wrap.go:47] DELETE /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-0: (5.213791ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57060]
I0211 18:53:40.107562  123569 wrap.go:47] PUT /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-7/status: (3.175376ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57058]
I0211 18:53:40.110388  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-7: (1.63597ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57058]
I0211 18:53:40.111212  123569 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0211 18:53:40.111503  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-17
I0211 18:53:40.111526  123569 scheduler.go:453] Attempting to schedule pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-17
I0211 18:53:40.111665  123569 factory.go:647] Unable to schedule preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-17: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0211 18:53:40.111713  123569 factory.go:742] Updating pod condition for preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-17 to (PodScheduled==False, Reason=Unschedulable)
I0211 18:53:40.111890  123569 wrap.go:47] DELETE /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-1: (4.990772ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57062]
I0211 18:53:40.113703  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-17: (1.66041ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57054]
I0211 18:53:40.113901  123569 wrap.go:47] PUT /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-17/status: (1.905287ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57058]
I0211 18:53:40.114088  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events: (1.381965ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57064]
I0211 18:53:40.116043  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-17: (1.328935ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57064]
I0211 18:53:40.116470  123569 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0211 18:53:40.116824  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-16
I0211 18:53:40.116858  123569 scheduler.go:453] Attempting to schedule pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-16
I0211 18:53:40.117125  123569 factory.go:647] Unable to schedule preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-16: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0211 18:53:40.117219  123569 factory.go:742] Updating pod condition for preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-16 to (PodScheduled==False, Reason=Unschedulable)
I0211 18:53:40.118744  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-16: (1.108066ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57064]
I0211 18:53:40.118938  123569 wrap.go:47] DELETE /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-2: (6.081778ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57062]
I0211 18:53:40.119409  123569 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 18:53:40.119482  123569 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 18:53:40.119422  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events: (1.601517ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57054]
I0211 18:53:40.119581  123569 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 18:53:40.119766  123569 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 18:53:40.119806  123569 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 18:53:40.121356  123569 wrap.go:47] PUT /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-16/status: (2.319387ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57066]
I0211 18:53:40.122877  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-16: (1.065032ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57054]
I0211 18:53:40.123103  123569 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0211 18:53:40.123382  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-4
I0211 18:53:40.123422  123569 scheduler.go:453] Attempting to schedule pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-4
I0211 18:53:40.123514  123569 factory.go:647] Unable to schedule preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-4: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0211 18:53:40.123559  123569 factory.go:742] Updating pod condition for preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-4 to (PodScheduled==False, Reason=Unschedulable)
I0211 18:53:40.124046  123569 wrap.go:47] DELETE /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-3: (4.111568ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57062]
I0211 18:53:40.126552  123569 wrap.go:47] PUT /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-4/status: (2.103206ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57054]
I0211 18:53:40.127004  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-4: (2.579942ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57064]
I0211 18:53:40.127581  123569 wrap.go:47] PATCH /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events/ppod-4.1582640adf6bceea: (2.678046ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57062]
I0211 18:53:40.130021  123569 wrap.go:47] DELETE /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-4: (5.161559ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57070]
I0211 18:53:40.130156  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-4: (1.047303ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57064]
E0211 18:53:40.130385  123569 scheduler.go:294] Error getting the updated preemptor pod object: pods "ppod-4" not found
I0211 18:53:40.130500  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-14
I0211 18:53:40.130512  123569 scheduler.go:453] Attempting to schedule pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-14
I0211 18:53:40.130637  123569 factory.go:647] Unable to schedule preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-14: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0211 18:53:40.130701  123569 factory.go:742] Updating pod condition for preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-14 to (PodScheduled==False, Reason=Unschedulable)
I0211 18:53:40.133424  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events: (2.163979ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57068]
I0211 18:53:40.133901  123569 wrap.go:47] PUT /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-14/status: (2.636598ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57064]
I0211 18:53:40.134930  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-14: (1.51539ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57072]
I0211 18:53:40.135267  123569 wrap.go:47] DELETE /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-5: (4.928673ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57070]
I0211 18:53:40.136127  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-14: (1.717032ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57064]
I0211 18:53:40.136400  123569 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0211 18:53:40.136640  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-15
I0211 18:53:40.136699  123569 scheduler.go:453] Attempting to schedule pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-15
I0211 18:53:40.136922  123569 factory.go:647] Unable to schedule preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-15: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0211 18:53:40.137015  123569 factory.go:742] Updating pod condition for preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-15 to (PodScheduled==False, Reason=Unschedulable)
I0211 18:53:40.138708  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events: (1.270639ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57064]
I0211 18:53:40.139536  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-15: (1.660307ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57068]
I0211 18:53:40.140566  123569 wrap.go:47] PUT /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-15/status: (2.569813ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57074]
I0211 18:53:40.140793  123569 wrap.go:47] DELETE /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-6: (5.100693ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57072]
I0211 18:53:40.142727  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-15: (1.61078ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57068]
I0211 18:53:40.143035  123569 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0211 18:53:40.143302  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-17
I0211 18:53:40.143359  123569 scheduler.go:453] Attempting to schedule pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-17
I0211 18:53:40.143560  123569 factory.go:647] Unable to schedule preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-17: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0211 18:53:40.143659  123569 factory.go:742] Updating pod condition for preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-17 to (PodScheduled==False, Reason=Unschedulable)
I0211 18:53:40.145951  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-17: (1.062943ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57068]
I0211 18:53:40.147148  123569 wrap.go:47] PUT /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-17/status: (2.135435ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57076]
I0211 18:53:40.147211  123569 wrap.go:47] DELETE /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-7: (6.036044ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57064]
I0211 18:53:40.148433  123569 wrap.go:47] PATCH /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events/ppod-17.1582640afc267648: (3.449101ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57078]
I0211 18:53:40.148836  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-17: (978.358µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57068]
I0211 18:53:40.149118  123569 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0211 18:53:40.149330  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-25
I0211 18:53:40.149352  123569 scheduler.go:453] Attempting to schedule pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-25
I0211 18:53:40.149445  123569 factory.go:647] Unable to schedule preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-25: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0211 18:53:40.149506  123569 factory.go:742] Updating pod condition for preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-25 to (PodScheduled==False, Reason=Unschedulable)
I0211 18:53:40.152268  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-25: (2.200843ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57068]
I0211 18:53:40.153015  123569 wrap.go:47] PATCH /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events/ppod-25.1582640af6238e5b: (2.651376ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57080]
I0211 18:53:40.153488  123569 wrap.go:47] PUT /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-25/status: (2.727512ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57078]
I0211 18:53:40.153596  123569 wrap.go:47] DELETE /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-8: (5.911464ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57064]
I0211 18:53:40.155461  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-25: (1.46012ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57068]
I0211 18:53:40.156296  123569 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0211 18:53:40.156628  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-23
I0211 18:53:40.156678  123569 scheduler.go:453] Attempting to schedule pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-23
I0211 18:53:40.157096  123569 factory.go:647] Unable to schedule preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-23: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0211 18:53:40.157636  123569 factory.go:742] Updating pod condition for preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-23 to (PodScheduled==False, Reason=Unschedulable)
I0211 18:53:40.159691  123569 wrap.go:47] DELETE /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-9: (5.743261ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57080]
I0211 18:53:40.160277  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-23: (1.536048ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57068]
I0211 18:53:40.163227  123569 wrap.go:47] PATCH /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events/ppod-23.1582640af8743194: (3.086309ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57080]
I0211 18:53:40.164778  123569 wrap.go:47] PUT /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-23/status: (2.747237ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57084]
I0211 18:53:40.165876  123569 wrap.go:47] DELETE /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-10: (5.291534ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57068]
I0211 18:53:40.166667  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-23: (1.250213ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57080]
I0211 18:53:40.167131  123569 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0211 18:53:40.167361  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-21
I0211 18:53:40.167377  123569 scheduler.go:453] Attempting to schedule pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-21
I0211 18:53:40.167524  123569 factory.go:647] Unable to schedule preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-21: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0211 18:53:40.167587  123569 factory.go:742] Updating pod condition for preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-21 to (PodScheduled==False, Reason=Unschedulable)
I0211 18:53:40.169655  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-21: (1.622999ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57082]
I0211 18:53:40.170752  123569 wrap.go:47] PUT /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-21/status: (2.843026ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57080]
I0211 18:53:40.173043  123569 wrap.go:47] PATCH /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events/ppod-21.1582640afa1a6d43: (4.248972ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57086]
I0211 18:53:40.173676  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-21: (2.463056ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57080]
I0211 18:53:40.174384  123569 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0211 18:53:40.174633  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-11
I0211 18:53:40.174697  123569 scheduler.go:449] Skip schedule deleting pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-11
I0211 18:53:40.174888  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-26
I0211 18:53:40.174913  123569 scheduler.go:453] Attempting to schedule pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-26
I0211 18:53:40.175048  123569 factory.go:647] Unable to schedule preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-26: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0211 18:53:40.175100  123569 factory.go:742] Updating pod condition for preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-26 to (PodScheduled==False, Reason=Unschedulable)
I0211 18:53:40.176087  123569 wrap.go:47] DELETE /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-11: (9.665903ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57068]
I0211 18:53:40.177251  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events: (2.250943ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57086]
I0211 18:53:40.178020  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-26: (2.691327ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57082]
I0211 18:53:40.181892  123569 wrap.go:47] PUT /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-26/status: (3.977126ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57068]
I0211 18:53:40.182487  123569 wrap.go:47] PATCH /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events/ppod-26.1582640af5c79572: (3.642737ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57090]
I0211 18:53:40.182929  123569 wrap.go:47] DELETE /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-12: (5.499253ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57088]
I0211 18:53:40.183965  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-26: (1.507293ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57082]
I0211 18:53:40.184613  123569 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0211 18:53:40.184814  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-14
I0211 18:53:40.184839  123569 scheduler.go:453] Attempting to schedule pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-14
I0211 18:53:40.184928  123569 factory.go:647] Unable to schedule preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-14: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0211 18:53:40.184978  123569 factory.go:742] Updating pod condition for preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-14 to (PodScheduled==False, Reason=Unschedulable)
I0211 18:53:40.188894  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-14: (2.613108ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57086]
I0211 18:53:40.188938  123569 wrap.go:47] PUT /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-14/status: (2.643685ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57082]
I0211 18:53:40.189591  123569 wrap.go:47] PATCH /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events/ppod-14.1582640afd47f315: (2.980463ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57092]
I0211 18:53:40.190716  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-14: (1.215694ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57086]
I0211 18:53:40.191285  123569 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0211 18:53:40.191425  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-15
I0211 18:53:40.191448  123569 scheduler.go:453] Attempting to schedule pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-15
I0211 18:53:40.191520  123569 factory.go:647] Unable to schedule preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-15: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0211 18:53:40.191575  123569 factory.go:742] Updating pod condition for preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-15 to (PodScheduled==False, Reason=Unschedulable)
I0211 18:53:40.196061  123569 wrap.go:47] PATCH /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events/ppod-15.1582640afda84b4e: (3.243279ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57094]
I0211 18:53:40.196414  123569 wrap.go:47] DELETE /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-13: (13.14314ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57090]
I0211 18:53:40.196937  123569 wrap.go:47] PUT /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-15/status: (5.05601ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57092]
I0211 18:53:40.197480  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-15: (5.057337ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57082]
I0211 18:53:40.199569  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-15: (2.075659ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57092]
I0211 18:53:40.199904  123569 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0211 18:53:40.200085  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-16
I0211 18:53:40.200128  123569 scheduler.go:453] Attempting to schedule pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-16
I0211 18:53:40.200315  123569 factory.go:647] Unable to schedule preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-16: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0211 18:53:40.200398  123569 factory.go:742] Updating pod condition for preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-16 to (PodScheduled==False, Reason=Unschedulable)
I0211 18:53:40.202110  123569 wrap.go:47] DELETE /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-14: (5.375186ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57090]
I0211 18:53:40.203416  123569 wrap.go:47] PUT /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-16/status: (1.988154ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57082]
I0211 18:53:40.203949  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-16: (2.691858ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57094]
I0211 18:53:40.204453  123569 wrap.go:47] PATCH /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events/ppod-16.1582640afc7a6ef6: (2.34683ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57096]
I0211 18:53:40.205054  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-16: (1.211006ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57082]
I0211 18:53:40.205535  123569 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0211 18:53:40.205707  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-22
I0211 18:53:40.205735  123569 scheduler.go:453] Attempting to schedule pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-22
I0211 18:53:40.205806  123569 factory.go:647] Unable to schedule preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-22: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0211 18:53:40.205869  123569 factory.go:742] Updating pod condition for preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-22 to (PodScheduled==False, Reason=Unschedulable)
I0211 18:53:40.208637  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-22: (1.7032ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57094]
I0211 18:53:40.208759  123569 wrap.go:47] DELETE /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-15: (6.253281ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57090]
I0211 18:53:40.209988  123569 wrap.go:47] PATCH /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events/ppod-22.1582640afaa69d35: (2.75523ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57098]
I0211 18:53:40.213101  123569 wrap.go:47] PUT /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-22/status: (6.131205ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57096]
I0211 18:53:40.215274  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-22: (1.69013ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57098]
I0211 18:53:40.215369  123569 wrap.go:47] DELETE /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-16: (6.220372ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57094]
I0211 18:53:40.215692  123569 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0211 18:53:40.219060  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-17
I0211 18:53:40.219132  123569 scheduler.go:449] Skip schedule deleting pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-17
I0211 18:53:40.220715  123569 wrap.go:47] DELETE /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-17: (4.929591ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57094]
I0211 18:53:40.221165  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events: (1.651771ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57090]
I0211 18:53:40.224196  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-18
I0211 18:53:40.224247  123569 scheduler.go:449] Skip schedule deleting pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-18
I0211 18:53:40.225511  123569 wrap.go:47] DELETE /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-18: (4.433303ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57094]
I0211 18:53:40.226748  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events: (2.019779ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57090]
I0211 18:53:40.228535  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-19
I0211 18:53:40.228575  123569 scheduler.go:449] Skip schedule deleting pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-19
I0211 18:53:40.231027  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events: (2.185796ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57090]
I0211 18:53:40.231083  123569 wrap.go:47] DELETE /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-19: (5.249503ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57094]
I0211 18:53:40.234452  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-20
I0211 18:53:40.234493  123569 scheduler.go:449] Skip schedule deleting pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-20
I0211 18:53:40.235892  123569 wrap.go:47] DELETE /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-20: (4.423362ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57094]
I0211 18:53:40.236659  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events: (1.778166ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57090]
I0211 18:53:40.239225  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-21
I0211 18:53:40.239267  123569 scheduler.go:449] Skip schedule deleting pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-21
I0211 18:53:40.240321  123569 wrap.go:47] DELETE /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-21: (4.076552ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57094]
I0211 18:53:40.241237  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events: (1.672199ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57090]
I0211 18:53:40.243300  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-22
I0211 18:53:40.243327  123569 scheduler.go:449] Skip schedule deleting pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-22
I0211 18:53:40.244970  123569 wrap.go:47] DELETE /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-22: (4.240558ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57094]
I0211 18:53:40.245288  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events: (1.736676ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57090]
I0211 18:53:40.249004  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-23
I0211 18:53:40.249047  123569 scheduler.go:449] Skip schedule deleting pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-23
I0211 18:53:40.250629  123569 wrap.go:47] DELETE /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-23: (5.342289ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57094]
I0211 18:53:40.251026  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events: (1.646539ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57090]
I0211 18:53:40.255923  123569 wrap.go:47] DELETE /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-24: (4.930978ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57094]
I0211 18:53:40.255925  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-24
I0211 18:53:40.256052  123569 scheduler.go:449] Skip schedule deleting pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-24
I0211 18:53:40.258126  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events: (1.749009ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57090]
I0211 18:53:40.260246  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-25
I0211 18:53:40.260322  123569 scheduler.go:449] Skip schedule deleting pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-25
I0211 18:53:40.261265  123569 wrap.go:47] DELETE /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-25: (4.473805ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57094]
I0211 18:53:40.263165  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events: (2.442075ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57090]
I0211 18:53:40.264761  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-26
I0211 18:53:40.264809  123569 scheduler.go:449] Skip schedule deleting pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-26
I0211 18:53:40.266992  123569 wrap.go:47] DELETE /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-26: (5.349555ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57094]
I0211 18:53:40.267140  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events: (2.053531ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57090]
I0211 18:53:40.270445  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-27
I0211 18:53:40.270476  123569 scheduler.go:449] Skip schedule deleting pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-27
I0211 18:53:40.271873  123569 wrap.go:47] DELETE /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-27: (4.265897ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57094]
I0211 18:53:40.272255  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events: (1.505022ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57090]
I0211 18:53:40.275738  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-28
I0211 18:53:40.275850  123569 scheduler.go:449] Skip schedule deleting pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-28
I0211 18:53:40.277529  123569 wrap.go:47] DELETE /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-28: (5.063542ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57094]
I0211 18:53:40.278662  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events: (2.541145ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57090]
I0211 18:53:40.281683  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-29
I0211 18:53:40.281762  123569 scheduler.go:449] Skip schedule deleting pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-29
I0211 18:53:40.283142  123569 wrap.go:47] DELETE /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-29: (5.099449ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57094]
I0211 18:53:40.286010  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events: (3.937718ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57090]
I0211 18:53:40.287657  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-30
I0211 18:53:40.287700  123569 scheduler.go:449] Skip schedule deleting pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-30
I0211 18:53:40.289106  123569 wrap.go:47] DELETE /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-30: (5.61908ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57094]
I0211 18:53:40.291565  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events: (3.523435ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57090]
I0211 18:53:40.292767  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-31
I0211 18:53:40.292808  123569 scheduler.go:449] Skip schedule deleting pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-31
I0211 18:53:40.296093  123569 wrap.go:47] DELETE /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-31: (6.147614ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57094]
I0211 18:53:40.296525  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events: (3.13793ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57090]
I0211 18:53:40.301279  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-32
I0211 18:53:40.301380  123569 scheduler.go:449] Skip schedule deleting pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-32
I0211 18:53:40.303325  123569 wrap.go:47] DELETE /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-32: (6.75674ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57094]
I0211 18:53:40.304141  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events: (2.367024ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57090]
I0211 18:53:40.307448  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-33
I0211 18:53:40.307541  123569 scheduler.go:449] Skip schedule deleting pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-33
I0211 18:53:40.311391  123569 wrap.go:47] DELETE /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-33: (7.70272ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57094]
I0211 18:53:40.312591  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events: (4.704307ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57090]
I0211 18:53:40.314938  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-34
I0211 18:53:40.315010  123569 scheduler.go:449] Skip schedule deleting pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-34
I0211 18:53:40.316390  123569 wrap.go:47] DELETE /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-34: (4.55791ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57094]
I0211 18:53:40.317138  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events: (1.698219ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57090]
I0211 18:53:40.320650  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-35
I0211 18:53:40.320703  123569 scheduler.go:449] Skip schedule deleting pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-35
I0211 18:53:40.322824  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events: (1.814669ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57090]
I0211 18:53:40.323398  123569 wrap.go:47] DELETE /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-35: (6.377622ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57094]
I0211 18:53:40.331006  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-36
I0211 18:53:40.331055  123569 scheduler.go:449] Skip schedule deleting pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-36
I0211 18:53:40.333661  123569 wrap.go:47] DELETE /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-36: (9.435452ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57094]
I0211 18:53:40.333978  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events: (2.552324ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57090]
I0211 18:53:40.337744  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-37
I0211 18:53:40.337791  123569 scheduler.go:449] Skip schedule deleting pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-37
I0211 18:53:40.339727  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events: (1.591972ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57090]
I0211 18:53:40.340513  123569 wrap.go:47] DELETE /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-37: (6.435712ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57094]
I0211 18:53:40.344296  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-38
I0211 18:53:40.344377  123569 scheduler.go:449] Skip schedule deleting pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-38
I0211 18:53:40.346044  123569 wrap.go:47] DELETE /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-38: (5.092232ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57094]
I0211 18:53:40.348650  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events: (3.700297ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57090]
I0211 18:53:40.350345  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-39
I0211 18:53:40.350386  123569 scheduler.go:449] Skip schedule deleting pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-39
I0211 18:53:40.352077  123569 wrap.go:47] DELETE /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-39: (4.954206ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57094]
I0211 18:53:40.352127  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events: (1.451656ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57090]
I0211 18:53:40.355369  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-40
I0211 18:53:40.355414  123569 scheduler.go:449] Skip schedule deleting pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-40
I0211 18:53:40.357754  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events: (1.741877ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57094]
I0211 18:53:40.358210  123569 wrap.go:47] DELETE /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-40: (5.730161ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57090]
I0211 18:53:40.361783  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-41
I0211 18:53:40.361829  123569 scheduler.go:449] Skip schedule deleting pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-41
I0211 18:53:40.363355  123569 wrap.go:47] DELETE /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-41: (4.75455ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57090]
I0211 18:53:40.364524  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events: (2.280549ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57094]
I0211 18:53:40.367331  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-42
I0211 18:53:40.367403  123569 scheduler.go:449] Skip schedule deleting pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-42
I0211 18:53:40.369538  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events: (1.776246ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57094]
I0211 18:53:40.370388  123569 wrap.go:47] DELETE /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-42: (6.625179ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57090]
I0211 18:53:40.374444  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-43
I0211 18:53:40.374623  123569 scheduler.go:449] Skip schedule deleting pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-43
I0211 18:53:40.377232  123569 wrap.go:47] DELETE /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-43: (6.260174ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57090]
I0211 18:53:40.379064  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events: (3.704109ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57094]
I0211 18:53:40.381940  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-44
I0211 18:53:40.381988  123569 scheduler.go:449] Skip schedule deleting pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-44
I0211 18:53:40.384048  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events: (1.66598ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57094]
I0211 18:53:40.388825  123569 wrap.go:47] DELETE /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-44: (11.098621ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57090]
I0211 18:53:40.392987  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-45
I0211 18:53:40.393060  123569 scheduler.go:449] Skip schedule deleting pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-45
I0211 18:53:40.400869  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events: (7.277785ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57094]
I0211 18:53:40.404956  123569 wrap.go:47] DELETE /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-45: (15.670087ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57090]
I0211 18:53:40.408843  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-46
I0211 18:53:40.408883  123569 scheduler.go:449] Skip schedule deleting pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-46
I0211 18:53:40.411239  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events: (1.615307ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57094]
I0211 18:53:40.411317  123569 wrap.go:47] DELETE /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-46: (5.912622ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57090]
I0211 18:53:40.415220  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-47
I0211 18:53:40.415267  123569 scheduler.go:449] Skip schedule deleting pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-47
I0211 18:53:40.416826  123569 wrap.go:47] DELETE /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-47: (4.904036ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57090]
I0211 18:53:40.417734  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events: (2.023973ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57094]
I0211 18:53:40.420968  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-48
I0211 18:53:40.421074  123569 scheduler.go:449] Skip schedule deleting pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-48
I0211 18:53:40.422718  123569 wrap.go:47] DELETE /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-48: (5.454853ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57090]
I0211 18:53:40.423085  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events: (1.650852ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57094]
I0211 18:53:40.426302  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-49
I0211 18:53:40.426347  123569 scheduler.go:449] Skip schedule deleting pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-49
I0211 18:53:40.428007  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events: (1.420179ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57094]
I0211 18:53:40.428252  123569 wrap.go:47] DELETE /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-49: (5.152516ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57090]
I0211 18:53:40.433206  123569 wrap.go:47] DELETE /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/rpod-0: (4.460467ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57090]
I0211 18:53:40.434524  123569 wrap.go:47] DELETE /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/rpod-1: (983.861µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57090]
I0211 18:53:40.439069  123569 wrap.go:47] DELETE /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/preemptor-pod: (4.181646ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57090]
I0211 18:53:40.441776  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-0: (1.068417ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57090]
I0211 18:53:40.444477  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-1: (1.102599ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57090]
I0211 18:53:40.447332  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-2: (1.117514ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57090]
I0211 18:53:40.450141  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-3: (1.115976ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57090]
I0211 18:53:40.453069  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-4: (1.182183ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57090]
I0211 18:53:40.455877  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-5: (1.174774ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57090]
I0211 18:53:40.458801  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-6: (1.226212ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57090]
I0211 18:53:40.461717  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-7: (1.25444ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57090]
I0211 18:53:40.464348  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-8: (1.057031ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57090]
I0211 18:53:40.467163  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-9: (1.141866ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57090]
I0211 18:53:40.470319  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-10: (1.271786ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57090]
I0211 18:53:40.472880  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-11: (974.558µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57090]
I0211 18:53:40.475791  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-12: (1.20988ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57090]
I0211 18:53:40.478963  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-13: (1.525599ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57090]
I0211 18:53:40.482009  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-14: (1.253015ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57090]
I0211 18:53:40.484752  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-15: (1.052595ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57090]
I0211 18:53:40.487439  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-16: (1.090781ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57090]
I0211 18:53:40.490330  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-17: (1.071037ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57090]
I0211 18:53:40.493356  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-18: (1.336125ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57090]
I0211 18:53:40.495938  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-19: (1.012298ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57090]
I0211 18:53:40.498794  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-20: (1.262539ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57090]
I0211 18:53:40.501870  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-21: (1.159066ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57090]
I0211 18:53:40.508325  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-22: (4.677292ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57090]
I0211 18:53:40.513097  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-23: (2.534171ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57090]
I0211 18:53:40.515812  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-24: (986.235µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57090]
I0211 18:53:40.518915  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-25: (1.023877ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57090]
I0211 18:53:40.524322  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-26: (1.664101ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57090]
I0211 18:53:40.527994  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-27: (1.245571ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57090]
I0211 18:53:40.533664  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-28: (3.550548ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57090]
I0211 18:53:40.540668  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-29: (1.564745ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57090]
I0211 18:53:40.544655  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-30: (2.246644ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57090]
I0211 18:53:40.549365  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-31: (1.689892ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57090]
I0211 18:53:40.552257  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-32: (1.342111ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57090]
I0211 18:53:40.556310  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-33: (1.227932ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57090]
I0211 18:53:40.559107  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-34: (1.260137ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57090]
I0211 18:53:40.561859  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-35: (1.127231ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57090]
I0211 18:53:40.564807  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-36: (1.254214ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57090]
I0211 18:53:40.568119  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-37: (1.360808ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57090]
I0211 18:53:40.571160  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-38: (1.256293ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57090]
I0211 18:53:40.574213  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-39: (1.307438ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57090]
I0211 18:53:40.576807  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-40: (941.985µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57090]
I0211 18:53:40.579530  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-41: (1.066485ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57090]
I0211 18:53:40.582139  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-42: (891.644µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57090]
I0211 18:53:40.584918  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-43: (1.049277ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57090]
I0211 18:53:40.587498  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-44: (965.148µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57090]
I0211 18:53:40.590233  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-45: (1.080666ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57090]
I0211 18:53:40.592941  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-46: (1.036539ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57090]
I0211 18:53:40.595845  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-47: (1.076539ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57090]
I0211 18:53:40.598591  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-48: (1.122575ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57090]
I0211 18:53:40.601891  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-49: (1.432355ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57090]
I0211 18:53:40.605188  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/rpod-0: (1.574023ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57090]
I0211 18:53:40.608555  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/rpod-1: (1.494173ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57090]
I0211 18:53:40.612529  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/preemptor-pod: (1.57128ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57090]
I0211 18:53:40.615751  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods: (2.575108ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57090]
I0211 18:53:40.615978  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/rpod-0
I0211 18:53:40.616080  123569 scheduler.go:453] Attempting to schedule pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/rpod-0
I0211 18:53:40.616254  123569 scheduler_binder.go:269] AssumePodVolumes for pod "preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/rpod-0", node "node1"
I0211 18:53:40.616321  123569 scheduler_binder.go:279] AssumePodVolumes for pod "preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/rpod-0", node "node1": all PVCs bound and nothing to do
I0211 18:53:40.616429  123569 factory.go:733] Attempting to bind rpod-0 to node1
I0211 18:53:40.618431  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/rpod-0/binding: (1.686557ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57094]
I0211 18:53:40.618643  123569 scheduler.go:571] pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/rpod-0 is bound successfully on node node1, 1 nodes evaluated, 1 nodes were found feasible
I0211 18:53:40.619265  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/rpod-1
I0211 18:53:40.619302  123569 scheduler.go:453] Attempting to schedule pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/rpod-1
I0211 18:53:40.619419  123569 scheduler_binder.go:269] AssumePodVolumes for pod "preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/rpod-1", node "node1"
I0211 18:53:40.619434  123569 scheduler_binder.go:279] AssumePodVolumes for pod "preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/rpod-1", node "node1": all PVCs bound and nothing to do
I0211 18:53:40.619478  123569 factory.go:733] Attempting to bind rpod-1 to node1
I0211 18:53:40.619872  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods: (3.567753ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57090]
I0211 18:53:40.621916  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events: (2.867932ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57094]
I0211 18:53:40.622528  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/rpod-1/binding: (2.58626ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57254]
I0211 18:53:40.622970  123569 scheduler.go:571] pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/rpod-1 is bound successfully on node node1, 1 nodes evaluated, 1 nodes were found feasible
I0211 18:53:40.625006  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events: (1.765703ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57094]
I0211 18:53:40.723114  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/rpod-0: (2.237817ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57094]
I0211 18:53:40.825776  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/rpod-1: (1.783761ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57094]
I0211 18:53:40.826479  123569 preemption_test.go:561] Creating the preemptor pod...
I0211 18:53:40.829075  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods: (2.123583ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57094]
I0211 18:53:40.829245  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/preemptor-pod
I0211 18:53:40.829260  123569 scheduler.go:453] Attempting to schedule pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/preemptor-pod
I0211 18:53:40.829388  123569 factory.go:647] Unable to schedule preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0211 18:53:40.829440  123569 factory.go:742] Updating pod condition for preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0211 18:53:40.829823  123569 preemption_test.go:567] Creating additional pods...
I0211 18:53:40.832975  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/preemptor-pod: (2.044353ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57258]
I0211 18:53:40.833057  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events: (2.644995ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57094]
I0211 18:53:40.832984  123569 wrap.go:47] PUT /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/preemptor-pod/status: (2.765751ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57090]
I0211 18:53:40.834343  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods: (2.367255ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57260]
I0211 18:53:40.834630  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/preemptor-pod: (1.02943ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57090]
I0211 18:53:40.834895  123569 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0211 18:53:40.837195  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods: (2.375877ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57258]
I0211 18:53:40.838385  123569 wrap.go:47] PUT /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/preemptor-pod/status: (3.071787ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57090]
I0211 18:53:40.840226  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods: (1.468077ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57258]
I0211 18:53:40.842213  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods: (1.620739ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57258]
I0211 18:53:40.844918  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods: (1.695112ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57258]
I0211 18:53:40.847052  123569 wrap.go:47] DELETE /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/rpod-1: (7.677087ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57090]
I0211 18:53:40.847654  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/preemptor-pod
I0211 18:53:40.847672  123569 scheduler.go:453] Attempting to schedule pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/preemptor-pod
I0211 18:53:40.847801  123569 scheduler_binder.go:269] AssumePodVolumes for pod "preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/preemptor-pod", node "node1"
I0211 18:53:40.847813  123569 scheduler_binder.go:279] AssumePodVolumes for pod "preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/preemptor-pod", node "node1": all PVCs bound and nothing to do
I0211 18:53:40.847865  123569 factory.go:733] Attempting to bind preemptor-pod to node1
I0211 18:53:40.847995  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-4
I0211 18:53:40.848009  123569 scheduler.go:453] Attempting to schedule pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-4
I0211 18:53:40.848155  123569 factory.go:647] Unable to schedule preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-4: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0211 18:53:40.848231  123569 factory.go:742] Updating pod condition for preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-4 to (PodScheduled==False, Reason=Unschedulable)
I0211 18:53:40.849084  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods: (3.050309ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57258]
I0211 18:53:40.851016  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-4: (2.056818ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57262]
I0211 18:53:40.851502  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/preemptor-pod/binding: (3.410659ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57090]
I0211 18:53:40.851566  123569 wrap.go:47] PUT /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-4/status: (3.120614ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57094]
I0211 18:53:40.851860  123569 scheduler.go:571] pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/preemptor-pod is bound successfully on node node1, 1 nodes evaluated, 1 nodes were found feasible
I0211 18:53:40.859332  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events: (8.548503ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57264]
I0211 18:53:40.869038  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods: (19.037147ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57258]
I0211 18:53:40.886643  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods: (6.790596ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57258]
I0211 18:53:40.889559  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-4: (10.982898ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57094]
I0211 18:53:40.892061  123569 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0211 18:53:40.894390  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-3
I0211 18:53:40.894420  123569 scheduler.go:453] Attempting to schedule pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-3
I0211 18:53:40.894496  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods: (4.684471ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57258]
I0211 18:53:40.894613  123569 factory.go:647] Unable to schedule preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-3: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0211 18:53:40.894691  123569 factory.go:742] Updating pod condition for preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-3 to (PodScheduled==False, Reason=Unschedulable)
I0211 18:53:40.894998  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events: (13.28767ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57262]
I0211 18:53:40.899533  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-3: (4.07301ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57094]
I0211 18:53:40.902802  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods: (6.976244ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57262]
I0211 18:53:40.904990  123569 wrap.go:47] PUT /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-3/status: (9.568657ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57258]
I0211 18:53:40.911360  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events: (13.71732ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57268]
I0211 18:53:40.916047  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-3: (7.632063ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57258]
I0211 18:53:40.916566  123569 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0211 18:53:40.917259  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods: (13.569416ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57094]
I0211 18:53:40.923237  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods: (4.005148ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57258]
I0211 18:53:40.928376  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods: (4.437615ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57258]
I0211 18:53:40.929352  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events: (16.349599ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57268]
I0211 18:53:40.936968  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-7
I0211 18:53:40.937031  123569 scheduler.go:453] Attempting to schedule pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-7
I0211 18:53:40.937359  123569 factory.go:647] Unable to schedule preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-7: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0211 18:53:40.937460  123569 factory.go:742] Updating pod condition for preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-7 to (PodScheduled==False, Reason=Unschedulable)
I0211 18:53:40.941504  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods: (7.401078ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57258]
I0211 18:53:40.942874  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events: (2.912068ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57272]
I0211 18:53:40.943407  123569 wrap.go:47] PUT /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-7/status: (5.413245ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57266]
I0211 18:53:40.947953  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-7: (9.126123ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57270]
I0211 18:53:40.948792  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods: (5.934605ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57258]
I0211 18:53:40.949468  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-7: (5.464235ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57266]
I0211 18:53:40.951641  123569 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0211 18:53:40.964414  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-12
I0211 18:53:40.964435  123569 scheduler.go:453] Attempting to schedule pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-12
I0211 18:53:40.964696  123569 factory.go:647] Unable to schedule preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-12: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0211 18:53:40.964813  123569 factory.go:742] Updating pod condition for preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-12 to (PodScheduled==False, Reason=Unschedulable)
I0211 18:53:40.967816  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods: (14.28953ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57270]
I0211 18:53:40.970533  123569 wrap.go:47] PUT /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-12/status: (5.152716ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57272]
I0211 18:53:40.973290  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-12: (1.670603ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57272]
I0211 18:53:40.973733  123569 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0211 18:53:40.974393  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events: (7.982449ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57276]
I0211 18:53:40.977851  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods: (7.354893ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57270]
I0211 18:53:40.980782  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-14
I0211 18:53:40.980805  123569 scheduler.go:453] Attempting to schedule pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-14
I0211 18:53:40.980987  123569 factory.go:647] Unable to schedule preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-14: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0211 18:53:40.981060  123569 factory.go:742] Updating pod condition for preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-14 to (PodScheduled==False, Reason=Unschedulable)
I0211 18:53:40.988422  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events: (6.143846ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57280]
I0211 18:53:40.989024  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-12: (6.704481ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57274]
I0211 18:53:40.989351  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-14: (6.397255ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57278]
I0211 18:53:40.993138  123569 wrap.go:47] PUT /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-14/status: (10.801324ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57272]
I0211 18:53:40.995728  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods: (15.997316ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57276]
I0211 18:53:40.996449  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-14: (1.734629ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57274]
I0211 18:53:40.996750  123569 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0211 18:53:40.997041  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-16
I0211 18:53:40.997068  123569 scheduler.go:453] Attempting to schedule pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-16
I0211 18:53:40.997213  123569 factory.go:647] Unable to schedule preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-16: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0211 18:53:40.997284  123569 factory.go:742] Updating pod condition for preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-16 to (PodScheduled==False, Reason=Unschedulable)
I0211 18:53:40.999465  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods: (1.820725ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57276]
I0211 18:53:40.999494  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events: (1.775727ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57280]
I0211 18:53:41.001270  123569 wrap.go:47] PUT /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-16/status: (3.7397ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57274]
I0211 18:53:41.002128  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods: (2.107788ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57276]
I0211 18:53:41.004000  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-16: (2.241734ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57274]
I0211 18:53:41.004396  123569 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0211 18:53:41.004703  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-17
I0211 18:53:41.004726  123569 scheduler.go:453] Attempting to schedule pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-17
I0211 18:53:41.005964  123569 factory.go:647] Unable to schedule preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-17: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0211 18:53:41.006053  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods: (1.920484ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57276]
I0211 18:53:41.006112  123569 factory.go:742] Updating pod condition for preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-17 to (PodScheduled==False, Reason=Unschedulable)
I0211 18:53:41.008522  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-17: (1.475145ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57276]
I0211 18:53:41.009304  123569 wrap.go:47] PUT /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-17/status: (2.904067ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57274]
I0211 18:53:41.009535  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events: (1.795265ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57284]
I0211 18:53:41.010482  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-16: (10.100649ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57280]
I0211 18:53:41.012897  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-17: (3.146357ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57274]
I0211 18:53:41.012938  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods: (4.606561ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57282]
I0211 18:53:41.013366  123569 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0211 18:53:41.013770  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-0
I0211 18:53:41.013850  123569 scheduler.go:453] Attempting to schedule pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-0
I0211 18:53:41.014026  123569 factory.go:647] Unable to schedule preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-0: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0211 18:53:41.014116  123569 factory.go:742] Updating pod condition for preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-0 to (PodScheduled==False, Reason=Unschedulable)
I0211 18:53:41.032682  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events: (17.134509ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57288]
I0211 18:53:41.032735  123569 wrap.go:47] PUT /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-0/status: (17.604324ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57276]
I0211 18:53:41.033015  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-0: (16.083051ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57286]
I0211 18:53:41.032740  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods: (17.615127ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57284]
I0211 18:53:41.035285  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-0: (1.709715ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57276]
I0211 18:53:41.035654  123569 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0211 18:53:41.035950  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods: (2.204611ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57288]
I0211 18:53:41.035958  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-2
I0211 18:53:41.036128  123569 scheduler.go:453] Attempting to schedule pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-2
I0211 18:53:41.036270  123569 factory.go:647] Unable to schedule preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-2: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0211 18:53:41.036369  123569 factory.go:742] Updating pod condition for preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-2 to (PodScheduled==False, Reason=Unschedulable)
I0211 18:53:41.038661  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods: (2.023224ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57288]
I0211 18:53:41.039219  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events: (2.104328ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57292]
I0211 18:53:41.040241  123569 wrap.go:47] PUT /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-2/status: (3.594688ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57276]
I0211 18:53:41.040435  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods: (1.346986ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57288]
I0211 18:53:41.043057  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-2: (2.258482ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57276]
I0211 18:53:41.043346  123569 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0211 18:53:41.043667  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-8
I0211 18:53:41.043698  123569 scheduler.go:453] Attempting to schedule pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-8
I0211 18:53:41.043968  123569 factory.go:647] Unable to schedule preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-8: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0211 18:53:41.044242  123569 factory.go:742] Updating pod condition for preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-8 to (PodScheduled==False, Reason=Unschedulable)
I0211 18:53:41.044764  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods: (3.519054ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57288]
I0211 18:53:41.053229  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events: (8.684589ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57292]
I0211 18:53:41.053533  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-8: (9.150929ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57276]
I0211 18:53:41.053730  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-2: (16.453606ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57290]
I0211 18:53:41.054246  123569 wrap.go:47] PUT /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-8/status: (9.202635ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57288]
I0211 18:53:41.055664  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods: (9.087474ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57294]
I0211 18:53:41.056347  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-8: (1.55914ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57292]
I0211 18:53:41.056578  123569 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0211 18:53:41.056852  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-12
I0211 18:53:41.056924  123569 scheduler.go:453] Attempting to schedule pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-12
I0211 18:53:41.057233  123569 factory.go:647] Unable to schedule preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-12: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0211 18:53:41.057318  123569 factory.go:742] Updating pod condition for preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-12 to (PodScheduled==False, Reason=Unschedulable)
I0211 18:53:41.058356  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods: (2.029519ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57294]
I0211 18:53:41.059087  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-12: (1.376092ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57276]
I0211 18:53:41.059350  123569 backoff_utils.go:79] Backing off 2s
I0211 18:53:41.059946  123569 wrap.go:47] PUT /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-12/status: (2.362651ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57292]
I0211 18:53:41.060544  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods: (1.566586ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57294]
I0211 18:53:41.061271  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-12: (970.606µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57292]
I0211 18:53:41.061520  123569 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0211 18:53:41.061737  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-18
I0211 18:53:41.061759  123569 scheduler.go:453] Attempting to schedule pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-18
I0211 18:53:41.061846  123569 wrap.go:47] PATCH /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events/ppod-12.1582640b2efeedbe: (3.562966ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57296]
I0211 18:53:41.061861  123569 factory.go:647] Unable to schedule preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-18: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0211 18:53:41.062027  123569 factory.go:742] Updating pod condition for preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-18 to (PodScheduled==False, Reason=Unschedulable)
I0211 18:53:41.063287  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods: (1.99972ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57294]
I0211 18:53:41.064119  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-18: (1.558241ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57292]
I0211 18:53:41.066087  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events: (2.816258ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57296]
I0211 18:53:41.066125  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods: (2.327968ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57294]
I0211 18:53:41.066228  123569 wrap.go:47] PUT /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-18/status: (3.654339ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57276]
I0211 18:53:41.068332  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-18: (1.262417ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57276]
I0211 18:53:41.068367  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods: (1.58879ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57296]
I0211 18:53:41.068729  123569 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0211 18:53:41.068914  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-3
I0211 18:53:41.069001  123569 scheduler.go:453] Attempting to schedule pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-3
I0211 18:53:41.069113  123569 factory.go:647] Unable to schedule preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-3: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0211 18:53:41.069199  123569 factory.go:742] Updating pod condition for preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-3 to (PodScheduled==False, Reason=Unschedulable)
I0211 18:53:41.070399  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-3: (1.043642ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57292]
I0211 18:53:41.072346  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods: (3.126999ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57276]
I0211 18:53:41.072946  123569 wrap.go:47] PATCH /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events/ppod-3.1582640b2ad1a68a: (2.439587ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57300]
I0211 18:53:41.073335  123569 wrap.go:47] PUT /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-3/status: (3.326963ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57298]
I0211 18:53:41.074547  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods: (1.775809ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57276]
I0211 18:53:41.074866  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-3: (1.042009ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57300]
I0211 18:53:41.075245  123569 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0211 18:53:41.075425  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-18
I0211 18:53:41.075438  123569 scheduler.go:453] Attempting to schedule pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-18
I0211 18:53:41.075531  123569 factory.go:647] Unable to schedule preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-18: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0211 18:53:41.075570  123569 factory.go:742] Updating pod condition for preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-18 to (PodScheduled==False, Reason=Unschedulable)
I0211 18:53:41.077383  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-18: (962.495µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57302]
I0211 18:53:41.077383  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods: (2.10019ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57276]
I0211 18:53:41.078237  123569 wrap.go:47] PUT /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-18/status: (1.884811ms) 409 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57292]
I0211 18:53:41.079424  123569 wrap.go:47] PATCH /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events/ppod-18.1582640b34cadc35: (2.650606ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57304]
I0211 18:53:41.079973  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-18: (1.326368ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57292]
I0211 18:53:41.080097  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods: (1.595475ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57276]
I0211 18:53:41.080364  123569 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0211 18:53:41.080714  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-16
I0211 18:53:41.080728  123569 scheduler.go:453] Attempting to schedule pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-16
I0211 18:53:41.080810  123569 factory.go:647] Unable to schedule preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-16: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0211 18:53:41.080844  123569 factory.go:742] Updating pod condition for preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-16 to (PodScheduled==False, Reason=Unschedulable)
I0211 18:53:41.084728  123569 wrap.go:47] PATCH /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events/ppod-16.1582640b30ef1968: (2.662234ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57308]
I0211 18:53:41.085530  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods: (5.002925ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57304]
I0211 18:53:41.085551  123569 wrap.go:47] PUT /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-16/status: (3.513316ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57302]
I0211 18:53:41.086717  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-16: (4.685169ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57306]
I0211 18:53:41.087053  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-16: (1.09041ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57308]
I0211 18:53:41.087520  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods: (1.553662ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57304]
I0211 18:53:41.089138  123569 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0211 18:53:41.089397  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-5
I0211 18:53:41.089458  123569 scheduler.go:453] Attempting to schedule pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-5
I0211 18:53:41.089650  123569 factory.go:647] Unable to schedule preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-5: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0211 18:53:41.089705  123569 factory.go:742] Updating pod condition for preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-5 to (PodScheduled==False, Reason=Unschedulable)
I0211 18:53:41.091564  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods: (2.098153ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57304]
I0211 18:53:41.092431  123569 cacher.go:633] cacher (*core.Pod): 2 objects queued in incoming channel.
I0211 18:53:41.093703  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events: (2.44681ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57310]
I0211 18:53:41.094592  123569 wrap.go:47] PUT /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-5/status: (3.847491ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57306]
I0211 18:53:41.094657  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods: (2.608009ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57312]
I0211 18:53:41.094678  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-5: (2.721637ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57304]
I0211 18:53:41.096754  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-5: (1.211007ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57306]
I0211 18:53:41.097053  123569 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0211 18:53:41.097226  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-9
I0211 18:53:41.097247  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods: (1.58314ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57310]
I0211 18:53:41.097271  123569 scheduler.go:453] Attempting to schedule pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-9
I0211 18:53:41.097752  123569 factory.go:647] Unable to schedule preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-9: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0211 18:53:41.097806  123569 factory.go:742] Updating pod condition for preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-9 to (PodScheduled==False, Reason=Unschedulable)
I0211 18:53:41.099277  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-9: (1.211239ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57306]
I0211 18:53:41.099907  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods: (1.770279ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57310]
I0211 18:53:41.100500  123569 wrap.go:47] PUT /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-9/status: (1.483589ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57314]
I0211 18:53:41.101208  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events: (2.688338ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57316]
I0211 18:53:41.101967  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods: (1.687181ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57310]
I0211 18:53:41.102116  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-9: (1.258729ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57314]
I0211 18:53:41.102939  123569 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0211 18:53:41.103115  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-10
I0211 18:53:41.103135  123569 scheduler.go:453] Attempting to schedule pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-10
I0211 18:53:41.103270  123569 factory.go:647] Unable to schedule preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-10: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0211 18:53:41.103318  123569 factory.go:742] Updating pod condition for preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-10 to (PodScheduled==False, Reason=Unschedulable)
I0211 18:53:41.109536  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods: (6.446563ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57316]
I0211 18:53:41.109566  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-10: (6.041206ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57306]
I0211 18:53:41.109547  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events: (5.619862ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57320]
I0211 18:53:41.109636  123569 wrap.go:47] PUT /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-10/status: (5.928411ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57318]
I0211 18:53:41.111917  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods: (1.885461ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57306]
I0211 18:53:41.112419  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-10: (1.190111ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57318]
I0211 18:53:41.112708  123569 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0211 18:53:41.112870  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-1
I0211 18:53:41.112895  123569 scheduler.go:453] Attempting to schedule pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-1
I0211 18:53:41.113022  123569 factory.go:647] Unable to schedule preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-1: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0211 18:53:41.113083  123569 factory.go:742] Updating pod condition for preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-1 to (PodScheduled==False, Reason=Unschedulable)
I0211 18:53:41.114321  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods: (1.940786ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57306]
I0211 18:53:41.115235  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events: (1.268132ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57322]
I0211 18:53:41.115533  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-1: (2.01203ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57318]
I0211 18:53:41.115809  123569 wrap.go:47] PUT /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-1/status: (2.460705ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57316]
I0211 18:53:41.116535  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods: (1.686622ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57306]
I0211 18:53:41.117810  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-1: (1.340491ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57318]
I0211 18:53:41.118107  123569 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0211 18:53:41.118313  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-4
I0211 18:53:41.118336  123569 scheduler.go:453] Attempting to schedule pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-4
I0211 18:53:41.118415  123569 factory.go:647] Unable to schedule preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-4: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0211 18:53:41.118523  123569 factory.go:742] Updating pod condition for preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-4 to (PodScheduled==False, Reason=Unschedulable)
I0211 18:53:41.119733  123569 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 18:53:41.120098  123569 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 18:53:41.120105  123569 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 18:53:41.120122  123569 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 18:53:41.120203  123569 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 18:53:41.120525  123569 wrap.go:47] PUT /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-4/status: (1.792647ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57318]
I0211 18:53:41.120555  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-4: (1.656105ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57322]
I0211 18:53:41.120987  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods: (3.973844ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57306]
I0211 18:53:41.122072  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-4: (1.092064ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57318]
I0211 18:53:41.122320  123569 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0211 18:53:41.122660  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-11
I0211 18:53:41.122710  123569 scheduler.go:453] Attempting to schedule pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-11
I0211 18:53:41.122718  123569 wrap.go:47] PATCH /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events/ppod-4.1582640b280cb6d6: (3.258876ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57324]
I0211 18:53:41.122892  123569 factory.go:647] Unable to schedule preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-11: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0211 18:53:41.122928  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods: (1.523479ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57306]
I0211 18:53:41.122996  123569 factory.go:742] Updating pod condition for preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-11 to (PodScheduled==False, Reason=Unschedulable)
I0211 18:53:41.124798  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-11: (1.543167ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57322]
I0211 18:53:41.146865  123569 wrap.go:47] PUT /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-11/status: (23.599183ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57318]
I0211 18:53:41.147902  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events: (22.571415ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57326]
I0211 18:53:41.149232  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-11: (1.524608ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57318]
I0211 18:53:41.149621  123569 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0211 18:53:41.149857  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-13
I0211 18:53:41.149880  123569 scheduler.go:453] Attempting to schedule pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-13
I0211 18:53:41.150008  123569 factory.go:647] Unable to schedule preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-13: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0211 18:53:41.150076  123569 factory.go:742] Updating pod condition for preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-13 to (PodScheduled==False, Reason=Unschedulable)
I0211 18:53:41.151670  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-13: (1.30733ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57326]
I0211 18:53:41.152226  123569 wrap.go:47] PUT /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-13/status: (1.831019ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57322]
I0211 18:53:41.152720  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events: (1.97964ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57328]
I0211 18:53:41.153887  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-13: (1.239412ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57322]
I0211 18:53:41.154236  123569 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0211 18:53:41.154444  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-6
I0211 18:53:41.154462  123569 scheduler.go:453] Attempting to schedule pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-6
I0211 18:53:41.154561  123569 factory.go:647] Unable to schedule preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-6: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0211 18:53:41.154645  123569 factory.go:742] Updating pod condition for preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-6 to (PodScheduled==False, Reason=Unschedulable)
I0211 18:53:41.156286  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-6: (1.400797ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57326]
I0211 18:53:41.156813  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events: (1.504397ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57330]
I0211 18:53:41.157116  123569 wrap.go:47] PUT /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-6/status: (2.232735ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57328]
I0211 18:53:41.159088  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-6: (1.500785ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57330]
I0211 18:53:41.159375  123569 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0211 18:53:41.159592  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-7
I0211 18:53:41.159637  123569 scheduler.go:453] Attempting to schedule pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-7
I0211 18:53:41.159756  123569 factory.go:647] Unable to schedule preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-7: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0211 18:53:41.159819  123569 factory.go:742] Updating pod condition for preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-7 to (PodScheduled==False, Reason=Unschedulable)
I0211 18:53:41.162503  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-7: (2.40805ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57330]
I0211 18:53:41.162555  123569 wrap.go:47] PUT /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-7/status: (2.503179ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57326]
I0211 18:53:41.162904  123569 wrap.go:47] PATCH /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events/ppod-7.1582640b2d5e22d0: (2.229681ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57332]
I0211 18:53:41.164226  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-7: (1.224962ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57326]
I0211 18:53:41.164539  123569 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0211 18:53:41.164790  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-15
I0211 18:53:41.164812  123569 scheduler.go:453] Attempting to schedule pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-15
I0211 18:53:41.164937  123569 factory.go:647] Unable to schedule preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-15: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0211 18:53:41.165066  123569 factory.go:742] Updating pod condition for preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-15 to (PodScheduled==False, Reason=Unschedulable)
I0211 18:53:41.166511  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-15: (1.137966ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57330]
I0211 18:53:41.167068  123569 wrap.go:47] PUT /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-15/status: (1.700569ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57332]
I0211 18:53:41.167271  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events: (1.566203ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57334]
I0211 18:53:41.168624  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-15: (1.093221ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57332]
I0211 18:53:41.169011  123569 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0211 18:53:41.169210  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-15
I0211 18:53:41.169223  123569 scheduler.go:453] Attempting to schedule pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-15
I0211 18:53:41.169322  123569 factory.go:647] Unable to schedule preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-15: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0211 18:53:41.169379  123569 factory.go:742] Updating pod condition for preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-15 to (PodScheduled==False, Reason=Unschedulable)
I0211 18:53:41.170795  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-15: (1.158959ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57330]
I0211 18:53:41.171941  123569 wrap.go:47] PUT /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-15/status: (2.312745ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57332]
I0211 18:53:41.172445  123569 wrap.go:47] PATCH /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events/ppod-15.1582640b3aef44f1: (2.297766ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57336]
I0211 18:53:41.173978  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-15: (1.376845ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57332]
I0211 18:53:41.175328  123569 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0211 18:53:41.175654  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-4
I0211 18:53:41.175699  123569 scheduler.go:453] Attempting to schedule pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-4
I0211 18:53:41.175818  123569 factory.go:647] Unable to schedule preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-4: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0211 18:53:41.175882  123569 factory.go:742] Updating pod condition for preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-4 to (PodScheduled==False, Reason=Unschedulable)
I0211 18:53:41.177484  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-4: (1.325109ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57330]
I0211 18:53:41.179273  123569 wrap.go:47] PUT /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-4/status: (3.133538ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57336]
I0211 18:53:41.179340  123569 wrap.go:47] PATCH /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events/ppod-4.1582640b280cb6d6: (2.709237ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57338]
I0211 18:53:41.181043  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-4: (1.345914ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57336]
I0211 18:53:41.181370  123569 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0211 18:53:41.181580  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-49
I0211 18:53:41.181614  123569 scheduler.go:453] Attempting to schedule pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-49
I0211 18:53:41.181755  123569 factory.go:647] Unable to schedule preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-49: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0211 18:53:41.181817  123569 factory.go:742] Updating pod condition for preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-49 to (PodScheduled==False, Reason=Unschedulable)
I0211 18:53:41.183885  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events: (1.495415ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57340]
I0211 18:53:41.183950  123569 wrap.go:47] PUT /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-49/status: (1.923299ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57336]
I0211 18:53:41.184355  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-49: (2.315668ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57330]
I0211 18:53:41.185503  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-49: (1.066255ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57336]
I0211 18:53:41.185828  123569 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0211 18:53:41.185989  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-3
I0211 18:53:41.186010  123569 scheduler.go:453] Attempting to schedule pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-3
I0211 18:53:41.186127  123569 factory.go:647] Unable to schedule preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-3: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0211 18:53:41.186218  123569 factory.go:742] Updating pod condition for preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-3 to (PodScheduled==False, Reason=Unschedulable)
I0211 18:53:41.187824  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-3: (1.335057ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57340]
I0211 18:53:41.188024  123569 wrap.go:47] PUT /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-3/status: (1.527947ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57330]
I0211 18:53:41.188054  123569 backoff_utils.go:79] Backing off 2s
I0211 18:53:41.189736  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-3: (1.187764ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57330]
I0211 18:53:41.189970  123569 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0211 18:53:41.190047  123569 wrap.go:47] PATCH /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events/ppod-3.1582640b2ad1a68a: (2.524109ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57342]
I0211 18:53:41.190101  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-49
I0211 18:53:41.190122  123569 scheduler.go:453] Attempting to schedule pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-49
I0211 18:53:41.190224  123569 factory.go:647] Unable to schedule preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-49: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0211 18:53:41.190264  123569 factory.go:742] Updating pod condition for preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-49 to (PodScheduled==False, Reason=Unschedulable)
I0211 18:53:41.191484  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-49: (979.371µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57340]
I0211 18:53:41.191961  123569 wrap.go:47] PUT /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-49/status: (1.505765ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57330]
I0211 18:53:41.193322  123569 wrap.go:47] PATCH /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events/ppod-49.1582640b3beeec74: (2.382587ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57344]
I0211 18:53:41.193897  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-49: (1.492761ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57330]
I0211 18:53:41.194328  123569 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0211 18:53:41.194465  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-48
I0211 18:53:41.194483  123569 scheduler.go:453] Attempting to schedule pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-48
I0211 18:53:41.194550  123569 factory.go:647] Unable to schedule preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-48: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0211 18:53:41.194595  123569 factory.go:742] Updating pod condition for preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-48 to (PodScheduled==False, Reason=Unschedulable)
I0211 18:53:41.196421  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-48: (1.105906ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57340]
I0211 18:53:41.196938  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events: (1.371928ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57346]
I0211 18:53:41.198551  123569 wrap.go:47] PUT /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-48/status: (3.200922ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57344]
I0211 18:53:41.200083  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-48: (1.149638ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57346]
I0211 18:53:41.200341  123569 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0211 18:53:41.200522  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-46
I0211 18:53:41.200539  123569 scheduler.go:453] Attempting to schedule pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-46
I0211 18:53:41.200659  123569 factory.go:647] Unable to schedule preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-46: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0211 18:53:41.200713  123569 factory.go:742] Updating pod condition for preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-46 to (PodScheduled==False, Reason=Unschedulable)
I0211 18:53:41.202944  123569 wrap.go:47] PUT /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-46/status: (1.89658ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57346]
I0211 18:53:41.203071  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events: (1.347834ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57340]
I0211 18:53:41.204657  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-46: (1.255081ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57340]
I0211 18:53:41.204657  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-46: (1.219729ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57346]
I0211 18:53:41.204925  123569 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0211 18:53:41.205077  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-48
I0211 18:53:41.205098  123569 scheduler.go:453] Attempting to schedule pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-48
I0211 18:53:41.205234  123569 factory.go:647] Unable to schedule preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-48: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0211 18:53:41.205293  123569 factory.go:742] Updating pod condition for preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-48 to (PodScheduled==False, Reason=Unschedulable)
I0211 18:53:41.206560  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-48: (1.042893ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57340]
I0211 18:53:41.207326  123569 wrap.go:47] PUT /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-48/status: (1.813151ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57346]
I0211 18:53:41.208384  123569 wrap.go:47] PATCH /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events/ppod-48.1582640b3cb1e1c9: (2.229269ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57348]
I0211 18:53:41.208986  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-48: (1.245522ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57346]
I0211 18:53:41.209345  123569 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0211 18:53:41.209493  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-46
I0211 18:53:41.209514  123569 scheduler.go:453] Attempting to schedule pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-46
I0211 18:53:41.209587  123569 factory.go:647] Unable to schedule preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-46: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0211 18:53:41.209639  123569 factory.go:742] Updating pod condition for preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-46 to (PodScheduled==False, Reason=Unschedulable)
I0211 18:53:41.211631  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-46: (1.167969ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57340]
I0211 18:53:41.212242  123569 wrap.go:47] PUT /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-46/status: (2.236636ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57348]
I0211 18:53:41.213488  123569 wrap.go:47] PATCH /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events/ppod-46.1582640b3d0f36dd: (2.483307ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57350]
I0211 18:53:41.213900  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-46: (1.164045ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57348]
I0211 18:53:41.214280  123569 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0211 18:53:41.214409  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-10
I0211 18:53:41.214423  123569 scheduler.go:453] Attempting to schedule pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-10
I0211 18:53:41.214497  123569 factory.go:647] Unable to schedule preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-10: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0211 18:53:41.214536  123569 factory.go:742] Updating pod condition for preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-10 to (PodScheduled==False, Reason=Unschedulable)
I0211 18:53:41.216288  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-10: (1.030321ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57340]
I0211 18:53:41.216903  123569 wrap.go:47] PUT /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-10/status: (2.089938ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57350]
I0211 18:53:41.217592  123569 wrap.go:47] PATCH /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events/ppod-10.1582640b37411f94: (2.168821ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57352]
I0211 18:53:41.218416  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-10: (1.072846ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57350]
I0211 18:53:41.218688  123569 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0211 18:53:41.218851  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-44
I0211 18:53:41.218872  123569 scheduler.go:453] Attempting to schedule pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-44
I0211 18:53:41.218985  123569 factory.go:647] Unable to schedule preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-44: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0211 18:53:41.219037  123569 factory.go:742] Updating pod condition for preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-44 to (PodScheduled==False, Reason=Unschedulable)
I0211 18:53:41.221082  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events: (1.561159ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57354]
I0211 18:53:41.221399  123569 wrap.go:47] PUT /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-44/status: (2.084834ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57340]
I0211 18:53:41.222965  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-44: (1.140987ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57340]
I0211 18:53:41.223269  123569 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0211 18:53:41.223493  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-42
I0211 18:53:41.223521  123569 scheduler.go:453] Attempting to schedule pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-42
I0211 18:53:41.223624  123569 factory.go:647] Unable to schedule preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-42: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0211 18:53:41.223677  123569 factory.go:742] Updating pod condition for preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-42 to (PodScheduled==False, Reason=Unschedulable)
I0211 18:53:41.225215  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/preemptor-pod: (1.842077ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57340]
I0211 18:53:41.226010  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events: (1.268723ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57358]
I0211 18:53:41.226530  123569 wrap.go:47] PUT /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-42/status: (2.516826ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57354]
I0211 18:53:41.227464  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-42: (2.69766ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57356]
I0211 18:53:41.227995  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-42: (1.010792ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57358]
I0211 18:53:41.228356  123569 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0211 18:53:41.228488  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-44: (9.240755ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57352]
I0211 18:53:41.228556  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-44
I0211 18:53:41.228903  123569 scheduler.go:453] Attempting to schedule pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-44
I0211 18:53:41.229036  123569 factory.go:647] Unable to schedule preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-44: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0211 18:53:41.229101  123569 factory.go:742] Updating pod condition for preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-44 to (PodScheduled==False, Reason=Unschedulable)
I0211 18:53:41.230656  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-44: (1.17649ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57356]
I0211 18:53:41.232410  123569 wrap.go:47] PATCH /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events/ppod-44.1582640b3e26eaca: (2.714955ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57340]
I0211 18:53:41.233047  123569 wrap.go:47] PUT /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-44/status: (2.113544ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57356]
I0211 18:53:41.234704  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-44: (1.193946ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57340]
I0211 18:53:41.235021  123569 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0211 18:53:41.235160  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-42
I0211 18:53:41.235204  123569 scheduler.go:453] Attempting to schedule pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-42
I0211 18:53:41.235286  123569 factory.go:647] Unable to schedule preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-42: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0211 18:53:41.235339  123569 factory.go:742] Updating pod condition for preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-42 to (PodScheduled==False, Reason=Unschedulable)
I0211 18:53:41.237472  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-42: (1.226072ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57360]
I0211 18:53:41.238767  123569 wrap.go:47] PATCH /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events/ppod-42.1582640b3e6da345: (2.425895ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57362]
I0211 18:53:41.238810  123569 wrap.go:47] PUT /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-42/status: (2.52195ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57340]
I0211 18:53:41.240363  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-42: (1.150885ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57362]
I0211 18:53:41.240693  123569 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0211 18:53:41.240887  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-44
I0211 18:53:41.240908  123569 scheduler.go:453] Attempting to schedule pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-44
I0211 18:53:41.241004  123569 factory.go:647] Unable to schedule preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-44: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0211 18:53:41.241060  123569 factory.go:742] Updating pod condition for preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-44 to (PodScheduled==False, Reason=Unschedulable)
I0211 18:53:41.242765  123569 wrap.go:47] PUT /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-44/status: (1.464455ms) 409 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57360]
I0211 18:53:41.242880  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-44: (1.588427ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57362]
I0211 18:53:41.244690  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-44: (1.548083ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57362]
I0211 18:53:41.244692  123569 wrap.go:47] PATCH /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events/ppod-44.1582640b3e26eaca: (2.759522ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57364]
I0211 18:53:41.244981  123569 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0211 18:53:41.245157  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-26
I0211 18:53:41.245202  123569 scheduler.go:453] Attempting to schedule pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-26
I0211 18:53:41.245429  123569 factory.go:647] Unable to schedule preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-26: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0211 18:53:41.245536  123569 factory.go:742] Updating pod condition for preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-26 to (PodScheduled==False, Reason=Unschedulable)
I0211 18:53:41.246884  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-26: (1.042003ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57360]
I0211 18:53:41.247828  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events: (1.685517ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57366]
I0211 18:53:41.248287  123569 wrap.go:47] PUT /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-26/status: (2.410471ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57362]
I0211 18:53:41.250150  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-26: (1.512845ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57366]
I0211 18:53:41.250431  123569 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0211 18:53:41.250558  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-41
I0211 18:53:41.250576  123569 scheduler.go:453] Attempting to schedule pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-41
I0211 18:53:41.250679  123569 factory.go:647] Unable to schedule preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-41: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0211 18:53:41.250731  123569 factory.go:742] Updating pod condition for preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-41 to (PodScheduled==False, Reason=Unschedulable)
I0211 18:53:41.252799  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-41: (1.327963ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57360]
I0211 18:53:41.253815  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events: (2.402765ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57368]
I0211 18:53:41.254303  123569 wrap.go:47] PUT /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-41/status: (3.346715ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57366]
I0211 18:53:41.256422  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-41: (1.161966ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57368]
I0211 18:53:41.256771  123569 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0211 18:53:41.256945  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-26
I0211 18:53:41.256962  123569 scheduler.go:453] Attempting to schedule pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-26
I0211 18:53:41.257065  123569 factory.go:647] Unable to schedule preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-26: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0211 18:53:41.257124  123569 factory.go:742] Updating pod condition for preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-26 to (PodScheduled==False, Reason=Unschedulable)
I0211 18:53:41.258641  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-26: (1.274489ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57368]
I0211 18:53:41.258846  123569 wrap.go:47] PUT /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-26/status: (1.455506ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57360]
I0211 18:53:41.260631  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-26: (1.393674ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57360]
I0211 18:53:41.260624  123569 wrap.go:47] PATCH /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events/ppod-26.1582640b3fbb252f: (2.660616ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57370]
I0211 18:53:41.260842  123569 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0211 18:53:41.261033  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-41
I0211 18:53:41.261053  123569 scheduler.go:453] Attempting to schedule pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-41
I0211 18:53:41.261163  123569 factory.go:647] Unable to schedule preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-41: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0211 18:53:41.261242  123569 factory.go:742] Updating pod condition for preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-41 to (PodScheduled==False, Reason=Unschedulable)
I0211 18:53:41.262795  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-41: (1.212704ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57368]
I0211 18:53:41.264101  123569 wrap.go:47] PUT /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-41/status: (2.42965ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57360]
I0211 18:53:41.265339  123569 wrap.go:47] PATCH /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events/ppod-41.1582640b400a76aa: (3.341221ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57372]
I0211 18:53:41.265957  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-41: (1.193074ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57360]
I0211 18:53:41.266271  123569 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0211 18:53:41.266451  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-39
I0211 18:53:41.266470  123569 scheduler.go:453] Attempting to schedule pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-39
I0211 18:53:41.266577  123569 factory.go:647] Unable to schedule preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-39: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0211 18:53:41.266650  123569 factory.go:742] Updating pod condition for preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-39 to (PodScheduled==False, Reason=Unschedulable)
I0211 18:53:41.268116  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-39: (1.251255ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57368]
I0211 18:53:41.268810  123569 wrap.go:47] PUT /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-39/status: (1.95504ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57372]
I0211 18:53:41.269294  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events: (1.983422ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57374]
I0211 18:53:41.270492  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-39: (1.062861ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57372]
I0211 18:53:41.270804  123569 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0211 18:53:41.271053  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-38
I0211 18:53:41.271077  123569 scheduler.go:453] Attempting to schedule pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-38
I0211 18:53:41.271211  123569 factory.go:647] Unable to schedule preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-38: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0211 18:53:41.271270  123569 factory.go:742] Updating pod condition for preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-38 to (PodScheduled==False, Reason=Unschedulable)
I0211 18:53:41.273367  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-38: (1.431029ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57368]
I0211 18:53:41.273370  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events: (1.372324ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57376]
I0211 18:53:41.274061  123569 wrap.go:47] PUT /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-38/status: (2.561414ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57374]
I0211 18:53:41.275866  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-38: (1.286126ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57376]
I0211 18:53:41.276125  123569 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0211 18:53:41.276330  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-39
I0211 18:53:41.276354  123569 scheduler.go:453] Attempting to schedule pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-39
I0211 18:53:41.276507  123569 factory.go:647] Unable to schedule preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-39: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0211 18:53:41.276569  123569 factory.go:742] Updating pod condition for preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-39 to (PodScheduled==False, Reason=Unschedulable)
I0211 18:53:41.278209  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-39: (1.187778ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57368]
I0211 18:53:41.278650  123569 wrap.go:47] PUT /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-39/status: (1.741606ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57376]
I0211 18:53:41.280228  123569 wrap.go:47] PATCH /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events/ppod-39.1582640b40fd5f1c: (2.761772ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57378]
I0211 18:53:41.280263  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-39: (1.204935ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57376]
I0211 18:53:41.280617  123569 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0211 18:53:41.280789  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-38
I0211 18:53:41.280811  123569 scheduler.go:453] Attempting to schedule pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-38
I0211 18:53:41.280934  123569 factory.go:647] Unable to schedule preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-38: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0211 18:53:41.280997  123569 factory.go:742] Updating pod condition for preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-38 to (PodScheduled==False, Reason=Unschedulable)
I0211 18:53:41.282849  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-38: (1.166024ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57368]
I0211 18:53:41.283614  123569 wrap.go:47] PUT /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-38/status: (1.908473ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57376]
I0211 18:53:41.284094  123569 wrap.go:47] PATCH /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events/ppod-38.1582640b4143dc11: (2.202568ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57380]
I0211 18:53:41.285116  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-38: (1.110217ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57376]
I0211 18:53:41.285411  123569 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0211 18:53:41.285575  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-37
I0211 18:53:41.285592  123569 scheduler.go:453] Attempting to schedule pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-37
I0211 18:53:41.285713  123569 factory.go:647] Unable to schedule preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-37: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0211 18:53:41.285757  123569 factory.go:742] Updating pod condition for preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-37 to (PodScheduled==False, Reason=Unschedulable)
I0211 18:53:41.287109  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-37: (1.094179ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57368]
I0211 18:53:41.288859  123569 wrap.go:47] PUT /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-37/status: (2.871963ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57380]
I0211 18:53:41.289125  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events: (2.4499ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57382]
I0211 18:53:41.290527  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-37: (1.266705ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57380]
I0211 18:53:41.290845  123569 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0211 18:53:41.290993  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-36
I0211 18:53:41.291013  123569 scheduler.go:453] Attempting to schedule pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-36
I0211 18:53:41.291110  123569 factory.go:647] Unable to schedule preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-36: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0211 18:53:41.291206  123569 factory.go:742] Updating pod condition for preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-36 to (PodScheduled==False, Reason=Unschedulable)
I0211 18:53:41.292922  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-36: (1.232629ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57382]
I0211 18:53:41.293692  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events: (1.78356ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57384]
I0211 18:53:41.293882  123569 wrap.go:47] PUT /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-36/status: (2.249605ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57368]
I0211 18:53:41.295582  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-36: (1.220535ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57384]
I0211 18:53:41.295924  123569 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0211 18:53:41.296223  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-37
I0211 18:53:41.296242  123569 scheduler.go:453] Attempting to schedule pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-37
I0211 18:53:41.296362  123569 factory.go:647] Unable to schedule preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-37: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0211 18:53:41.296427  123569 factory.go:742] Updating pod condition for preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-37 to (PodScheduled==False, Reason=Unschedulable)
I0211 18:53:41.298068  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-37: (1.356037ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57384]
I0211 18:53:41.300487  123569 wrap.go:47] PATCH /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events/ppod-37.1582640b4220f2b7: (3.498785ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57382]
I0211 18:53:41.300937  123569 wrap.go:47] PUT /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-37/status: (3.76587ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57386]
I0211 18:53:41.303046  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-37: (1.261174ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57382]
I0211 18:53:41.303366  123569 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0211 18:53:41.303569  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-36
I0211 18:53:41.303591  123569 scheduler.go:453] Attempting to schedule pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-36
I0211 18:53:41.303750  123569 factory.go:647] Unable to schedule preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-36: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0211 18:53:41.303814  123569 factory.go:742] Updating pod condition for preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-36 to (PodScheduled==False, Reason=Unschedulable)
I0211 18:53:41.305459  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-36: (1.344373ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57384]
I0211 18:53:41.306516  123569 wrap.go:47] PUT /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-36/status: (2.393977ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57382]
I0211 18:53:41.307400  123569 wrap.go:47] PATCH /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events/ppod-36.1582640b42738bfc: (2.449661ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57388]
I0211 18:53:41.308072  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-36: (1.12732ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57382]
I0211 18:53:41.308446  123569 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0211 18:53:41.308673  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-34
I0211 18:53:41.308699  123569 scheduler.go:453] Attempting to schedule pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-34
I0211 18:53:41.308851  123569 factory.go:647] Unable to schedule preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-34: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0211 18:53:41.308917  123569 factory.go:742] Updating pod condition for preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-34 to (PodScheduled==False, Reason=Unschedulable)
I0211 18:53:41.310495  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-34: (1.268575ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57384]
I0211 18:53:41.311196  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events: (1.610999ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57390]
I0211 18:53:41.311989  123569 wrap.go:47] PUT /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-34/status: (2.698644ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57388]
I0211 18:53:41.314630  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-34: (1.544947ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57390]
I0211 18:53:41.314882  123569 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0211 18:53:41.315064  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-33
I0211 18:53:41.315084  123569 scheduler.go:453] Attempting to schedule pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-33
I0211 18:53:41.315224  123569 factory.go:647] Unable to schedule preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-33: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0211 18:53:41.315285  123569 factory.go:742] Updating pod condition for preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-33 to (PodScheduled==False, Reason=Unschedulable)
I0211 18:53:41.316816  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-33: (1.136403ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57384]
I0211 18:53:41.317535  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events: (1.506277ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57392]
I0211 18:53:41.318099  123569 wrap.go:47] PUT /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-33/status: (2.539809ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57390]
I0211 18:53:41.320132  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-33: (1.471809ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57392]
I0211 18:53:41.320476  123569 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0211 18:53:41.320694  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-34
I0211 18:53:41.320711  123569 scheduler.go:453] Attempting to schedule pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-34
I0211 18:53:41.320811  123569 factory.go:647] Unable to schedule preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-34: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0211 18:53:41.320855  123569 factory.go:742] Updating pod condition for preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-34 to (PodScheduled==False, Reason=Unschedulable)
I0211 18:53:41.322617  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-34: (1.282294ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57384]
I0211 18:53:41.322980  123569 wrap.go:47] PUT /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-34/status: (1.879817ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57392]
I0211 18:53:41.325346  123569 wrap.go:47] PATCH /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events/ppod-34.1582640b43824e1d: (3.043761ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57394]
I0211 18:53:41.325723  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-34: (1.971174ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57392]
I0211 18:53:41.326079  123569 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0211 18:53:41.326434  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-33
I0211 18:53:41.326447  123569 scheduler.go:453] Attempting to schedule pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-33
I0211 18:53:41.326549  123569 factory.go:647] Unable to schedule preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-33: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0211 18:53:41.326621  123569 factory.go:742] Updating pod condition for preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-33 to (PodScheduled==False, Reason=Unschedulable)
I0211 18:53:41.329359  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-33: (2.523773ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57384]
I0211 18:53:41.329907  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/preemptor-pod: (3.765525ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57392]
I0211 18:53:41.330131  123569 preemption_test.go:583] Check unschedulable pods still exists and were never scheduled...
I0211 18:53:41.330648  123569 wrap.go:47] PUT /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-33/status: (3.566821ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57398]
I0211 18:53:41.331486  123569 wrap.go:47] PATCH /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events/ppod-33.1582640b43e37926: (4.083491ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57400]
I0211 18:53:41.331887  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-0: (1.483781ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57392]
I0211 18:53:41.332126  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-33: (1.059472ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57398]
I0211 18:53:41.332418  123569 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0211 18:53:41.332653  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-32
I0211 18:53:41.332700  123569 scheduler.go:453] Attempting to schedule pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-32
I0211 18:53:41.332824  123569 factory.go:647] Unable to schedule preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-32: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0211 18:53:41.332890  123569 factory.go:742] Updating pod condition for preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-32 to (PodScheduled==False, Reason=Unschedulable)
I0211 18:53:41.334361  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-1: (1.839565ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57400]
I0211 18:53:41.336254  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-2: (1.390511ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57400]
I0211 18:53:41.336427  123569 wrap.go:47] PUT /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-32/status: (2.947538ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57402]
I0211 18:53:41.336858  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-32: (3.713761ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57384]
I0211 18:53:41.338163  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-3: (1.453915ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57400]
I0211 18:53:41.338223  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events: (4.16735ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57404]
I0211 18:53:41.339416  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-32: (1.575533ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57402]
I0211 18:53:41.339706  123569 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0211 18:53:41.339922  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-31
I0211 18:53:41.339943  123569 scheduler.go:453] Attempting to schedule pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-31
I0211 18:53:41.339986  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-4: (1.381192ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57400]
I0211 18:53:41.340060  123569 factory.go:647] Unable to schedule preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-31: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0211 18:53:41.340153  123569 factory.go:742] Updating pod condition for preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-31 to (PodScheduled==False, Reason=Unschedulable)
I0211 18:53:41.347107  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events: (6.340361ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57406]
I0211 18:53:41.347290  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-5: (6.390831ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57408]
I0211 18:53:41.347296  123569 wrap.go:47] PUT /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-31/status: (6.856652ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57384]
I0211 18:53:41.347296  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-31: (6.889305ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57402]
I0211 18:53:41.354316  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-31: (2.094887ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57408]
I0211 18:53:41.354635  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-6: (2.41361ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57406]
I0211 18:53:41.354644  123569 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0211 18:53:41.354839  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-32
I0211 18:53:41.354865  123569 scheduler.go:453] Attempting to schedule pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-32
I0211 18:53:41.354968  123569 factory.go:647] Unable to schedule preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-32: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0211 18:53:41.355020  123569 factory.go:742] Updating pod condition for preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-32 to (PodScheduled==False, Reason=Unschedulable)
I0211 18:53:41.357222  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-32: (1.678802ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57412]
I0211 18:53:41.358826  123569 wrap.go:47] PATCH /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events/ppod-32.1582640b44f01f62: (2.942482ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57414]
I0211 18:53:41.358934  123569 wrap.go:47] PUT /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-32/status: (3.215653ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57408]
I0211 18:53:41.361020  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-32: (1.454172ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57414]
I0211 18:53:41.361089  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-7: (2.010063ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57412]
I0211 18:53:41.361440  123569 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0211 18:53:41.361861  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-31
I0211 18:53:41.361883  123569 scheduler.go:453] Attempting to schedule pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-31
I0211 18:53:41.362069  123569 factory.go:647] Unable to schedule preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-31: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0211 18:53:41.362127  123569 factory.go:742] Updating pod condition for preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-31 to (PodScheduled==False, Reason=Unschedulable)
I0211 18:53:41.363091  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-8: (1.611692ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57412]
I0211 18:53:41.364545  123569 wrap.go:47] PUT /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-31/status: (2.060765ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57406]
I0211 18:53:41.365020  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-31: (2.165784ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57416]
I0211 18:53:41.364990  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-9: (1.45685ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57412]
I0211 18:53:41.365693  123569 wrap.go:47] PATCH /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events/ppod-31.1582640b455ec1a9: (2.762927ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57418]
I0211 18:53:41.367035  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-10: (1.477511ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57416]
I0211 18:53:41.367066  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-31: (1.515325ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57406]
I0211 18:53:41.367322  123569 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0211 18:53:41.367453  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-30
I0211 18:53:41.367491  123569 scheduler.go:453] Attempting to schedule pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-30
I0211 18:53:41.367571  123569 factory.go:647] Unable to schedule preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-30: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0211 18:53:41.367646  123569 factory.go:742] Updating pod condition for preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-30 to (PodScheduled==False, Reason=Unschedulable)
I0211 18:53:41.369496  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events: (1.43819ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57422]
I0211 18:53:41.369580  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-30: (1.758159ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57418]
I0211 18:53:41.370022  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-11: (2.611085ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57416]
I0211 18:53:41.370131  123569 wrap.go:47] PUT /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-30/status: (2.043836ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57420]
I0211 18:53:41.371816  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-30: (1.259208ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57422]
I0211 18:53:41.371931  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-12: (1.480328ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57418]
I0211 18:53:41.372094  123569 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0211 18:53:41.372332  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-29
I0211 18:53:41.372354  123569 scheduler.go:453] Attempting to schedule pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-29
I0211 18:53:41.372443  123569 factory.go:647] Unable to schedule preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-29: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0211 18:53:41.372493  123569 factory.go:742] Updating pod condition for preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-29 to (PodScheduled==False, Reason=Unschedulable)
I0211 18:53:41.375470  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-29: (1.341456ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57422]
I0211 18:53:41.376318  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events: (2.026669ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57424]
I0211 18:53:41.376754  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-13: (1.752254ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57418]
I0211 18:53:41.377403  123569 wrap.go:47] PUT /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-29/status: (3.170905ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57426]
I0211 18:53:41.378209  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-14: (1.029236ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57424]
I0211 18:53:41.379060  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-29: (1.268792ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57426]
I0211 18:53:41.379484  123569 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0211 18:53:41.379736  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-30
I0211 18:53:41.379778  123569 scheduler.go:453] Attempting to schedule pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-30
I0211 18:53:41.379922  123569 factory.go:647] Unable to schedule preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-30: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0211 18:53:41.380025  123569 factory.go:742] Updating pod condition for preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-30 to (PodScheduled==False, Reason=Unschedulable)
I0211 18:53:41.380051  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-15: (1.180306ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57424]
I0211 18:53:41.381799  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-16: (1.096933ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57428]
I0211 18:53:41.382899  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-30: (1.896586ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57430]
I0211 18:53:41.383300  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-17: (1.145323ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57428]
I0211 18:53:41.383504  123569 wrap.go:47] PUT /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-30/status: (3.067755ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57426]
I0211 18:53:41.383645  123569 wrap.go:47] PATCH /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events/ppod-30.1582640b47023c21: (3.142115ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57422]
I0211 18:53:41.385438  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-18: (1.72781ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57428]
I0211 18:53:41.385481  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-30: (1.478628ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57422]
I0211 18:53:41.385767  123569 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0211 18:53:41.385979  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-29
I0211 18:53:41.385995  123569 scheduler.go:453] Attempting to schedule pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-29
I0211 18:53:41.386088  123569 factory.go:647] Unable to schedule preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-29: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0211 18:53:41.386124  123569 factory.go:742] Updating pod condition for preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-29 to (PodScheduled==False, Reason=Unschedulable)
I0211 18:53:41.387468  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-19: (1.505084ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57422]
I0211 18:53:41.388425  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-29: (1.260925ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57430]
I0211 18:53:41.389654  123569 wrap.go:47] PATCH /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events/ppod-29.1582640b474c69b4: (2.363041ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57434]
I0211 18:53:41.390003  123569 wrap.go:47] PUT /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-29/status: (1.995598ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57432]
I0211 18:53:41.390220  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-20: (1.078449ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57422]
I0211 18:53:41.391827  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-29: (1.23577ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57434]
I0211 18:53:41.391872  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-21: (1.274283ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57430]
I0211 18:53:41.392162  123569 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0211 18:53:41.392332  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-28
I0211 18:53:41.392352  123569 scheduler.go:453] Attempting to schedule pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-28
I0211 18:53:41.392509  123569 factory.go:647] Unable to schedule preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-28: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0211 18:53:41.392586  123569 factory.go:742] Updating pod condition for preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-28 to (PodScheduled==False, Reason=Unschedulable)
I0211 18:53:41.395351  123569 wrap.go:47] PUT /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-28/status: (2.51141ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57434]
I0211 18:53:41.395976  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-28: (2.022294ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57436]
I0211 18:53:41.396132  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-22: (3.781781ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57430]
I0211 18:53:41.396328  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events: (1.97821ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57438]
I0211 18:53:41.397240  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-28: (1.388414ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57434]
I0211 18:53:41.397550  123569 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0211 18:53:41.397849  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-24
I0211 18:53:41.397873  123569 scheduler.go:453] Attempting to schedule pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-24
I0211 18:53:41.398097  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-23: (1.277123ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57430]
I0211 18:53:41.398130  123569 factory.go:647] Unable to schedule preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-24: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0211 18:53:41.398212  123569 factory.go:742] Updating pod condition for preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-24 to (PodScheduled==False, Reason=Unschedulable)
I0211 18:53:41.400258  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-24: (1.425474ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57440]
I0211 18:53:41.400781  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-24: (2.210174ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57436]
I0211 18:53:41.400934  123569 wrap.go:47] PUT /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-24/status: (2.480607ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57438]
I0211 18:53:41.402034  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events: (3.1208ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57442]
I0211 18:53:41.402358  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-25: (1.572429ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57440]
I0211 18:53:41.402437  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-24: (1.067156ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57438]
I0211 18:53:41.402777  123569 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0211 18:53:41.402936  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-28
I0211 18:53:41.402949  123569 scheduler.go:453] Attempting to schedule pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-28
I0211 18:53:41.403032  123569 factory.go:647] Unable to schedule preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-28: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0211 18:53:41.403081  123569 factory.go:742] Updating pod condition for preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-28 to (PodScheduled==False, Reason=Unschedulable)
I0211 18:53:41.403912  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-26: (1.070232ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57442]
I0211 18:53:41.405709  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-27: (1.240011ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57442]
I0211 18:53:41.406140  123569 wrap.go:47] PUT /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-28/status: (1.667875ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57444]
I0211 18:53:41.407293  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-28: (1.072391ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57442]
I0211 18:53:41.407636  123569 wrap.go:47] PATCH /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events/ppod-28.1582640b487f0203: (3.667005ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57446]
I0211 18:53:41.408122  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-28: (4.835839ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57436]
I0211 18:53:41.408759  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-29: (1.098994ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57442]
I0211 18:53:41.409076  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-28: (2.571613ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57444]
I0211 18:53:41.409334  123569 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0211 18:53:41.409533  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-11
I0211 18:53:41.409556  123569 scheduler.go:453] Attempting to schedule pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-11
I0211 18:53:41.409820  123569 factory.go:647] Unable to schedule preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-11: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0211 18:53:41.409907  123569 factory.go:742] Updating pod condition for preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-11 to (PodScheduled==False, Reason=Unschedulable)
I0211 18:53:41.410518  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-30: (1.293727ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57436]
I0211 18:53:41.412013  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-11: (1.701801ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57446]
I0211 18:53:41.413346  123569 wrap.go:47] PUT /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-11/status: (2.570004ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57444]
I0211 18:53:41.413729  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-31: (1.865653ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57436]
I0211 18:53:41.415212  123569 wrap.go:47] PATCH /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events/ppod-11.1582640b386d577c: (3.904835ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57448]
I0211 18:53:41.416486  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-11: (2.634825ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57444]
I0211 18:53:41.416692  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-32: (1.72822ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57436]
I0211 18:53:41.416801  123569 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0211 18:53:41.416959  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-25
I0211 18:53:41.416980  123569 scheduler.go:453] Attempting to schedule pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-25
I0211 18:53:41.417073  123569 factory.go:647] Unable to schedule preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-25: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0211 18:53:41.417125  123569 factory.go:742] Updating pod condition for preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-25 to (PodScheduled==False, Reason=Unschedulable)
I0211 18:53:41.419514  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events: (1.482625ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57452]
I0211 18:53:41.419619  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-25: (1.487172ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57450]
I0211 18:53:41.419527  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-33: (1.555665ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57448]
I0211 18:53:41.420064  123569 wrap.go:47] PUT /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-25/status: (2.094217ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57446]
I0211 18:53:41.422119  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-34: (1.13729ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57450]
I0211 18:53:41.422677  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-25: (1.512584ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57452]
I0211 18:53:41.422923  123569 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0211 18:53:41.423137  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-17
I0211 18:53:41.423268  123569 scheduler.go:453] Attempting to schedule pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-17
I0211 18:53:41.423422  123569 factory.go:647] Unable to schedule preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-17: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0211 18:53:41.423499  123569 factory.go:742] Updating pod condition for preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-17 to (PodScheduled==False, Reason=Unschedulable)
I0211 18:53:41.423795  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-35: (1.059556ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57450]
I0211 18:53:41.425083  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-17: (1.264669ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57452]
I0211 18:53:41.426287  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-36: (1.511392ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57458]
I0211 18:53:41.426417  123569 wrap.go:47] PATCH /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events/ppod-17.1582640b31754ebe: (2.188476ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57456]
I0211 18:53:41.428006  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-37: (1.092126ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57458]
I0211 18:53:41.428588  123569 wrap.go:47] PUT /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-17/status: (4.442304ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57450]
I0211 18:53:41.429696  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-38: (1.149615ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57454]
I0211 18:53:41.430694  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-17: (1.63501ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57450]
I0211 18:53:41.430983  123569 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0211 18:53:41.431156  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-39: (986.482µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57454]
I0211 18:53:41.431213  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-25
I0211 18:53:41.431228  123569 scheduler.go:453] Attempting to schedule pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-25
I0211 18:53:41.431314  123569 factory.go:647] Unable to schedule preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-25: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0211 18:53:41.431359  123569 factory.go:742] Updating pod condition for preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-25 to (PodScheduled==False, Reason=Unschedulable)
I0211 18:53:41.434520  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-40: (2.92626ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57452]
I0211 18:53:41.434865  123569 wrap.go:47] PUT /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-25/status: (3.001318ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57460]
I0211 18:53:41.435077  123569 wrap.go:47] PATCH /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events/ppod-25.1582640b49f56e4f: (2.357881ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57462]
I0211 18:53:41.435837  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-25: (4.316825ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57450]
I0211 18:53:41.436503  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-25: (1.223485ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57460]
I0211 18:53:41.436529  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-41: (1.157765ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57452]
I0211 18:53:41.436865  123569 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0211 18:53:41.437037  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-20
I0211 18:53:41.437061  123569 scheduler.go:453] Attempting to schedule pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-20
I0211 18:53:41.437252  123569 factory.go:647] Unable to schedule preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-20: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0211 18:53:41.437579  123569 factory.go:742] Updating pod condition for preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-20 to (PodScheduled==False, Reason=Unschedulable)
I0211 18:53:41.438405  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-42: (1.460748ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57450]
I0211 18:53:41.438795  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-20: (892.711µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57462]
I0211 18:53:41.440008  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-43: (1.128955ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57450]
I0211 18:53:41.440460  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events: (2.067512ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57464]
I0211 18:53:41.441549  123569 wrap.go:47] PUT /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-20/status: (2.075089ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57462]
I0211 18:53:41.442287  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-44: (1.198329ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57450]
I0211 18:53:41.443163  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-20: (1.202792ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57462]
I0211 18:53:41.443775  123569 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0211 18:53:41.443870  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-45: (954.353µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57450]
I0211 18:53:41.443935  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-27
I0211 18:53:41.443961  123569 scheduler.go:453] Attempting to schedule pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-27
I0211 18:53:41.444036  123569 factory.go:647] Unable to schedule preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-27: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0211 18:53:41.444107  123569 factory.go:742] Updating pod condition for preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-27 to (PodScheduled==False, Reason=Unschedulable)
I0211 18:53:41.445849  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-46: (1.589715ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57462]
I0211 18:53:41.446227  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events: (1.656157ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57466]
I0211 18:53:41.447358  123569 wrap.go:47] PUT /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-27/status: (1.970602ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57468]
I0211 18:53:41.447475  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-47: (1.102465ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57462]
I0211 18:53:41.447731  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-27: (3.344804ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57464]
I0211 18:53:41.449061  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-27: (1.143652ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57468]
I0211 18:53:41.449450  123569 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0211 18:53:41.449860  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-20
I0211 18:53:41.449884  123569 scheduler.go:453] Attempting to schedule pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-20
I0211 18:53:41.449860  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-48: (1.108633ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57464]
I0211 18:53:41.450040  123569 factory.go:647] Unable to schedule preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-20: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0211 18:53:41.450101  123569 factory.go:742] Updating pod condition for preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-20 to (PodScheduled==False, Reason=Unschedulable)
I0211 18:53:41.451551  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-49: (1.135477ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57466]
I0211 18:53:41.452273  123569 preemption_test.go:598] Cleaning up all pods...
I0211 18:53:41.452945  123569 wrap.go:47] PUT /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-20/status: (2.431044ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57468]
I0211 18:53:41.453321  123569 wrap.go:47] PATCH /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events/ppod-20.1582640b4b2ccebe: (2.245021ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57472]
I0211 18:53:41.453379  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-20: (2.3656ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57470]
I0211 18:53:41.456153  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-20: (2.595065ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57468]
I0211 18:53:41.456491  123569 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0211 18:53:41.456687  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-27
I0211 18:53:41.456704  123569 scheduler.go:453] Attempting to schedule pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-27
I0211 18:53:41.456780  123569 factory.go:647] Unable to schedule preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-27: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0211 18:53:41.456817  123569 factory.go:742] Updating pod condition for preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-27 to (PodScheduled==False, Reason=Unschedulable)
I0211 18:53:41.459671  123569 wrap.go:47] DELETE /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-0: (7.189646ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57466]
I0211 18:53:41.460256  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-27: (2.761479ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57472]
I0211 18:53:41.461500  123569 wrap.go:47] PATCH /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events/ppod-27.1582640b4b91268e: (2.965008ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57474]
I0211 18:53:41.462014  123569 wrap.go:47] PUT /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-27/status: (4.89358ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57470]
I0211 18:53:41.464750  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-27: (1.912913ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57474]
I0211 18:53:41.465687  123569 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0211 18:53:41.465890  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-21
I0211 18:53:41.465913  123569 scheduler.go:453] Attempting to schedule pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-21
I0211 18:53:41.466012  123569 factory.go:647] Unable to schedule preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-21: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0211 18:53:41.466066  123569 factory.go:742] Updating pod condition for preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-21 to (PodScheduled==False, Reason=Unschedulable)
I0211 18:53:41.467689  123569 wrap.go:47] DELETE /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-1: (7.242009ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57466]
I0211 18:53:41.470090  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events: (2.286256ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57476]
I0211 18:53:41.470346  123569 wrap.go:47] PUT /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-21/status: (3.029292ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57472]
I0211 18:53:41.471918  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-21: (5.355888ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57474]
I0211 18:53:41.472471  123569 wrap.go:47] DELETE /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-2: (4.372143ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57466]
I0211 18:53:41.472549  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-21: (1.565691ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57472]
I0211 18:53:41.473582  123569 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0211 18:53:41.473810  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-23
I0211 18:53:41.473831  123569 scheduler.go:453] Attempting to schedule pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-23
I0211 18:53:41.474518  123569 factory.go:647] Unable to schedule preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-23: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0211 18:53:41.474591  123569 factory.go:742] Updating pod condition for preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-23 to (PodScheduled==False, Reason=Unschedulable)
I0211 18:53:41.475869  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-23: (1.051363ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57476]
I0211 18:53:41.476868  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events: (1.475035ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57480]
I0211 18:53:41.478747  123569 wrap.go:47] DELETE /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-3: (5.200323ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57474]
I0211 18:53:41.479010  123569 wrap.go:47] PUT /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-23/status: (2.405715ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57478]
I0211 18:53:41.480995  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-23: (1.088097ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57478]
I0211 18:53:41.481259  123569 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0211 18:53:41.481585  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-22
I0211 18:53:41.481623  123569 scheduler.go:453] Attempting to schedule pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-22
I0211 18:53:41.481897  123569 factory.go:647] Unable to schedule preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-22: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0211 18:53:41.481949  123569 factory.go:742] Updating pod condition for preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-22 to (PodScheduled==False, Reason=Unschedulable)
I0211 18:53:41.483139  123569 wrap.go:47] DELETE /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-4: (4.103362ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57480]
I0211 18:53:41.483876  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-22: (1.396429ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57476]
I0211 18:53:41.485234  123569 wrap.go:47] PUT /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-22/status: (2.650358ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57478]
I0211 18:53:41.485873  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events: (1.794365ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57482]
I0211 18:53:41.487259  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-22: (1.53222ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57478]
I0211 18:53:41.487756  123569 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0211 18:53:41.487912  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-23
I0211 18:53:41.487932  123569 scheduler.go:453] Attempting to schedule pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-23
I0211 18:53:41.488014  123569 factory.go:647] Unable to schedule preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-23: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0211 18:53:41.488064  123569 factory.go:742] Updating pod condition for preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-23 to (PodScheduled==False, Reason=Unschedulable)
I0211 18:53:41.489341  123569 wrap.go:47] DELETE /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-5: (5.765655ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57480]
I0211 18:53:41.489619  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-23: (1.193722ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57476]
I0211 18:53:41.490613  123569 wrap.go:47] PUT /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-23/status: (2.129517ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57482]
I0211 18:53:41.492135  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-23: (1.065957ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57484]
I0211 18:53:41.492959  123569 wrap.go:47] PATCH /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events/ppod-23.1582640b4d6244fd: (2.974778ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57480]
I0211 18:53:41.494052  123569 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0211 18:53:41.494306  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-22
I0211 18:53:41.494317  123569 scheduler.go:453] Attempting to schedule pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-22
I0211 18:53:41.494430  123569 factory.go:647] Unable to schedule preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-22: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0211 18:53:41.494502  123569 factory.go:742] Updating pod condition for preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-22 to (PodScheduled==False, Reason=Unschedulable)
I0211 18:53:41.496543  123569 wrap.go:47] DELETE /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-6: (6.391854ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57486]
I0211 18:53:41.496776  123569 wrap.go:47] PUT /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-22/status: (1.855165ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57480]
I0211 18:53:41.497124  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-22: (2.168062ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57476]
I0211 18:53:41.498159  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-22: (1.059539ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57486]
I0211 18:53:41.498266  123569 wrap.go:47] PATCH /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events/ppod-22.1582640b4dd29476: (2.403628ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57488]
I0211 18:53:41.498427  123569 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0211 18:53:41.498597  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-45
I0211 18:53:41.498777  123569 scheduler.go:453] Attempting to schedule pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-45
I0211 18:53:41.498951  123569 factory.go:647] Unable to schedule preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-45: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0211 18:53:41.498999  123569 factory.go:742] Updating pod condition for preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-45 to (PodScheduled==False, Reason=Unschedulable)
I0211 18:53:41.500320  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-45: (1.073082ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57486]
I0211 18:53:41.501226  123569 wrap.go:47] PUT /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-45/status: (1.860384ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57476]
I0211 18:53:41.501921  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events: (1.820712ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57490]
I0211 18:53:41.502339  123569 wrap.go:47] DELETE /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-7: (5.323559ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57480]
I0211 18:53:41.504103  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-45: (2.203491ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57476]
I0211 18:53:41.504369  123569 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0211 18:53:41.504612  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-13
I0211 18:53:41.504637  123569 scheduler.go:453] Attempting to schedule pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-13
I0211 18:53:41.504786  123569 factory.go:647] Unable to schedule preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-13: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0211 18:53:41.504870  123569 factory.go:742] Updating pod condition for preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-13 to (PodScheduled==False, Reason=Unschedulable)
I0211 18:53:41.506287  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-13: (1.232617ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57476]
I0211 18:53:41.506813  123569 backoff_utils.go:79] Backing off 2s
I0211 18:53:41.507339  123569 wrap.go:47] DELETE /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-8: (4.557542ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57490]
I0211 18:53:41.507372  123569 wrap.go:47] PUT /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-13/status: (1.866429ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57486]
I0211 18:53:41.509312  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-13: (1.530855ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57476]
I0211 18:53:41.509732  123569 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0211 18:53:41.509936  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-35
I0211 18:53:41.509956  123569 scheduler.go:453] Attempting to schedule pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-35
I0211 18:53:41.510165  123569 factory.go:647] Unable to schedule preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-35: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0211 18:53:41.510264  123569 factory.go:742] Updating pod condition for preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-35 to (PodScheduled==False, Reason=Unschedulable)
I0211 18:53:41.512937  123569 wrap.go:47] PATCH /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events/ppod-13.1582640b3a0a8e1d: (6.488146ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57492]
I0211 18:53:41.513896  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-35: (2.402355ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57476]
I0211 18:53:41.514536  123569 wrap.go:47] PUT /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-35/status: (2.898788ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57494]
I0211 18:53:41.516078  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-35: (1.075499ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57494]
I0211 18:53:41.516119  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events: (2.195471ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57492]
I0211 18:53:41.516294  123569 wrap.go:47] DELETE /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-9: (8.509791ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57486]
I0211 18:53:41.516350  123569 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0211 18:53:41.516524  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-19
I0211 18:53:41.516545  123569 scheduler.go:453] Attempting to schedule pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-19
I0211 18:53:41.516683  123569 factory.go:647] Unable to schedule preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-19: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0211 18:53:41.516734  123569 factory.go:742] Updating pod condition for preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-19 to (PodScheduled==False, Reason=Unschedulable)
I0211 18:53:41.518353  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-19: (1.364578ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57476]
I0211 18:53:41.518924  123569 wrap.go:47] PUT /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-19/status: (1.804569ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57496]
I0211 18:53:41.518998  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events: (1.728727ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57498]
I0211 18:53:41.520707  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-19: (1.394343ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57496]
I0211 18:53:41.520961  123569 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0211 18:53:41.521161  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-21
I0211 18:53:41.521201  123569 scheduler.go:453] Attempting to schedule pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-21
I0211 18:53:41.521312  123569 factory.go:647] Unable to schedule preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-21: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0211 18:53:41.521362  123569 factory.go:742] Updating pod condition for preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-21 to (PodScheduled==False, Reason=Unschedulable)
I0211 18:53:41.522841  123569 wrap.go:47] DELETE /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-10: (6.125644ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57494]
I0211 18:53:41.523098  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-21: (1.50232ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57476]
I0211 18:53:41.525355  123569 wrap.go:47] PATCH /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events/ppod-21.1582640b4ce02c96: (3.294664ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57500]
I0211 18:53:41.528279  123569 wrap.go:47] PUT /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-21/status: (6.677226ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57496]
I0211 18:53:41.530353  123569 wrap.go:47] DELETE /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-11: (6.928414ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57476]
I0211 18:53:41.530394  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-21: (1.335369ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57500]
I0211 18:53:41.531491  123569 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0211 18:53:41.531707  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-19
I0211 18:53:41.531717  123569 scheduler.go:453] Attempting to schedule pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-19
I0211 18:53:41.531805  123569 factory.go:647] Unable to schedule preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-19: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0211 18:53:41.531845  123569 factory.go:742] Updating pod condition for preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-19 to (PodScheduled==False, Reason=Unschedulable)
I0211 18:53:41.536485  123569 wrap.go:47] PUT /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-19/status: (2.788913ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57494]
I0211 18:53:41.537253  123569 wrap.go:47] PATCH /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events/ppod-19.1582640b4fe5576b: (4.21552ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57504]
I0211 18:53:41.538146  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-19: (1.338882ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57502]
I0211 18:53:41.538481  123569 wrap.go:47] DELETE /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-12: (6.547073ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57500]
I0211 18:53:41.538930  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-19: (1.678926ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57494]
I0211 18:53:41.539222  123569 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0211 18:53:41.539383  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-47
I0211 18:53:41.539434  123569 scheduler.go:453] Attempting to schedule pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-47
I0211 18:53:41.539560  123569 factory.go:647] Unable to schedule preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-47: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0211 18:53:41.539656  123569 factory.go:742] Updating pod condition for preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-47 to (PodScheduled==False, Reason=Unschedulable)
I0211 18:53:41.541523  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-47: (1.239416ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57506]
I0211 18:53:41.541663  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events: (1.720994ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57504]
I0211 18:53:41.542881  123569 wrap.go:47] PUT /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-47/status: (2.922036ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57494]
I0211 18:53:41.543780  123569 wrap.go:47] DELETE /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-13: (4.915214ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57502]
I0211 18:53:41.544469  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-47: (1.109981ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57504]
I0211 18:53:41.544875  123569 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0211 18:53:41.545067  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-43
I0211 18:53:41.545105  123569 scheduler.go:453] Attempting to schedule pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-43
I0211 18:53:41.545341  123569 factory.go:647] Unable to schedule preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-43: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0211 18:53:41.545479  123569 factory.go:742] Updating pod condition for preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-43 to (PodScheduled==False, Reason=Unschedulable)
I0211 18:53:41.548257  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-43: (2.021386ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57506]
I0211 18:53:41.548416  123569 wrap.go:47] PUT /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-43/status: (2.357683ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57504]
I0211 18:53:41.548791  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events: (1.833518ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57508]
I0211 18:53:41.550069  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-43: (1.208175ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57504]
I0211 18:53:41.550365  123569 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0211 18:53:41.550492  123569 wrap.go:47] DELETE /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-14: (6.233339ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57502]
I0211 18:53:41.550542  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-14
I0211 18:53:41.550576  123569 scheduler.go:449] Skip schedule deleting pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-14
I0211 18:53:41.550852  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-43
I0211 18:53:41.550879  123569 scheduler.go:453] Attempting to schedule pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-43
I0211 18:53:41.550979  123569 factory.go:647] Unable to schedule preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-43: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0211 18:53:41.551019  123569 factory.go:742] Updating pod condition for preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-43 to (PodScheduled==False, Reason=Unschedulable)
I0211 18:53:41.552552  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events: (1.615124ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57506]
I0211 18:53:41.552780  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-43: (1.18984ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57512]
I0211 18:53:41.553383  123569 wrap.go:47] PUT /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-43/status: (1.824598ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57510]
I0211 18:53:41.555480  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-43: (1.522164ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57510]
I0211 18:53:41.555819  123569 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0211 18:53:41.556030  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-45
I0211 18:53:41.556054  123569 scheduler.go:453] Attempting to schedule pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-45
I0211 18:53:41.556337  123569 factory.go:647] Unable to schedule preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-45: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0211 18:53:41.556443  123569 factory.go:742] Updating pod condition for preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-45 to (PodScheduled==False, Reason=Unschedulable)
I0211 18:53:41.557122  123569 wrap.go:47] PATCH /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events/ppod-43.1582640b519b421d: (3.825294ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57506]
I0211 18:53:41.557817  123569 wrap.go:47] DELETE /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-15: (6.919426ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57508]
I0211 18:53:41.559056  123569 wrap.go:47] PUT /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-45/status: (2.119654ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57510]
I0211 18:53:41.559499  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-45: (2.530563ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57512]
I0211 18:53:41.560868  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-45: (1.35165ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57510]
I0211 18:53:41.561106  123569 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0211 18:53:41.561214  123569 wrap.go:47] PATCH /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events/ppod-45.1582640b4ed6c7e9: (2.303432ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57506]
I0211 18:53:41.561321  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-40
I0211 18:53:41.561366  123569 scheduler.go:453] Attempting to schedule pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-40
I0211 18:53:41.561502  123569 factory.go:647] Unable to schedule preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-40: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0211 18:53:41.561555  123569 factory.go:742] Updating pod condition for preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-40 to (PodScheduled==False, Reason=Unschedulable)
I0211 18:53:41.563560  123569 wrap.go:47] DELETE /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-16: (5.44898ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57508]
I0211 18:53:41.564128  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events: (1.687158ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57514]
I0211 18:53:41.565096  123569 wrap.go:47] PUT /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-40/status: (2.774304ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57510]
I0211 18:53:41.565928  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-40: (3.491736ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57512]
I0211 18:53:41.566903  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-40: (1.412854ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57510]
I0211 18:53:41.567252  123569 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0211 18:53:41.567401  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-35
I0211 18:53:41.567422  123569 scheduler.go:453] Attempting to schedule pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-35
I0211 18:53:41.567501  123569 factory.go:647] Unable to schedule preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-35: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0211 18:53:41.567557  123569 factory.go:742] Updating pod condition for preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-35 to (PodScheduled==False, Reason=Unschedulable)
I0211 18:53:41.569253  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-35: (1.499702ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57512]
I0211 18:53:41.569863  123569 wrap.go:47] DELETE /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-17: (5.330975ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57508]
I0211 18:53:41.570079  123569 wrap.go:47] PUT /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-35/status: (2.047819ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57514]
I0211 18:53:41.571104  123569 wrap.go:47] PATCH /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events/ppod-35.1582640b4f829a62: (2.818728ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57516]
I0211 18:53:41.572313  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-35: (1.198346ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57512]
I0211 18:53:41.572567  123569 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0211 18:53:41.572848  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-47
I0211 18:53:41.572860  123569 scheduler.go:453] Attempting to schedule pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-47
I0211 18:53:41.572950  123569 factory.go:647] Unable to schedule preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-47: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0211 18:53:41.572988  123569 factory.go:742] Updating pod condition for preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-47 to (PodScheduled==False, Reason=Unschedulable)
I0211 18:53:41.576345  123569 wrap.go:47] PUT /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-47/status: (2.616076ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57516]
I0211 18:53:41.576855  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-47: (3.156227ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57514]
I0211 18:53:41.577074  123569 wrap.go:47] PATCH /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events/ppod-47.1582640b514311e1: (2.698295ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57518]
I0211 18:53:41.577260  123569 wrap.go:47] DELETE /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-18: (6.537853ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57508]
I0211 18:53:41.579423  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-47: (1.539424ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57508]
I0211 18:53:41.579646  123569 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0211 18:53:41.580080  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-19
I0211 18:53:41.580140  123569 scheduler.go:449] Skip schedule deleting pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-19
I0211 18:53:41.581971  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events: (1.458269ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57508]
I0211 18:53:41.582020  123569 wrap.go:47] DELETE /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-19: (4.458367ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57518]
I0211 18:53:41.585654  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-20
I0211 18:53:41.585699  123569 scheduler.go:449] Skip schedule deleting pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-20
I0211 18:53:41.587216  123569 wrap.go:47] DELETE /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-20: (4.813317ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57518]
I0211 18:53:41.587661  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events: (1.648173ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57508]
I0211 18:53:41.591233  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-21
I0211 18:53:41.591277  123569 scheduler.go:449] Skip schedule deleting pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-21
I0211 18:53:41.592071  123569 wrap.go:47] DELETE /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-21: (4.398848ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57518]
I0211 18:53:41.593363  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events: (1.557099ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57508]
I0211 18:53:41.597619  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-22
I0211 18:53:41.597681  123569 scheduler.go:449] Skip schedule deleting pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-22
I0211 18:53:41.598321  123569 wrap.go:47] DELETE /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-22: (4.783923ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57518]
I0211 18:53:41.600751  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events: (2.278989ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57508]
I0211 18:53:41.601356  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-23
I0211 18:53:41.601408  123569 scheduler.go:449] Skip schedule deleting pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-23
I0211 18:53:41.602851  123569 wrap.go:47] DELETE /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-23: (4.196218ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57518]
I0211 18:53:41.603088  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events: (1.402265ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57508]
I0211 18:53:41.606229  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-24
I0211 18:53:41.606306  123569 scheduler.go:449] Skip schedule deleting pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-24
I0211 18:53:41.607240  123569 wrap.go:47] DELETE /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-24: (3.920669ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57518]
I0211 18:53:41.608523  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events: (1.652788ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57508]
I0211 18:53:41.611773  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-25
I0211 18:53:41.611943  123569 scheduler.go:449] Skip schedule deleting pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-25
I0211 18:53:41.613800  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events: (1.532607ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57508]
I0211 18:53:41.613870  123569 wrap.go:47] DELETE /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-25: (5.622117ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57518]
I0211 18:53:41.618353  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-26
I0211 18:53:41.618396  123569 scheduler.go:449] Skip schedule deleting pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-26
I0211 18:53:41.619890  123569 wrap.go:47] DELETE /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-26: (4.992974ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57518]
I0211 18:53:41.620978  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events: (2.256732ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57508]
I0211 18:53:41.624056  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-27
I0211 18:53:41.624089  123569 scheduler.go:449] Skip schedule deleting pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-27
I0211 18:53:41.625293  123569 wrap.go:47] DELETE /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-27: (4.339646ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57518]
I0211 18:53:41.626212  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events: (1.824333ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57508]
I0211 18:53:41.628071  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-28
I0211 18:53:41.628117  123569 scheduler.go:449] Skip schedule deleting pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-28
I0211 18:53:41.629817  123569 wrap.go:47] DELETE /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-28: (4.226232ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57518]
I0211 18:53:41.630095  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events: (1.634154ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57508]
I0211 18:53:41.633323  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-29
I0211 18:53:41.633416  123569 scheduler.go:449] Skip schedule deleting pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-29
I0211 18:53:41.634669  123569 wrap.go:47] DELETE /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-29: (4.410984ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57518]
I0211 18:53:41.635478  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events: (1.610225ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57508]
I0211 18:53:41.637849  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-30
I0211 18:53:41.637956  123569 scheduler.go:449] Skip schedule deleting pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-30
I0211 18:53:41.639418  123569 wrap.go:47] DELETE /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-30: (4.127545ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57518]
I0211 18:53:41.639734  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events: (1.423494ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57508]
I0211 18:53:41.642240  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-31
I0211 18:53:41.642283  123569 scheduler.go:449] Skip schedule deleting pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-31
I0211 18:53:41.643900  123569 wrap.go:47] DELETE /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-31: (4.195386ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57518]
I0211 18:53:41.644279  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events: (1.755758ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57508]
I0211 18:53:41.647239  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-32
I0211 18:53:41.647281  123569 scheduler.go:449] Skip schedule deleting pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-32
I0211 18:53:41.648636  123569 wrap.go:47] DELETE /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-32: (4.248084ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57518]
I0211 18:53:41.649691  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events: (1.516316ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57508]
I0211 18:53:41.651560  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-33
I0211 18:53:41.651613  123569 scheduler.go:449] Skip schedule deleting pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-33
I0211 18:53:41.653579  123569 wrap.go:47] DELETE /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-33: (4.511053ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57518]
I0211 18:53:41.656663  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events: (4.681252ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57508]
I0211 18:53:41.656990  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-34
I0211 18:53:41.657020  123569 scheduler.go:449] Skip schedule deleting pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-34
I0211 18:53:41.658265  123569 wrap.go:47] DELETE /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-34: (4.365126ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57518]
I0211 18:53:41.659054  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events: (1.653138ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57508]
I0211 18:53:41.661542  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-35
I0211 18:53:41.661580  123569 scheduler.go:449] Skip schedule deleting pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-35
I0211 18:53:41.662831  123569 wrap.go:47] DELETE /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-35: (4.021565ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57518]
I0211 18:53:41.663812  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events: (1.735901ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57508]
I0211 18:53:41.666219  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-36
I0211 18:53:41.666262  123569 scheduler.go:449] Skip schedule deleting pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-36
I0211 18:53:41.667776  123569 wrap.go:47] DELETE /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-36: (4.228359ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57518]
I0211 18:53:41.668258  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events: (1.628939ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57508]
I0211 18:53:41.670990  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-37
I0211 18:53:41.671053  123569 scheduler.go:449] Skip schedule deleting pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-37
I0211 18:53:41.672088  123569 wrap.go:47] DELETE /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-37: (3.8654ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57518]
I0211 18:53:41.672723  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events: (1.367125ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57508]
I0211 18:53:41.675440  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-38
I0211 18:53:41.675507  123569 scheduler.go:449] Skip schedule deleting pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-38
I0211 18:53:41.677335  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events: (1.520779ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57508]
I0211 18:53:41.677368  123569 wrap.go:47] DELETE /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-38: (4.738886ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57518]
I0211 18:53:41.680357  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-39
I0211 18:53:41.680402  123569 scheduler.go:449] Skip schedule deleting pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-39
I0211 18:53:41.682439  123569 wrap.go:47] DELETE /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-39: (4.771948ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57518]
I0211 18:53:41.682857  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events: (2.135218ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57508]
I0211 18:53:41.685988  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-40
I0211 18:53:41.686038  123569 scheduler.go:449] Skip schedule deleting pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-40
I0211 18:53:41.687378  123569 wrap.go:47] DELETE /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-40: (4.559047ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57518]
I0211 18:53:41.688004  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events: (1.720556ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57508]
I0211 18:53:41.690292  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-41
I0211 18:53:41.690322  123569 scheduler.go:449] Skip schedule deleting pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-41
I0211 18:53:41.692230  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events: (1.473549ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57508]
I0211 18:53:41.692282  123569 wrap.go:47] DELETE /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-41: (4.466981ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57518]
I0211 18:53:41.696220  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-42
I0211 18:53:41.696261  123569 scheduler.go:449] Skip schedule deleting pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-42
I0211 18:53:41.697476  123569 wrap.go:47] DELETE /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-42: (4.953968ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57518]
I0211 18:53:41.698036  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events: (1.522728ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57508]
I0211 18:53:41.700587  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-43
I0211 18:53:41.700762  123569 scheduler.go:449] Skip schedule deleting pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-43
I0211 18:53:41.702167  123569 wrap.go:47] DELETE /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-43: (4.357164ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57518]
I0211 18:53:41.703407  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events: (2.349354ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57508]
I0211 18:53:41.706300  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-44
I0211 18:53:41.706340  123569 scheduler.go:449] Skip schedule deleting pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-44
I0211 18:53:41.707710  123569 wrap.go:47] DELETE /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-44: (4.405051ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57518]
I0211 18:53:41.708947  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events: (2.1697ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57508]
I0211 18:53:41.714048  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-45
I0211 18:53:41.714086  123569 wrap.go:47] DELETE /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-45: (5.404795ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57518]
I0211 18:53:41.714225  123569 scheduler.go:449] Skip schedule deleting pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-45
I0211 18:53:41.716101  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events: (1.581497ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57518]
I0211 18:53:41.717552  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-46
I0211 18:53:41.717618  123569 scheduler.go:449] Skip schedule deleting pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-46
I0211 18:53:41.720157  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events: (2.052364ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57518]
I0211 18:53:41.720243  123569 wrap.go:47] DELETE /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-46: (5.649903ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57508]
I0211 18:53:41.725159  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-47
I0211 18:53:41.725301  123569 scheduler.go:449] Skip schedule deleting pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-47
I0211 18:53:41.726073  123569 wrap.go:47] DELETE /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-47: (5.466938ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57508]
I0211 18:53:41.728299  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events: (2.447953ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57518]
I0211 18:53:41.730063  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-48
I0211 18:53:41.730116  123569 scheduler.go:449] Skip schedule deleting pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-48
I0211 18:53:41.730970  123569 wrap.go:47] DELETE /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-48: (4.169545ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57508]
I0211 18:53:41.732074  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events: (1.564209ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57518]
I0211 18:53:41.734127  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-49
I0211 18:53:41.734244  123569 scheduler.go:449] Skip schedule deleting pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-49
I0211 18:53:41.736314  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events: (1.596214ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57518]
I0211 18:53:41.736578  123569 wrap.go:47] DELETE /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-49: (5.177598ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57508]
I0211 18:53:41.741456  123569 wrap.go:47] DELETE /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/rpod-0: (4.504244ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57508]
I0211 18:53:41.743123  123569 wrap.go:47] DELETE /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/rpod-1: (1.274458ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57508]
I0211 18:53:41.747724  123569 wrap.go:47] DELETE /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/preemptor-pod: (4.169198ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57508]
I0211 18:53:41.750561  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-0: (1.188271ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57508]
I0211 18:53:41.753827  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-1: (1.53762ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57508]
I0211 18:53:41.756722  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-2: (1.283472ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57508]
I0211 18:53:41.759505  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-3: (1.026031ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57508]
I0211 18:53:41.762099  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-4: (1.024868ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57508]
I0211 18:53:41.764813  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-5: (1.026329ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57508]
I0211 18:53:41.767504  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-6: (994.912µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57508]
I0211 18:53:41.770101  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-7: (984.044µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57508]
I0211 18:53:41.772904  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-8: (1.151831ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57508]
I0211 18:53:41.775931  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-9: (1.45174ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57508]
I0211 18:53:41.778558  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-10: (1.088967ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57508]
I0211 18:53:41.781386  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-11: (1.136542ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57508]
I0211 18:53:41.783916  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-12: (1.048502ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57508]
I0211 18:53:41.786844  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-13: (1.07512ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57508]
I0211 18:53:41.789513  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-14: (1.045845ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57508]
I0211 18:53:41.792136  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-15: (1.045678ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57508]
I0211 18:53:41.794890  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-16: (1.084213ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57508]
I0211 18:53:41.797573  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-17: (1.082704ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57508]
I0211 18:53:41.800318  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-18: (990.543µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57508]
I0211 18:53:41.803038  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-19: (1.147689ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57508]
I0211 18:53:41.805705  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-20: (1.122277ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57508]
I0211 18:53:41.808390  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-21: (994.059µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57508]
I0211 18:53:41.811206  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-22: (1.313024ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57508]
I0211 18:53:41.814020  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-23: (1.028231ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57508]
I0211 18:53:41.816724  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-24: (1.075684ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57508]
I0211 18:53:41.819612  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-25: (1.188831ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57508]
I0211 18:53:41.822397  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-26: (1.069571ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57508]
I0211 18:53:41.826089  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-27: (1.075339ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57508]
I0211 18:53:41.828785  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-28: (969.575µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57508]
I0211 18:53:41.831341  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-29: (929.739µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57508]
I0211 18:53:41.834261  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-30: (1.066739ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57508]
I0211 18:53:41.837072  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-31: (1.134347ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57508]
I0211 18:53:41.839514  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-32: (949.002µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57508]
I0211 18:53:41.842055  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-33: (898.055µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57508]
I0211 18:53:41.844821  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-34: (1.10034ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57508]
I0211 18:53:41.847425  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-35: (966.528µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57508]
I0211 18:53:41.850159  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-36: (876.096µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57508]
I0211 18:53:41.852846  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-37: (1.031872ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57508]
I0211 18:53:41.856527  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-38: (1.166622ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57508]
I0211 18:53:41.859064  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-39: (982.473µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57508]
I0211 18:53:41.861647  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-40: (960.928µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57508]
I0211 18:53:41.864316  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-41: (957.173µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57508]
I0211 18:53:41.867033  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-42: (1.037217ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57508]
I0211 18:53:41.869545  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-43: (905.738µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57508]
I0211 18:53:41.871956  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-44: (911.83µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57508]
I0211 18:53:41.874341  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-45: (762.849µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57508]
I0211 18:53:41.876887  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-46: (919.012µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57508]
I0211 18:53:41.879527  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-47: (965.438µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57508]
I0211 18:53:41.882233  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-48: (1.133914ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57508]
I0211 18:53:41.884936  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-49: (1.134811ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57508]
I0211 18:53:41.887923  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/rpod-0: (1.317843ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57508]
I0211 18:53:41.890620  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/rpod-1: (1.133623ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57508]
I0211 18:53:41.893266  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/preemptor-pod: (970.766µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57508]
I0211 18:53:41.896016  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods: (2.078557ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57508]
I0211 18:53:41.896148  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/rpod-0
I0211 18:53:41.896163  123569 scheduler.go:453] Attempting to schedule pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/rpod-0
I0211 18:53:41.896309  123569 scheduler_binder.go:269] AssumePodVolumes for pod "preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/rpod-0", node "node1"
I0211 18:53:41.896327  123569 scheduler_binder.go:279] AssumePodVolumes for pod "preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/rpod-0", node "node1": all PVCs bound and nothing to do
I0211 18:53:41.896373  123569 factory.go:733] Attempting to bind rpod-0 to node1
I0211 18:53:41.898313  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/rpod-0/binding: (1.678783ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57518]
I0211 18:53:41.898538  123569 scheduler.go:571] pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/rpod-0 is bound successfully on node node1, 1 nodes evaluated, 1 nodes were found feasible
I0211 18:53:41.898689  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods: (2.191256ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57508]
I0211 18:53:41.898892  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/rpod-1
I0211 18:53:41.898916  123569 scheduler.go:453] Attempting to schedule pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/rpod-1
I0211 18:53:41.899058  123569 scheduler_binder.go:269] AssumePodVolumes for pod "preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/rpod-1", node "node1"
I0211 18:53:41.899079  123569 scheduler_binder.go:279] AssumePodVolumes for pod "preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/rpod-1", node "node1": all PVCs bound and nothing to do
I0211 18:53:41.899135  123569 factory.go:733] Attempting to bind rpod-1 to node1
I0211 18:53:41.901097  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/rpod-1/binding: (1.70663ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57508]
I0211 18:53:41.901223  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events: (2.401502ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57518]
I0211 18:53:41.901354  123569 scheduler.go:571] pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/rpod-1 is bound successfully on node node1, 1 nodes evaluated, 1 nodes were found feasible
I0211 18:53:41.903402  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events: (1.592204ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57518]
I0211 18:53:42.001114  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/rpod-0: (1.755304ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57518]
I0211 18:53:42.103657  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/rpod-1: (1.798912ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57518]
I0211 18:53:42.103953  123569 preemption_test.go:561] Creating the preemptor pod...
I0211 18:53:42.106678  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods: (2.532807ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57518]
I0211 18:53:42.106900  123569 preemption_test.go:567] Creating additional pods...
I0211 18:53:42.109552  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods: (2.269759ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57518]
I0211 18:53:42.112843  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods: (2.723121ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57518]
I0211 18:53:42.117082  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods: (3.239181ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57518]
I0211 18:53:42.119788  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods: (1.888842ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57518]
I0211 18:53:42.119963  123569 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 18:53:42.120318  123569 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 18:53:42.120343  123569 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 18:53:42.120398  123569 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 18:53:42.121198  123569 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0211 18:53:42.122046  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods: (1.550619ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57518]
I0211 18:53:42.124142  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods: (1.672802ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57518]
I0211 18:53:42.126271  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods: (1.531031ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57518]
I0211 18:53:42.128383  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods: (1.706205ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57518]
I0211 18:53:42.129118  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/preemptor-pod
I0211 18:53:42.129160  123569 scheduler.go:453] Attempting to schedule pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/preemptor-pod
I0211 18:53:42.129305  123569 factory.go:647] Unable to schedule preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0211 18:53:42.129355  123569 factory.go:742] Updating pod condition for preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0211 18:53:42.131078  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/preemptor-pod: (1.291991ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57508]
I0211 18:53:42.131686  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods: (2.83627ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57518]
I0211 18:53:42.133357  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events: (1.949906ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57508]
I0211 18:53:42.133616  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods: (1.555928ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57518]
I0211 18:53:42.133639  123569 wrap.go:47] PUT /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/preemptor-pod/status: (2.892361ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57526]
I0211 18:53:42.135082  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/preemptor-pod: (1.051699ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57508]
I0211 18:53:42.135323  123569 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0211 18:53:42.136537  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods: (1.911987ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57528]
I0211 18:53:42.137344  123569 wrap.go:47] PUT /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/preemptor-pod/status: (1.634971ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57508]
I0211 18:53:42.138621  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods: (1.506669ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57528]
I0211 18:53:42.140314  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods: (1.29815ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57528]
I0211 18:53:42.142099  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods: (1.262694ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57528]
I0211 18:53:42.142835  123569 wrap.go:47] DELETE /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/rpod-1: (4.368914ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57508]
I0211 18:53:42.143071  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/preemptor-pod
I0211 18:53:42.143093  123569 scheduler.go:453] Attempting to schedule pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/preemptor-pod
I0211 18:53:42.143245  123569 scheduler_binder.go:269] AssumePodVolumes for pod "preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/preemptor-pod", node "node1"
I0211 18:53:42.143270  123569 scheduler_binder.go:279] AssumePodVolumes for pod "preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/preemptor-pod", node "node1": all PVCs bound and nothing to do
I0211 18:53:42.143329  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-13
I0211 18:53:42.143346  123569 scheduler.go:453] Attempting to schedule pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-13
I0211 18:53:42.143415  123569 factory.go:647] Unable to schedule preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-13: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0211 18:53:42.143464  123569 factory.go:742] Updating pod condition for preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-13 to (PodScheduled==False, Reason=Unschedulable)
I0211 18:53:42.143850  123569 factory.go:733] Attempting to bind preemptor-pod to node1
I0211 18:53:42.144740  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods: (1.871643ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57528]
I0211 18:53:42.145734  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events: (2.477799ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57508]
I0211 18:53:42.146104  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/preemptor-pod/binding: (1.861282ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57534]
I0211 18:53:42.146195  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-13: (1.812334ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57532]
I0211 18:53:42.146297  123569 scheduler.go:571] pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/preemptor-pod is bound successfully on node node1, 1 nodes evaluated, 1 nodes were found feasible
I0211 18:53:42.146699  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods: (1.530838ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57528]
I0211 18:53:42.147188  123569 wrap.go:47] PUT /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-13/status: (2.291917ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57536]
I0211 18:53:42.147509  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events: (1.226747ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57508]
I0211 18:53:42.149463  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events: (1.555961ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57534]
I0211 18:53:42.149887  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods: (2.687405ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57532]
I0211 18:53:42.150016  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-13: (2.288947ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57536]
I0211 18:53:42.150320  123569 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0211 18:53:42.150484  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-12
I0211 18:53:42.150499  123569 scheduler.go:453] Attempting to schedule pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-12
I0211 18:53:42.150574  123569 factory.go:647] Unable to schedule preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-12: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0211 18:53:42.150638  123569 factory.go:742] Updating pod condition for preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-12 to (PodScheduled==False, Reason=Unschedulable)
I0211 18:53:42.151834  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods: (1.563514ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57532]
I0211 18:53:42.152576  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-12: (1.455251ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57538]
I0211 18:53:42.153288  123569 wrap.go:47] PUT /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-12/status: (2.435404ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57534]
I0211 18:53:42.154015  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events: (2.587688ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57540]
I0211 18:53:42.154544  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods: (2.317044ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57532]
I0211 18:53:42.155400  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-12: (1.285933ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57534]
I0211 18:53:42.155717  123569 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0211 18:53:42.155901  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-16
I0211 18:53:42.155922  123569 scheduler.go:453] Attempting to schedule pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-16
I0211 18:53:42.156053  123569 factory.go:647] Unable to schedule preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-16: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0211 18:53:42.156133  123569 factory.go:742] Updating pod condition for preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-16 to (PodScheduled==False, Reason=Unschedulable)
I0211 18:53:42.157217  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods: (2.105815ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57540]
I0211 18:53:42.158270  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events: (1.784933ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57538]
I0211 18:53:42.158369  123569 wrap.go:47] PUT /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-16/status: (1.988252ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57534]
I0211 18:53:42.158764  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-16: (1.218308ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57540]
I0211 18:53:42.160403  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods: (2.391112ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57542]
I0211 18:53:42.160495  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-16: (1.232457ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57534]
I0211 18:53:42.160740  123569 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0211 18:53:42.161059  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-18
I0211 18:53:42.161082  123569 scheduler.go:453] Attempting to schedule pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-18
I0211 18:53:42.161208  123569 factory.go:647] Unable to schedule preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-18: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0211 18:53:42.161273  123569 factory.go:742] Updating pod condition for preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-18 to (PodScheduled==False, Reason=Unschedulable)
I0211 18:53:42.162313  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods: (1.47501ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57540]
I0211 18:53:42.162967  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-18: (1.220505ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57546]
I0211 18:53:42.164152  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events: (1.387738ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57548]
I0211 18:53:42.165095  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods: (1.665945ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57546]
I0211 18:53:42.165192  123569 wrap.go:47] PUT /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-18/status: (3.636705ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57538]
I0211 18:53:42.167059  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods: (1.495257ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57540]
I0211 18:53:42.167355  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-18: (1.767135ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57548]
I0211 18:53:42.167678  123569 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0211 18:53:42.167856  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-20
I0211 18:53:42.167882  123569 scheduler.go:453] Attempting to schedule pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-20
I0211 18:53:42.168032  123569 factory.go:647] Unable to schedule preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-20: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0211 18:53:42.168084  123569 factory.go:742] Updating pod condition for preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-20 to (PodScheduled==False, Reason=Unschedulable)
I0211 18:53:42.169283  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods: (1.808199ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57540]
I0211 18:53:42.170733  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events: (1.924922ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57550]
I0211 18:53:42.171558  123569 wrap.go:47] PUT /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-20/status: (2.956527ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57548]
I0211 18:53:42.171658  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-20: (2.876779ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57552]
I0211 18:53:42.173077  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods: (2.236782ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57540]
I0211 18:53:42.173088  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-20: (1.085072ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57548]
I0211 18:53:42.173497  123569 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0211 18:53:42.173712  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-23
I0211 18:53:42.173733  123569 scheduler.go:453] Attempting to schedule pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-23
I0211 18:53:42.173864  123569 factory.go:647] Unable to schedule preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-23: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0211 18:53:42.173939  123569 factory.go:742] Updating pod condition for preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-23 to (PodScheduled==False, Reason=Unschedulable)
I0211 18:53:42.176541  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events: (1.732745ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57554]
I0211 18:53:42.176855  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods: (3.170409ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57540]
I0211 18:53:42.177226  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-23: (2.511175ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57556]
I0211 18:53:42.177334  123569 wrap.go:47] PUT /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-23/status: (3.063969ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57550]
I0211 18:53:42.178971  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods: (1.566026ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57540]
I0211 18:53:42.179337  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-23: (1.537806ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57556]
I0211 18:53:42.179552  123569 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0211 18:53:42.179728  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-25
I0211 18:53:42.179749  123569 scheduler.go:453] Attempting to schedule pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-25
I0211 18:53:42.179838  123569 factory.go:647] Unable to schedule preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-25: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0211 18:53:42.179888  123569 factory.go:742] Updating pod condition for preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-25 to (PodScheduled==False, Reason=Unschedulable)
I0211 18:53:42.180738  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods: (1.402515ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57540]
I0211 18:53:42.182057  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-25: (1.848243ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57554]
I0211 18:53:42.182463  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events: (1.450043ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57560]
I0211 18:53:42.182824  123569 wrap.go:47] PUT /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-25/status: (2.707622ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57556]
I0211 18:53:42.183014  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods: (1.457742ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57540]
I0211 18:53:42.184407  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-25: (1.09636ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57560]
I0211 18:53:42.184707  123569 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0211 18:53:42.184877  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-27
I0211 18:53:42.184902  123569 scheduler.go:453] Attempting to schedule pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-27
I0211 18:53:42.184995  123569 factory.go:647] Unable to schedule preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-27: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0211 18:53:42.185047  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods: (1.567295ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57554]
I0211 18:53:42.185047  123569 factory.go:742] Updating pod condition for preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-27 to (PodScheduled==False, Reason=Unschedulable)
I0211 18:53:42.186442  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-27: (986.661µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57554]
I0211 18:53:42.186994  123569 wrap.go:47] PUT /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-27/status: (1.673363ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57560]
I0211 18:53:42.187681  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods: (1.752646ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57562]
I0211 18:53:42.187912  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events: (2.35829ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57564]
I0211 18:53:42.188920  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-27: (1.080453ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57560]
I0211 18:53:42.189218  123569 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0211 18:53:42.189352  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-29
I0211 18:53:42.189369  123569 scheduler.go:453] Attempting to schedule pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-29
I0211 18:53:42.189437  123569 factory.go:647] Unable to schedule preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-29: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0211 18:53:42.189485  123569 factory.go:742] Updating pod condition for preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-29 to (PodScheduled==False, Reason=Unschedulable)
I0211 18:53:42.189994  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods: (1.637059ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57562]
I0211 18:53:42.190991  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-29: (1.017563ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57554]
I0211 18:53:42.191909  123569 wrap.go:47] PUT /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-29/status: (1.806051ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57560]
I0211 18:53:42.191950  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events: (1.921945ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57566]
I0211 18:53:42.192532  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods: (2.074619ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57562]
I0211 18:53:42.194628  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods: (1.634432ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57562]
I0211 18:53:42.195144  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-29: (2.68588ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57560]
I0211 18:53:42.195436  123569 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0211 18:53:42.195617  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-31
I0211 18:53:42.195638  123569 scheduler.go:453] Attempting to schedule pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-31
I0211 18:53:42.195716  123569 factory.go:647] Unable to schedule preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-31: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0211 18:53:42.195768  123569 factory.go:742] Updating pod condition for preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-31 to (PodScheduled==False, Reason=Unschedulable)
I0211 18:53:42.196693  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods: (1.4897ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57562]
I0211 18:53:42.196977  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-31: (975.355µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57560]
I0211 18:53:42.197704  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events: (1.176949ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57568]
I0211 18:53:42.198246  123569 wrap.go:47] PUT /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-31/status: (1.887985ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57554]
I0211 18:53:42.198308  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods: (1.268664ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57562]
I0211 18:53:42.199685  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-31: (1.075024ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57568]
I0211 18:53:42.199897  123569 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0211 18:53:42.199950  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods: (1.284081ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57560]
I0211 18:53:42.200059  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-34
I0211 18:53:42.200081  123569 scheduler.go:453] Attempting to schedule pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-34
I0211 18:53:42.200200  123569 factory.go:647] Unable to schedule preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-34: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0211 18:53:42.200240  123569 factory.go:742] Updating pod condition for preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-34 to (PodScheduled==False, Reason=Unschedulable)
I0211 18:53:42.201535  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods: (1.237679ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57560]
I0211 18:53:42.201757  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-34: (1.022226ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57570]
I0211 18:53:42.202166  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events: (1.271896ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57572]
I0211 18:53:42.202900  123569 wrap.go:47] PUT /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-34/status: (2.352263ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57568]
I0211 18:53:42.203518  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods: (1.519584ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57560]
I0211 18:53:42.204697  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-34: (1.203666ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57570]
I0211 18:53:42.204925  123569 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0211 18:53:42.205075  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-36
I0211 18:53:42.205092  123569 scheduler.go:453] Attempting to schedule pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-36
I0211 18:53:42.205159  123569 factory.go:647] Unable to schedule preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-36: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0211 18:53:42.205250  123569 factory.go:742] Updating pod condition for preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-36 to (PodScheduled==False, Reason=Unschedulable)
I0211 18:53:42.206712  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-36: (1.236584ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57572]
I0211 18:53:42.207288  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events: (1.524841ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57574]
I0211 18:53:42.207414  123569 wrap.go:47] PUT /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-36/status: (1.920764ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57570]
I0211 18:53:42.207851  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods: (3.898394ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57560]
I0211 18:53:42.209005  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-36: (1.136193ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57574]
I0211 18:53:42.209326  123569 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0211 18:53:42.209557  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-39
I0211 18:53:42.209573  123569 scheduler.go:453] Attempting to schedule pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-39
I0211 18:53:42.209678  123569 factory.go:647] Unable to schedule preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-39: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0211 18:53:42.209719  123569 factory.go:742] Updating pod condition for preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-39 to (PodScheduled==False, Reason=Unschedulable)
I0211 18:53:42.210567  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods: (2.202157ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57560]
I0211 18:53:42.212241  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events: (1.427074ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57576]
I0211 18:53:42.212673  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-39: (2.685157ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57572]
I0211 18:53:42.212948  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods: (2.020821ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57560]
I0211 18:53:42.213033  123569 wrap.go:47] PUT /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-39/status: (2.868934ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57574]
I0211 18:53:42.216109  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-39: (1.145745ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57574]
I0211 18:53:42.216461  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods: (1.470697ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57572]
I0211 18:53:42.216461  123569 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0211 18:53:42.216711  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-36
I0211 18:53:42.216731  123569 scheduler.go:453] Attempting to schedule pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-36
I0211 18:53:42.216876  123569 factory.go:647] Unable to schedule preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-36: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0211 18:53:42.216935  123569 factory.go:742] Updating pod condition for preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-36 to (PodScheduled==False, Reason=Unschedulable)
I0211 18:53:42.218502  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods: (1.593313ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57574]
I0211 18:53:42.218507  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-36: (1.14259ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57578]
I0211 18:53:42.218961  123569 wrap.go:47] PUT /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-36/status: (1.784271ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57576]
I0211 18:53:42.220384  123569 wrap.go:47] PATCH /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events/ppod-36.1582640b78ef413d: (2.616556ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57580]
I0211 18:53:42.220511  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-36: (1.139383ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57576]
I0211 18:53:42.220541  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods: (1.541223ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57578]
I0211 18:53:42.220795  123569 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0211 18:53:42.220958  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-43
I0211 18:53:42.220982  123569 scheduler.go:453] Attempting to schedule pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-43
I0211 18:53:42.221095  123569 factory.go:647] Unable to schedule preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-43: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0211 18:53:42.221147  123569 factory.go:742] Updating pod condition for preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-43 to (PodScheduled==False, Reason=Unschedulable)
I0211 18:53:42.223329  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events: (1.611557ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57584]
I0211 18:53:42.223459  123569 wrap.go:47] PUT /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-43/status: (2.063818ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57574]
I0211 18:53:42.223513  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-43: (1.823249ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57582]
I0211 18:53:42.223952  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods: (2.900801ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57580]
I0211 18:53:42.224938  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-43: (1.082568ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57574]
I0211 18:53:42.225250  123569 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0211 18:53:42.225463  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-45
I0211 18:53:42.225483  123569 scheduler.go:453] Attempting to schedule pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-45
I0211 18:53:42.225556  123569 factory.go:647] Unable to schedule preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-45: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0211 18:53:42.225619  123569 factory.go:742] Updating pod condition for preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-45 to (PodScheduled==False, Reason=Unschedulable)
I0211 18:53:42.226035  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods: (1.601498ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57580]
I0211 18:53:42.226825  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-45: (898.795µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57574]
I0211 18:53:42.228164  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods: (1.280642ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57584]
I0211 18:53:42.228329  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events: (1.843963ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57580]
I0211 18:53:42.228992  123569 wrap.go:47] PUT /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-45/status: (2.088609ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57586]
I0211 18:53:42.230514  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-45: (1.12041ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57586]
I0211 18:53:42.230685  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods: (1.711852ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57580]
I0211 18:53:42.230818  123569 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0211 18:53:42.230962  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-43
I0211 18:53:42.230986  123569 scheduler.go:453] Attempting to schedule pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-43
I0211 18:53:42.231111  123569 factory.go:647] Unable to schedule preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-43: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0211 18:53:42.231238  123569 factory.go:742] Updating pod condition for preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-43 to (PodScheduled==False, Reason=Unschedulable)
I0211 18:53:42.233082  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-43: (1.620966ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57574]
I0211 18:53:42.233840  123569 wrap.go:47] PUT /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-43/status: (2.356921ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57586]
I0211 18:53:42.234514  123569 wrap.go:47] PATCH /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events/ppod-43.1582640b79e1d604: (2.516604ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57588]
I0211 18:53:42.235733  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-43: (1.416488ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57586]
I0211 18:53:42.236073  123569 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0211 18:53:42.236265  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-49
I0211 18:53:42.236286  123569 scheduler.go:453] Attempting to schedule pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-49
I0211 18:53:42.236403  123569 factory.go:647] Unable to schedule preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-49: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0211 18:53:42.236465  123569 factory.go:742] Updating pod condition for preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-49 to (PodScheduled==False, Reason=Unschedulable)
I0211 18:53:42.238457  123569 wrap.go:47] PUT /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-49/status: (1.737627ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57588]
I0211 18:53:42.238699  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-49: (1.985368ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57574]
I0211 18:53:42.239015  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events: (1.970744ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57590]
I0211 18:53:42.240121  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-49: (1.136191ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57588]
I0211 18:53:42.240449  123569 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0211 18:53:42.240638  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-48
I0211 18:53:42.240785  123569 scheduler.go:453] Attempting to schedule pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-48
I0211 18:53:42.240917  123569 factory.go:647] Unable to schedule preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-48: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0211 18:53:42.240975  123569 factory.go:742] Updating pod condition for preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-48 to (PodScheduled==False, Reason=Unschedulable)
I0211 18:53:42.243063  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events: (1.565783ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57574]
I0211 18:53:42.243347  123569 wrap.go:47] PUT /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-48/status: (2.048723ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57590]
I0211 18:53:42.243524  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-48: (1.630558ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57592]
I0211 18:53:42.244862  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-48: (1.136489ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57590]
I0211 18:53:42.245095  123569 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0211 18:53:42.245464  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-49
I0211 18:53:42.245484  123569 scheduler.go:453] Attempting to schedule pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-49
I0211 18:53:42.245593  123569 factory.go:647] Unable to schedule preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-49: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0211 18:53:42.245662  123569 factory.go:742] Updating pod condition for preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-49 to (PodScheduled==False, Reason=Unschedulable)
I0211 18:53:42.247272  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-49: (1.331824ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57574]
I0211 18:53:42.247463  123569 wrap.go:47] PUT /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-49/status: (1.584724ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57590]
I0211 18:53:42.248925  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-49: (992.557µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57590]
I0211 18:53:42.249049  123569 wrap.go:47] PATCH /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events/ppod-49.1582640b7acb8e34: (2.058128ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57594]
I0211 18:53:42.249156  123569 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0211 18:53:42.249332  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-48
I0211 18:53:42.249354  123569 scheduler.go:453] Attempting to schedule pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-48
I0211 18:53:42.249456  123569 factory.go:647] Unable to schedule preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-48: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0211 18:53:42.249497  123569 factory.go:742] Updating pod condition for preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-48 to (PodScheduled==False, Reason=Unschedulable)
I0211 18:53:42.250873  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-48: (1.176638ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57590]
I0211 18:53:42.251948  123569 wrap.go:47] PUT /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-48/status: (2.172424ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57574]
I0211 18:53:42.252996  123569 wrap.go:47] PATCH /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events/ppod-48.1582640b7b1061dd: (2.582203ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57596]
I0211 18:53:42.255046  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-48: (2.294888ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57574]
I0211 18:53:42.255381  123569 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0211 18:53:42.255588  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-45
I0211 18:53:42.255623  123569 scheduler.go:453] Attempting to schedule pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-45
I0211 18:53:42.255736  123569 factory.go:647] Unable to schedule preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-45: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0211 18:53:42.255783  123569 factory.go:742] Updating pod condition for preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-45 to (PodScheduled==False, Reason=Unschedulable)
I0211 18:53:42.257403  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-45: (1.378775ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57590]
I0211 18:53:42.257672  123569 wrap.go:47] PUT /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-45/status: (1.625352ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57596]
I0211 18:53:42.259217  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-45: (1.20268ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57590]
I0211 18:53:42.259486  123569 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0211 18:53:42.259653  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-47
I0211 18:53:42.259674  123569 scheduler.go:453] Attempting to schedule pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-47
I0211 18:53:42.259747  123569 factory.go:647] Unable to schedule preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-47: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0211 18:53:42.259782  123569 factory.go:742] Updating pod condition for preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-47 to (PodScheduled==False, Reason=Unschedulable)
I0211 18:53:42.260002  123569 wrap.go:47] PATCH /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events/ppod-45.1582640b7a25d458: (2.030918ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57596]
I0211 18:53:42.261581  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-47: (1.010244ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57596]
I0211 18:53:42.262117  123569 wrap.go:47] PUT /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-47/status: (1.864586ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57590]
I0211 18:53:42.262301  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events: (1.597333ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57600]
I0211 18:53:42.264125  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-47: (1.23499ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57596]
I0211 18:53:42.264396  123569 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0211 18:53:42.264597  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-46
I0211 18:53:42.264636  123569 scheduler.go:453] Attempting to schedule pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-46
I0211 18:53:42.264739  123569 factory.go:647] Unable to schedule preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-46: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0211 18:53:42.264790  123569 factory.go:742] Updating pod condition for preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-46 to (PodScheduled==False, Reason=Unschedulable)
I0211 18:53:42.266684  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-46: (1.521196ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57596]
I0211 18:53:42.267207  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events: (1.748693ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57602]
I0211 18:53:42.267248  123569 wrap.go:47] PUT /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-46/status: (2.225302ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57598]
I0211 18:53:42.268814  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-46: (1.131735ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57602]
I0211 18:53:42.269136  123569 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0211 18:53:42.269364  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-47
I0211 18:53:42.269401  123569 scheduler.go:453] Attempting to schedule pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-47
I0211 18:53:42.269522  123569 factory.go:647] Unable to schedule preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-47: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0211 18:53:42.269587  123569 factory.go:742] Updating pod condition for preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-47 to (PodScheduled==False, Reason=Unschedulable)
I0211 18:53:42.271161  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-47: (1.298801ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57596]
I0211 18:53:42.271765  123569 wrap.go:47] PUT /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-47/status: (1.901557ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57602]
I0211 18:53:42.273295  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-47: (1.163373ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57602]
I0211 18:53:42.273787  123569 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0211 18:53:42.274022  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-46
I0211 18:53:42.274042  123569 scheduler.go:453] Attempting to schedule pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-46
I0211 18:53:42.274254  123569 factory.go:647] Unable to schedule preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-46: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0211 18:53:42.274311  123569 factory.go:742] Updating pod condition for preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-46 to (PodScheduled==False, Reason=Unschedulable)
I0211 18:53:42.274786  123569 wrap.go:47] PATCH /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events/ppod-47.1582640b7c2f671d: (3.716704ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57604]
I0211 18:53:42.275928  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-46: (1.347888ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57602]
I0211 18:53:42.277727  123569 wrap.go:47] PATCH /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events/ppod-46.1582640b7c7bc6ed: (2.137806ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57604]
I0211 18:53:42.278012  123569 wrap.go:47] PUT /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-46/status: (3.214529ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57596]
I0211 18:53:42.279571  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-46: (1.104528ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57604]
I0211 18:53:42.279886  123569 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0211 18:53:42.280070  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-44
I0211 18:53:42.280091  123569 scheduler.go:453] Attempting to schedule pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-44
I0211 18:53:42.280251  123569 factory.go:647] Unable to schedule preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-44: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0211 18:53:42.280312  123569 factory.go:742] Updating pod condition for preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-44 to (PodScheduled==False, Reason=Unschedulable)
I0211 18:53:42.282088  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-44: (1.20789ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57602]
I0211 18:53:42.282867  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events: (1.871033ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57606]
I0211 18:53:42.283108  123569 wrap.go:47] PUT /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-44/status: (2.199595ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57604]
I0211 18:53:42.284698  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-44: (1.114967ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57606]
I0211 18:53:42.284996  123569 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0211 18:53:42.285152  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-39
I0211 18:53:42.285197  123569 scheduler.go:453] Attempting to schedule pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-39
I0211 18:53:42.285317  123569 factory.go:647] Unable to schedule preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-39: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0211 18:53:42.285382  123569 factory.go:742] Updating pod condition for preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-39 to (PodScheduled==False, Reason=Unschedulable)
I0211 18:53:42.287594  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-39: (1.423184ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57602]
I0211 18:53:42.287681  123569 wrap.go:47] PUT /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-39/status: (2.056382ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57606]
I0211 18:53:42.288772  123569 wrap.go:47] PATCH /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events/ppod-39.1582640b793373d3: (2.501059ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57608]
I0211 18:53:42.289472  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-39: (1.064244ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57606]
I0211 18:53:42.289881  123569 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0211 18:53:42.290097  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-44
I0211 18:53:42.290116  123569 scheduler.go:453] Attempting to schedule pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-44
I0211 18:53:42.290252  123569 factory.go:647] Unable to schedule preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-44: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0211 18:53:42.290309  123569 factory.go:742] Updating pod condition for preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-44 to (PodScheduled==False, Reason=Unschedulable)
I0211 18:53:42.291707  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-44: (1.152042ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57602]
I0211 18:53:42.292648  123569 wrap.go:47] PUT /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-44/status: (2.086802ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57608]
I0211 18:53:42.293117  123569 wrap.go:47] PATCH /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events/ppod-44.1582640b7d689d81: (2.135093ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57610]
I0211 18:53:42.295567  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-44: (1.972847ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57608]
I0211 18:53:42.295869  123569 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0211 18:53:42.296075  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-42
I0211 18:53:42.296096  123569 scheduler.go:453] Attempting to schedule pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-42
I0211 18:53:42.296208  123569 factory.go:647] Unable to schedule preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-42: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0211 18:53:42.296261  123569 factory.go:742] Updating pod condition for preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-42 to (PodScheduled==False, Reason=Unschedulable)
I0211 18:53:42.298560  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events: (1.568533ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57612]
I0211 18:53:42.298710  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-42: (1.760483ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57602]
I0211 18:53:42.299080  123569 wrap.go:47] PUT /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-42/status: (2.569638ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57610]
I0211 18:53:42.300872  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-42: (1.381354ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57602]
I0211 18:53:42.301202  123569 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0211 18:53:42.301393  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-41
I0211 18:53:42.301414  123569 scheduler.go:453] Attempting to schedule pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-41
I0211 18:53:42.301506  123569 factory.go:647] Unable to schedule preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-41: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0211 18:53:42.301556  123569 factory.go:742] Updating pod condition for preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-41 to (PodScheduled==False, Reason=Unschedulable)
I0211 18:53:42.303480  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-41: (1.306237ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57612]
I0211 18:53:42.303977  123569 wrap.go:47] PUT /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-41/status: (2.180281ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57602]
I0211 18:53:42.304163  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events: (1.85113ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57614]
I0211 18:53:42.305654  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-41: (1.27949ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57602]
I0211 18:53:42.305976  123569 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0211 18:53:42.306161  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-42
I0211 18:53:42.306197  123569 scheduler.go:453] Attempting to schedule pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-42
I0211 18:53:42.306316  123569 factory.go:647] Unable to schedule preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-42: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0211 18:53:42.306376  123569 factory.go:742] Updating pod condition for preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-42 to (PodScheduled==False, Reason=Unschedulable)
I0211 18:53:42.308096  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-42: (1.462411ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57612]
I0211 18:53:42.308666  123569 wrap.go:47] PUT /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-42/status: (2.034744ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57614]
I0211 18:53:42.309021  123569 wrap.go:47] PATCH /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events/ppod-42.1582640b7e5bfe0f: (2.043459ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57616]
I0211 18:53:42.310475  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-42: (1.184274ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57614]
I0211 18:53:42.310894  123569 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0211 18:53:42.311090  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-41
I0211 18:53:42.311106  123569 scheduler.go:453] Attempting to schedule pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-41
I0211 18:53:42.311217  123569 factory.go:647] Unable to schedule preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-41: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0211 18:53:42.311264  123569 factory.go:742] Updating pod condition for preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-41 to (PodScheduled==False, Reason=Unschedulable)
I0211 18:53:42.313377  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-41: (1.662381ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57612]
I0211 18:53:42.314471  123569 wrap.go:47] PUT /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-41/status: (2.821975ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57616]
I0211 18:53:42.314939  123569 wrap.go:47] PATCH /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events/ppod-41.1582640b7eacca09: (2.596492ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57618]
I0211 18:53:42.316229  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-41: (1.15844ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57616]
I0211 18:53:42.316556  123569 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0211 18:53:42.316758  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-40
I0211 18:53:42.316779  123569 scheduler.go:453] Attempting to schedule pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-40
I0211 18:53:42.316865  123569 factory.go:647] Unable to schedule preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-40: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0211 18:53:42.316934  123569 factory.go:742] Updating pod condition for preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-40 to (PodScheduled==False, Reason=Unschedulable)
I0211 18:53:42.318308  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-40: (1.140546ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57612]
I0211 18:53:42.318817  123569 wrap.go:47] PUT /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-40/status: (1.660576ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57618]
I0211 18:53:42.319219  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events: (1.459645ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57620]
I0211 18:53:42.320904  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-40: (1.147078ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57618]
I0211 18:53:42.321193  123569 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0211 18:53:42.321397  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-38
I0211 18:53:42.321420  123569 scheduler.go:453] Attempting to schedule pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-38
I0211 18:53:42.321524  123569 factory.go:647] Unable to schedule preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-38: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0211 18:53:42.321589  123569 factory.go:742] Updating pod condition for preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-38 to (PodScheduled==False, Reason=Unschedulable)
I0211 18:53:42.322921  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-38: (1.001021ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57612]
I0211 18:53:42.323915  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events: (1.366922ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57622]
I0211 18:53:42.324300  123569 wrap.go:47] PUT /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-38/status: (2.436883ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57618]
I0211 18:53:42.327288  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-38: (1.289922ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57622]
I0211 18:53:42.328029  123569 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0211 18:53:42.328201  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-40
I0211 18:53:42.328222  123569 scheduler.go:453] Attempting to schedule pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-40
I0211 18:53:42.328343  123569 factory.go:647] Unable to schedule preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-40: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0211 18:53:42.328405  123569 factory.go:742] Updating pod condition for preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-40 to (PodScheduled==False, Reason=Unschedulable)
I0211 18:53:42.330413  123569 wrap.go:47] PUT /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-40/status: (1.742309ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57622]
I0211 18:53:42.331765  123569 wrap.go:47] PATCH /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events/ppod-40.1582640b7f97682d: (2.415057ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57624]
I0211 18:53:42.332789  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/preemptor-pod: (1.21182ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57622]
I0211 18:53:42.332944  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-40: (1.372939ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57626]
I0211 18:53:42.333296  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-40: (4.60152ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57612]
I0211 18:53:42.333440  123569 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0211 18:53:42.333727  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-38
I0211 18:53:42.333747  123569 scheduler.go:453] Attempting to schedule pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-38
I0211 18:53:42.333854  123569 factory.go:647] Unable to schedule preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-38: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0211 18:53:42.333911  123569 factory.go:742] Updating pod condition for preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-38 to (PodScheduled==False, Reason=Unschedulable)
I0211 18:53:42.336311  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-38: (1.723245ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57622]
I0211 18:53:42.336328  123569 wrap.go:47] PUT /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-38/status: (1.783761ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57624]
I0211 18:53:42.337770  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-38: (1.053417ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57624]
I0211 18:53:42.337905  123569 wrap.go:47] PATCH /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events/ppod-38.1582640b7fde73be: (2.461134ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57628]
I0211 18:53:42.337991  123569 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0211 18:53:42.338138  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-34
I0211 18:53:42.338184  123569 scheduler.go:453] Attempting to schedule pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-34
I0211 18:53:42.338289  123569 factory.go:647] Unable to schedule preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-34: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0211 18:53:42.338339  123569 factory.go:742] Updating pod condition for preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-34 to (PodScheduled==False, Reason=Unschedulable)
I0211 18:53:42.343875  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-34: (1.429774ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57622]
I0211 18:53:42.344413  123569 wrap.go:47] PUT /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-34/status: (1.954185ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57624]
I0211 18:53:42.345098  123569 wrap.go:47] PATCH /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events/ppod-34.1582640b78a2df01: (2.359171ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57630]
I0211 18:53:42.346038  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-34: (1.10405ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57624]
I0211 18:53:42.346370  123569 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0211 18:53:42.346562  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-37
I0211 18:53:42.346580  123569 scheduler.go:453] Attempting to schedule pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-37
I0211 18:53:42.346794  123569 factory.go:647] Unable to schedule preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-37: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0211 18:53:42.346853  123569 factory.go:742] Updating pod condition for preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-37 to (PodScheduled==False, Reason=Unschedulable)
I0211 18:53:42.348218  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-37: (1.059891ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57622]
I0211 18:53:42.348785  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events: (1.323716ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57632]
I0211 18:53:42.349205  123569 wrap.go:47] PUT /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-37/status: (1.987917ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57630]
I0211 18:53:42.350823  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-37: (1.179956ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57632]
I0211 18:53:42.351099  123569 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0211 18:53:42.351439  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-31
I0211 18:53:42.351458  123569 scheduler.go:453] Attempting to schedule pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-31
I0211 18:53:42.351554  123569 factory.go:647] Unable to schedule preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-31: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0211 18:53:42.351621  123569 factory.go:742] Updating pod condition for preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-31 to (PodScheduled==False, Reason=Unschedulable)
I0211 18:53:42.354388  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-31: (2.500442ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57622]
I0211 18:53:42.355009  123569 wrap.go:47] PUT /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-31/status: (3.132226ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57632]
I0211 18:53:42.355357  123569 wrap.go:47] PATCH /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events/ppod-31.1582640b785e93bd: (2.406439ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57634]
I0211 18:53:42.356468  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-31: (1.056446ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57632]
I0211 18:53:42.356793  123569 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0211 18:53:42.356958  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-37
I0211 18:53:42.356977  123569 scheduler.go:453] Attempting to schedule pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-37
I0211 18:53:42.357077  123569 factory.go:647] Unable to schedule preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-37: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0211 18:53:42.357142  123569 factory.go:742] Updating pod condition for preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-37 to (PodScheduled==False, Reason=Unschedulable)
I0211 18:53:42.359124  123569 wrap.go:47] PUT /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-37/status: (1.665691ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57634]
I0211 18:53:42.359669  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-37: (1.603357ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57636]
I0211 18:53:42.360532  123569 wrap.go:47] PATCH /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events/ppod-37.1582640b815fefb0: (2.546786ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57622]
I0211 18:53:42.360591  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-37: (1.05764ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57634]
I0211 18:53:42.360876  123569 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0211 18:53:42.361068  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-35
I0211 18:53:42.361086  123569 scheduler.go:453] Attempting to schedule pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-35
I0211 18:53:42.361219  123569 factory.go:647] Unable to schedule preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-35: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0211 18:53:42.361271  123569 factory.go:742] Updating pod condition for preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-35 to (PodScheduled==False, Reason=Unschedulable)
I0211 18:53:42.363006  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-35: (1.543354ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57622]
I0211 18:53:42.363373  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events: (1.439774ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57638]
I0211 18:53:42.363508  123569 wrap.go:47] PUT /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-35/status: (1.957907ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57636]
I0211 18:53:42.365268  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-35: (1.306731ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57638]
I0211 18:53:42.365541  123569 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0211 18:53:42.365750  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-33
I0211 18:53:42.365772  123569 scheduler.go:453] Attempting to schedule pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-33
I0211 18:53:42.365889  123569 factory.go:647] Unable to schedule preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-33: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0211 18:53:42.365943  123569 factory.go:742] Updating pod condition for preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-33 to (PodScheduled==False, Reason=Unschedulable)
I0211 18:53:42.367783  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-33: (1.627108ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57638]
I0211 18:53:42.367994  123569 wrap.go:47] PUT /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-33/status: (1.850625ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57622]
I0211 18:53:42.368159  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events: (1.44491ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57640]
I0211 18:53:42.369523  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-33: (1.160966ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57622]
I0211 18:53:42.369818  123569 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0211 18:53:42.370008  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-35
I0211 18:53:42.370030  123569 scheduler.go:453] Attempting to schedule pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-35
I0211 18:53:42.370195  123569 factory.go:647] Unable to schedule preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-35: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0211 18:53:42.370256  123569 factory.go:742] Updating pod condition for preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-35 to (PodScheduled==False, Reason=Unschedulable)
I0211 18:53:42.371724  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-35: (1.215648ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57638]
I0211 18:53:42.371991  123569 wrap.go:47] PUT /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-35/status: (1.508693ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57640]
I0211 18:53:42.373527  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-35: (1.051839ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57640]
I0211 18:53:42.373784  123569 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0211 18:53:42.373946  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-33
I0211 18:53:42.373966  123569 scheduler.go:453] Attempting to schedule pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-33
I0211 18:53:42.374036  123569 factory.go:647] Unable to schedule preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-33: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0211 18:53:42.374095  123569 factory.go:742] Updating pod condition for preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-33 to (PodScheduled==False, Reason=Unschedulable)
I0211 18:53:42.375690  123569 wrap.go:47] PATCH /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events/ppod-35.1582640b823bf446: (4.481982ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57642]
I0211 18:53:42.376752  123569 wrap.go:47] PUT /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-33/status: (2.235477ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57640]
I0211 18:53:42.377083  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-33: (1.235656ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57638]
I0211 18:53:42.378464  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-33: (1.374413ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57640]
I0211 18:53:42.378796  123569 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0211 18:53:42.378914  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-29
I0211 18:53:42.378924  123569 scheduler.go:453] Attempting to schedule pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-29
I0211 18:53:42.378927  123569 wrap.go:47] PATCH /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events/ppod-33.1582640b82834789: (2.614725ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57642]
I0211 18:53:42.378984  123569 factory.go:647] Unable to schedule preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-29: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0211 18:53:42.379030  123569 factory.go:742] Updating pod condition for preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-29 to (PodScheduled==False, Reason=Unschedulable)
I0211 18:53:42.380363  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-29: (1.027897ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57638]
I0211 18:53:42.381903  123569 wrap.go:47] PUT /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-29/status: (2.655137ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57640]
I0211 18:53:42.382210  123569 wrap.go:47] PATCH /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events/ppod-29.1582640b77feb5ef: (2.274092ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57644]
I0211 18:53:42.383250  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-29: (995.252µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57640]
I0211 18:53:42.383535  123569 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0211 18:53:42.383724  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-32
I0211 18:53:42.383746  123569 scheduler.go:453] Attempting to schedule pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-32
I0211 18:53:42.383895  123569 factory.go:647] Unable to schedule preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-32: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0211 18:53:42.383953  123569 factory.go:742] Updating pod condition for preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-32 to (PodScheduled==False, Reason=Unschedulable)
I0211 18:53:42.385430  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-32: (1.25109ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57638]
I0211 18:53:42.386538  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events: (2.043246ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57646]
I0211 18:53:42.386693  123569 wrap.go:47] PUT /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-32/status: (2.492575ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57644]
I0211 18:53:42.388298  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-32: (1.075932ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57646]
I0211 18:53:42.388557  123569 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0211 18:53:42.388812  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-27
I0211 18:53:42.388832  123569 scheduler.go:453] Attempting to schedule pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-27
I0211 18:53:42.388930  123569 factory.go:647] Unable to schedule preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-27: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0211 18:53:42.388979  123569 factory.go:742] Updating pod condition for preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-27 to (PodScheduled==False, Reason=Unschedulable)
I0211 18:53:42.391405  123569 wrap.go:47] PUT /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-27/status: (2.1801ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57646]
I0211 18:53:42.391927  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-27: (2.023289ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57638]
I0211 18:53:42.392395  123569 wrap.go:47] PATCH /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events/ppod-27.1582640b77bafd09: (2.581332ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57648]
I0211 18:53:42.393502  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-27: (1.750447ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57646]
I0211 18:53:42.393889  123569 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0211 18:53:42.394054  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-32
I0211 18:53:42.394071  123569 scheduler.go:453] Attempting to schedule pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-32
I0211 18:53:42.394238  123569 factory.go:647] Unable to schedule preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-32: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0211 18:53:42.394286  123569 factory.go:742] Updating pod condition for preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-32 to (PodScheduled==False, Reason=Unschedulable)
I0211 18:53:42.395786  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-32: (1.284295ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57638]
I0211 18:53:42.396569  123569 wrap.go:47] PUT /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-32/status: (2.080798ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57646]
I0211 18:53:42.397299  123569 wrap.go:47] PATCH /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events/ppod-32.1582640b8396104a: (2.072504ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57650]
I0211 18:53:42.398112  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-32: (1.099579ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57646]
I0211 18:53:42.398396  123569 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0211 18:53:42.398622  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-30
I0211 18:53:42.398642  123569 scheduler.go:453] Attempting to schedule pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-30
I0211 18:53:42.398804  123569 factory.go:647] Unable to schedule preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-30: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0211 18:53:42.398862  123569 factory.go:742] Updating pod condition for preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-30 to (PodScheduled==False, Reason=Unschedulable)
I0211 18:53:42.400313  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-30: (1.218864ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57650]
I0211 18:53:42.400740  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events: (1.452894ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57652]
I0211 18:53:42.401029  123569 wrap.go:47] PUT /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-30/status: (1.933145ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57638]
I0211 18:53:42.402755  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-30: (1.261362ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57652]
I0211 18:53:42.403048  123569 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0211 18:53:42.403224  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-25
I0211 18:53:42.403246  123569 scheduler.go:453] Attempting to schedule pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-25
I0211 18:53:42.403379  123569 factory.go:647] Unable to schedule preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-25: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0211 18:53:42.403441  123569 factory.go:742] Updating pod condition for preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-25 to (PodScheduled==False, Reason=Unschedulable)
I0211 18:53:42.405496  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-25: (1.271361ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57650]
I0211 18:53:42.406137  123569 wrap.go:47] PUT /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-25/status: (1.893395ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57652]
I0211 18:53:42.407549  123569 wrap.go:47] PATCH /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events/ppod-25.1582640b776c4767: (3.211215ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57654]
I0211 18:53:42.407976  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-25: (1.419911ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57652]
I0211 18:53:42.408354  123569 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0211 18:53:42.408533  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-30
I0211 18:53:42.408555  123569 scheduler.go:453] Attempting to schedule pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-30
I0211 18:53:42.408749  123569 factory.go:647] Unable to schedule preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-30: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0211 18:53:42.408807  123569 factory.go:742] Updating pod condition for preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-30 to (PodScheduled==False, Reason=Unschedulable)
I0211 18:53:42.411207  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-30: (2.147756ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57650]
I0211 18:53:42.412328  123569 wrap.go:47] PUT /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-30/status: (3.272718ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57654]
I0211 18:53:42.412486  123569 wrap.go:47] PATCH /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events/ppod-30.1582640b84798951: (2.972525ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57656]
I0211 18:53:42.415387  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-30: (1.644513ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57654]
I0211 18:53:42.415679  123569 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0211 18:53:42.415884  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-28
I0211 18:53:42.415898  123569 scheduler.go:453] Attempting to schedule pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-28
I0211 18:53:42.415990  123569 factory.go:647] Unable to schedule preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-28: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0211 18:53:42.416057  123569 factory.go:742] Updating pod condition for preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-28 to (PodScheduled==False, Reason=Unschedulable)
I0211 18:53:42.417665  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-28: (1.233573ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57654]
I0211 18:53:42.418448  123569 wrap.go:47] PUT /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-28/status: (2.016144ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57650]
I0211 18:53:42.418896  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events: (1.550626ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57658]
I0211 18:53:42.420009  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-28: (1.099038ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57650]
I0211 18:53:42.420327  123569 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0211 18:53:42.420529  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-26
I0211 18:53:42.420579  123569 scheduler.go:453] Attempting to schedule pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-26
I0211 18:53:42.420721  123569 factory.go:647] Unable to schedule preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-26: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0211 18:53:42.420781  123569 factory.go:742] Updating pod condition for preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-26 to (PodScheduled==False, Reason=Unschedulable)
I0211 18:53:42.422730  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-26: (1.556345ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57654]
I0211 18:53:42.423467  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events: (1.937702ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57660]
I0211 18:53:42.423811  123569 wrap.go:47] PUT /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-26/status: (2.795206ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57658]
I0211 18:53:42.425452  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-26: (1.187151ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57660]
I0211 18:53:42.425884  123569 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0211 18:53:42.426048  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-28
I0211 18:53:42.426065  123569 scheduler.go:453] Attempting to schedule pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-28
I0211 18:53:42.426206  123569 factory.go:647] Unable to schedule preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-28: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0211 18:53:42.426269  123569 factory.go:742] Updating pod condition for preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-28 to (PodScheduled==False, Reason=Unschedulable)
I0211 18:53:42.427893  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-28: (1.366323ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57654]
I0211 18:53:42.428253  123569 wrap.go:47] PUT /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-28/status: (1.753614ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57660]
I0211 18:53:42.429496  123569 wrap.go:47] PATCH /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events/ppod-28.1582640b857fe0c7: (2.230253ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57662]
I0211 18:53:42.430064  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-28: (1.410131ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57660]
I0211 18:53:42.430375  123569 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0211 18:53:42.430550  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-26
I0211 18:53:42.430570  123569 scheduler.go:453] Attempting to schedule pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-26
I0211 18:53:42.430714  123569 factory.go:647] Unable to schedule preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-26: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0211 18:53:42.430762  123569 factory.go:742] Updating pod condition for preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-26 to (PodScheduled==False, Reason=Unschedulable)
I0211 18:53:42.432614  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-26: (1.325678ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57654]
I0211 18:53:42.434662  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/preemptor-pod: (1.150586ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57654]
I0211 18:53:42.434742  123569 wrap.go:47] PUT /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-26/status: (3.757002ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57662]
I0211 18:53:42.434986  123569 preemption_test.go:583] Check unschedulable pods still exists and were never scheduled...
I0211 18:53:42.435429  123569 wrap.go:47] PATCH /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events/ppod-26.1582640b85c80547: (3.506378ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57664]
I0211 18:53:42.436306  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-26: (1.208813ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57654]
I0211 18:53:42.436535  123569 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0211 18:53:42.436687  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-23
I0211 18:53:42.436705  123569 scheduler.go:453] Attempting to schedule pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-23
I0211 18:53:42.436804  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-0: (1.553599ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57666]
I0211 18:53:42.436805  123569 factory.go:647] Unable to schedule preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-23: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0211 18:53:42.436860  123569 factory.go:742] Updating pod condition for preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-23 to (PodScheduled==False, Reason=Unschedulable)
I0211 18:53:42.438409  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-23: (1.282583ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57664]
I0211 18:53:42.438963  123569 wrap.go:47] PUT /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-23/status: (1.871183ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57654]
I0211 18:53:42.440234  123569 wrap.go:47] PATCH /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events/ppod-23.1582640b771124ae: (2.665089ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57670]
I0211 18:53:42.440298  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-1: (1.117274ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57664]
I0211 18:53:42.440912  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-23: (1.186937ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57654]
I0211 18:53:42.441249  123569 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0211 18:53:42.441410  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-24
I0211 18:53:42.441431  123569 scheduler.go:453] Attempting to schedule pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-24
I0211 18:53:42.441525  123569 factory.go:647] Unable to schedule preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-24: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0211 18:53:42.441580  123569 factory.go:742] Updating pod condition for preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-24 to (PodScheduled==False, Reason=Unschedulable)
I0211 18:53:42.442614  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-2: (1.962108ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57664]
I0211 18:53:42.443103  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-24: (1.164891ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57668]
I0211 18:53:42.444127  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events: (1.964832ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57672]
I0211 18:53:42.444564  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-3: (1.436154ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57664]
I0211 18:53:42.445684  123569 wrap.go:47] PUT /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-24/status: (3.78793ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57654]
I0211 18:53:42.445987  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-4: (1.056586ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57672]
I0211 18:53:42.447282  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-24: (1.057508ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57654]
I0211 18:53:42.447491  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-5: (956.856µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57672]
I0211 18:53:42.447505  123569 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0211 18:53:42.447917  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-22
I0211 18:53:42.447977  123569 scheduler.go:453] Attempting to schedule pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-22
I0211 18:53:42.448071  123569 factory.go:647] Unable to schedule preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-22: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0211 18:53:42.448127  123569 factory.go:742] Updating pod condition for preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-22 to (PodScheduled==False, Reason=Unschedulable)
I0211 18:53:42.449272  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-6: (1.375028ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57654]
I0211 18:53:42.449819  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-22: (1.304577ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57668]
I0211 18:53:42.450545  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events: (1.779437ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57676]
I0211 18:53:42.450967  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-7: (1.100882ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57678]
I0211 18:53:42.451766  123569 wrap.go:47] PUT /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-22/status: (2.109116ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57654]
I0211 18:53:42.452449  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-8: (1.070691ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57674]
I0211 18:53:42.454403  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-22: (1.569683ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57654]
I0211 18:53:42.454905  123569 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0211 18:53:42.455082  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-24
I0211 18:53:42.455101  123569 scheduler.go:453] Attempting to schedule pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-24
I0211 18:53:42.455218  123569 factory.go:647] Unable to schedule preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-24: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0211 18:53:42.455299  123569 factory.go:742] Updating pod condition for preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-24 to (PodScheduled==False, Reason=Unschedulable)
I0211 18:53:42.455699  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-9: (975.972µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57674]
I0211 18:53:42.457626  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-10: (1.595858ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57674]
I0211 18:53:42.458341  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-24: (2.813788ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57668]
I0211 18:53:42.458711  123569 wrap.go:47] PATCH /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events/ppod-24.1582640b87056006: (2.796075ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57680]
I0211 18:53:42.459281  123569 wrap.go:47] PUT /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-24/status: (3.759387ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57654]
I0211 18:53:42.459656  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-11: (1.101259ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57674]
I0211 18:53:42.460858  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-24: (1.097293ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57680]
I0211 18:53:42.461221  123569 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0211 18:53:42.461340  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-12: (1.33336ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57674]
I0211 18:53:42.461381  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-22
I0211 18:53:42.461401  123569 scheduler.go:453] Attempting to schedule pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-22
I0211 18:53:42.461501  123569 factory.go:647] Unable to schedule preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-22: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0211 18:53:42.461539  123569 factory.go:742] Updating pod condition for preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-22 to (PodScheduled==False, Reason=Unschedulable)
I0211 18:53:42.462817  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-22: (1.021738ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57668]
I0211 18:53:42.463331  123569 wrap.go:47] PUT /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-22/status: (1.564647ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57680]
I0211 18:53:42.464942  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-22: (1.199152ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57680]
I0211 18:53:42.465159  123569 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0211 18:53:42.464953  123569 wrap.go:47] PATCH /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events/ppod-22.1582640b87692f5f: (2.373865ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57682]
I0211 18:53:42.465349  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-18
I0211 18:53:42.465370  123569 scheduler.go:453] Attempting to schedule pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-18
I0211 18:53:42.465495  123569 factory.go:647] Unable to schedule preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-18: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0211 18:53:42.465540  123569 factory.go:742] Updating pod condition for preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-18 to (PodScheduled==False, Reason=Unschedulable)
I0211 18:53:42.466282  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-13: (3.146249ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57668]
I0211 18:53:42.466826  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-18: (1.092736ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57682]
I0211 18:53:42.467102  123569 backoff_utils.go:79] Backing off 2s
I0211 18:53:42.467445  123569 wrap.go:47] PUT /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-18/status: (1.695429ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57680]
I0211 18:53:42.467786  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-14: (971.989µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57668]
I0211 18:53:42.468263  123569 wrap.go:47] PATCH /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events/ppod-18.1582640b765036ce: (2.062945ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57684]
I0211 18:53:42.468834  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-18: (975.925µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57680]
I0211 18:53:42.469105  123569 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0211 18:53:42.469231  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-15: (1.11825ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57668]
I0211 18:53:42.469438  123569 scheduling_queue.go:868] About to try and schedule pod preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-21
I0211 18:53:42.469459  123569 scheduler.go:453] Attempting to schedule pod: preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-21
I0211 18:53:42.469561  123569 factory.go:647] Unable to schedule preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-21: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0211 18:53:42.469640  123569 factory.go:742] Updating pod condition for preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/ppod-21 to (PodScheduled==False, Reason=Unschedulable)
I0211 18:53:42.470767  123569 wrap.go:47] GET /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-16: (1.222444ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57684]
I0211 18:53:42.471814  123569 wrap.go:47] PUT /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/pods/ppod-21/status: (1.980931ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57682]
I0211 18:53:42.471854  123569 wrap.go:47] POST /api/v1/namespaces/preemption-race56bd81da-2e2e-11e9-aa1d-0242ac110002/events: (1.530723ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.