Result: FAILURE
Tests: 1 failed / 622 succeeded
Started: 2019-02-10 16:26
Elapsed: 27m46s
Revision:
Builder: gke-prow-containerd-pool-99179761-9sg5
pod: 884a0590-2d50-11e9-8fd4-0a580a6c0716
infra-commit: 565f2817c
repo: k8s.io/kubernetes
repo-commit: b862590bfcd40d5f82f2a8e1d5d3d7b147b82d55
repos: {u'k8s.io/kubernetes': u'master'}

Test Failures

k8s.io/kubernetes/test/integration/scheduler TestPreemptionRaces 21s

go test -v k8s.io/kubernetes/test/integration/scheduler -run TestPreemptionRaces$
I0210 16:46:16.868793  123305 services.go:33] Network range for service cluster IPs is unspecified. Defaulting to {10.0.0.0 ffffff00}.
I0210 16:46:16.868908  123305 services.go:45] Setting service IP to "10.0.0.1" (read-write).
I0210 16:46:16.868933  123305 master.go:272] Node port range unspecified. Defaulting to 30000-32767.
I0210 16:46:16.868954  123305 master.go:228] Using reconciler: 
I0210 16:46:16.871224  123305 storage_factory.go:285] storing podtemplates in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"3e6dd8f8-f376-4df8-bbd9-337b61061646", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0210 16:46:16.871378  123305 clientconn.go:551] parsed scheme: ""
I0210 16:46:16.871479  123305 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0210 16:46:16.871563  123305 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0210 16:46:16.871654  123305 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0210 16:46:16.872624  123305 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0210 16:46:16.872688  123305 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0210 16:46:16.873124  123305 store.go:1310] Monitoring podtemplates count at <storage-prefix>//podtemplates
I0210 16:46:16.873208  123305 reflector.go:170] Listing and watching *core.PodTemplate from storage/cacher.go:/podtemplates
I0210 16:46:16.874664  123305 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"3e6dd8f8-f376-4df8-bbd9-337b61061646", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0210 16:46:16.874901  123305 clientconn.go:551] parsed scheme: ""
I0210 16:46:16.874920  123305 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0210 16:46:16.874956  123305 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0210 16:46:16.874996  123305 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0210 16:46:16.875379  123305 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0210 16:46:16.875436  123305 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0210 16:46:16.875450  123305 store.go:1310] Monitoring events count at <storage-prefix>//events
I0210 16:46:16.875507  123305 storage_factory.go:285] storing limitranges in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"3e6dd8f8-f376-4df8-bbd9-337b61061646", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0210 16:46:16.875580  123305 clientconn.go:551] parsed scheme: ""
I0210 16:46:16.875590  123305 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0210 16:46:16.875618  123305 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0210 16:46:16.875650  123305 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0210 16:46:16.876436  123305 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0210 16:46:16.876463  123305 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0210 16:46:16.876778  123305 store.go:1310] Monitoring limitranges count at <storage-prefix>//limitranges
I0210 16:46:16.876811  123305 storage_factory.go:285] storing resourcequotas in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"3e6dd8f8-f376-4df8-bbd9-337b61061646", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0210 16:46:16.876880  123305 clientconn.go:551] parsed scheme: ""
I0210 16:46:16.876898  123305 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0210 16:46:16.876926  123305 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0210 16:46:16.876925  123305 reflector.go:170] Listing and watching *core.LimitRange from storage/cacher.go:/limitranges
I0210 16:46:16.876971  123305 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0210 16:46:16.877359  123305 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0210 16:46:16.877427  123305 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0210 16:46:16.877645  123305 store.go:1310] Monitoring resourcequotas count at <storage-prefix>//resourcequotas
I0210 16:46:16.877790  123305 storage_factory.go:285] storing secrets in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"3e6dd8f8-f376-4df8-bbd9-337b61061646", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0210 16:46:16.877859  123305 clientconn.go:551] parsed scheme: ""
I0210 16:46:16.877880  123305 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0210 16:46:16.877918  123305 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0210 16:46:16.877961  123305 reflector.go:170] Listing and watching *core.ResourceQuota from storage/cacher.go:/resourcequotas
I0210 16:46:16.878088  123305 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0210 16:46:16.878311  123305 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0210 16:46:16.878560  123305 store.go:1310] Monitoring secrets count at <storage-prefix>//secrets
I0210 16:46:16.878710  123305 storage_factory.go:285] storing persistentvolumes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"3e6dd8f8-f376-4df8-bbd9-337b61061646", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0210 16:46:16.878788  123305 clientconn.go:551] parsed scheme: ""
I0210 16:46:16.878808  123305 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0210 16:46:16.878844  123305 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0210 16:46:16.878876  123305 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0210 16:46:16.878897  123305 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0210 16:46:16.878907  123305 reflector.go:170] Listing and watching *core.Secret from storage/cacher.go:/secrets
I0210 16:46:16.879156  123305 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0210 16:46:16.879210  123305 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0210 16:46:16.879378  123305 store.go:1310] Monitoring persistentvolumes count at <storage-prefix>//persistentvolumes
I0210 16:46:16.879523  123305 storage_factory.go:285] storing persistentvolumeclaims in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"3e6dd8f8-f376-4df8-bbd9-337b61061646", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0210 16:46:16.879551  123305 reflector.go:170] Listing and watching *core.PersistentVolume from storage/cacher.go:/persistentvolumes
I0210 16:46:16.879586  123305 clientconn.go:551] parsed scheme: ""
I0210 16:46:16.879597  123305 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0210 16:46:16.879623  123305 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0210 16:46:16.879654  123305 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0210 16:46:16.880048  123305 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0210 16:46:16.880307  123305 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0210 16:46:16.880348  123305 store.go:1310] Monitoring persistentvolumeclaims count at <storage-prefix>//persistentvolumeclaims
I0210 16:46:16.880370  123305 reflector.go:170] Listing and watching *core.PersistentVolumeClaim from storage/cacher.go:/persistentvolumeclaims
I0210 16:46:16.880518  123305 storage_factory.go:285] storing configmaps in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"3e6dd8f8-f376-4df8-bbd9-337b61061646", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0210 16:46:16.881469  123305 clientconn.go:551] parsed scheme: ""
I0210 16:46:16.881764  123305 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0210 16:46:16.881812  123305 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0210 16:46:16.881869  123305 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0210 16:46:16.882137  123305 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0210 16:46:16.882223  123305 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0210 16:46:16.882419  123305 store.go:1310] Monitoring configmaps count at <storage-prefix>//configmaps
I0210 16:46:16.882511  123305 reflector.go:170] Listing and watching *core.ConfigMap from storage/cacher.go:/configmaps
I0210 16:46:16.882585  123305 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"3e6dd8f8-f376-4df8-bbd9-337b61061646", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0210 16:46:16.882690  123305 clientconn.go:551] parsed scheme: ""
I0210 16:46:16.882711  123305 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0210 16:46:16.882784  123305 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0210 16:46:16.882859  123305 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0210 16:46:16.883140  123305 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0210 16:46:16.883251  123305 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0210 16:46:16.883413  123305 store.go:1310] Monitoring namespaces count at <storage-prefix>//namespaces
I0210 16:46:16.883525  123305 reflector.go:170] Listing and watching *core.Namespace from storage/cacher.go:/namespaces
I0210 16:46:16.883646  123305 storage_factory.go:285] storing endpoints in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"3e6dd8f8-f376-4df8-bbd9-337b61061646", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0210 16:46:16.883776  123305 clientconn.go:551] parsed scheme: ""
I0210 16:46:16.883796  123305 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0210 16:46:16.883827  123305 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0210 16:46:16.883883  123305 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0210 16:46:16.884246  123305 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0210 16:46:16.884291  123305 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0210 16:46:16.884804  123305 store.go:1310] Monitoring endpoints count at <storage-prefix>//endpoints
I0210 16:46:16.884841  123305 reflector.go:170] Listing and watching *core.Endpoints from storage/cacher.go:/endpoints
I0210 16:46:16.884963  123305 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"3e6dd8f8-f376-4df8-bbd9-337b61061646", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0210 16:46:16.885045  123305 clientconn.go:551] parsed scheme: ""
I0210 16:46:16.885063  123305 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0210 16:46:16.885096  123305 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0210 16:46:16.885157  123305 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0210 16:46:16.885455  123305 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0210 16:46:16.885756  123305 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0210 16:46:16.886005  123305 store.go:1310] Monitoring nodes count at <storage-prefix>//nodes
I0210 16:46:16.886045  123305 reflector.go:170] Listing and watching *core.Node from storage/cacher.go:/nodes
I0210 16:46:16.886213  123305 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"3e6dd8f8-f376-4df8-bbd9-337b61061646", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0210 16:46:16.886284  123305 clientconn.go:551] parsed scheme: ""
I0210 16:46:16.886316  123305 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0210 16:46:16.886364  123305 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0210 16:46:16.886413  123305 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0210 16:46:16.886757  123305 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0210 16:46:16.886786  123305 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0210 16:46:16.887093  123305 store.go:1310] Monitoring pods count at <storage-prefix>//pods
I0210 16:46:16.887206  123305 reflector.go:170] Listing and watching *core.Pod from storage/cacher.go:/pods
I0210 16:46:16.887252  123305 storage_factory.go:285] storing serviceaccounts in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"3e6dd8f8-f376-4df8-bbd9-337b61061646", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0210 16:46:16.887331  123305 clientconn.go:551] parsed scheme: ""
I0210 16:46:16.887344  123305 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0210 16:46:16.887372  123305 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0210 16:46:16.887409  123305 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0210 16:46:16.888723  123305 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0210 16:46:16.888797  123305 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0210 16:46:16.888950  123305 store.go:1310] Monitoring serviceaccounts count at <storage-prefix>//serviceaccounts
I0210 16:46:16.889008  123305 reflector.go:170] Listing and watching *core.ServiceAccount from storage/cacher.go:/serviceaccounts
I0210 16:46:16.889149  123305 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"3e6dd8f8-f376-4df8-bbd9-337b61061646", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0210 16:46:16.889263  123305 clientconn.go:551] parsed scheme: ""
I0210 16:46:16.889297  123305 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0210 16:46:16.889330  123305 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0210 16:46:16.889369  123305 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0210 16:46:16.889631  123305 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0210 16:46:16.889973  123305 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0210 16:46:16.890078  123305 store.go:1310] Monitoring services count at <storage-prefix>//services
I0210 16:46:16.890107  123305 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"3e6dd8f8-f376-4df8-bbd9-337b61061646", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0210 16:46:16.890207  123305 clientconn.go:551] parsed scheme: ""
I0210 16:46:16.890219  123305 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0210 16:46:16.890257  123305 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0210 16:46:16.890325  123305 reflector.go:170] Listing and watching *core.Service from storage/cacher.go:/services
I0210 16:46:16.890514  123305 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0210 16:46:16.890762  123305 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0210 16:46:16.890792  123305 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0210 16:46:16.890878  123305 clientconn.go:551] parsed scheme: ""
I0210 16:46:16.890897  123305 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0210 16:46:16.890931  123305 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0210 16:46:16.890969  123305 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0210 16:46:16.891204  123305 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0210 16:46:16.891268  123305 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0210 16:46:16.891362  123305 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"3e6dd8f8-f376-4df8-bbd9-337b61061646", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0210 16:46:16.891429  123305 clientconn.go:551] parsed scheme: ""
I0210 16:46:16.891446  123305 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0210 16:46:16.891483  123305 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0210 16:46:16.891535  123305 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0210 16:46:16.891711  123305 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0210 16:46:16.891753  123305 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0210 16:46:16.892139  123305 store.go:1310] Monitoring replicationcontrollers count at <storage-prefix>//replicationcontrollers
I0210 16:46:16.892182  123305 reflector.go:170] Listing and watching *core.ReplicationController from storage/cacher.go:/replicationcontrollers
I0210 16:46:16.903365  123305 master.go:407] Skipping disabled API group "auditregistration.k8s.io".
I0210 16:46:16.903405  123305 master.go:415] Enabling API group "authentication.k8s.io".
I0210 16:46:16.903428  123305 master.go:415] Enabling API group "authorization.k8s.io".
I0210 16:46:16.903634  123305 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"3e6dd8f8-f376-4df8-bbd9-337b61061646", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0210 16:46:16.903750  123305 clientconn.go:551] parsed scheme: ""
I0210 16:46:16.903769  123305 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0210 16:46:16.903806  123305 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0210 16:46:16.903852  123305 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0210 16:46:16.904111  123305 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0210 16:46:16.904144  123305 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0210 16:46:16.904399  123305 store.go:1310] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I0210 16:46:16.904439  123305 reflector.go:170] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I0210 16:46:16.904567  123305 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"3e6dd8f8-f376-4df8-bbd9-337b61061646", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0210 16:46:16.904634  123305 clientconn.go:551] parsed scheme: ""
I0210 16:46:16.904646  123305 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0210 16:46:16.904675  123305 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0210 16:46:16.904710  123305 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0210 16:46:16.904940  123305 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0210 16:46:16.904986  123305 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0210 16:46:16.905178  123305 store.go:1310] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I0210 16:46:16.905246  123305 reflector.go:170] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I0210 16:46:16.905307  123305 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"3e6dd8f8-f376-4df8-bbd9-337b61061646", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0210 16:46:16.905367  123305 clientconn.go:551] parsed scheme: ""
I0210 16:46:16.905379  123305 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0210 16:46:16.905406  123305 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0210 16:46:16.905459  123305 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0210 16:46:16.905801  123305 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0210 16:46:16.906041  123305 store.go:1310] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I0210 16:46:16.906063  123305 master.go:415] Enabling API group "autoscaling".
I0210 16:46:16.906223  123305 storage_factory.go:285] storing jobs.batch in batch/v1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"3e6dd8f8-f376-4df8-bbd9-337b61061646", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0210 16:46:16.906302  123305 clientconn.go:551] parsed scheme: ""
I0210 16:46:16.906320  123305 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0210 16:46:16.906352  123305 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0210 16:46:16.906436  123305 reflector.go:170] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I0210 16:46:16.906443  123305 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0210 16:46:16.906537  123305 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0210 16:46:16.906783  123305 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0210 16:46:16.906869  123305 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0210 16:46:16.907108  123305 store.go:1310] Monitoring jobs.batch count at <storage-prefix>//jobs
I0210 16:46:16.907258  123305 storage_factory.go:285] storing cronjobs.batch in batch/v1beta1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"3e6dd8f8-f376-4df8-bbd9-337b61061646", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0210 16:46:16.907334  123305 clientconn.go:551] parsed scheme: ""
I0210 16:46:16.907352  123305 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0210 16:46:16.907380  123305 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0210 16:46:16.907428  123305 reflector.go:170] Listing and watching *batch.Job from storage/cacher.go:/jobs
I0210 16:46:16.907581  123305 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0210 16:46:16.907889  123305 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0210 16:46:16.908142  123305 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0210 16:46:16.908249  123305 store.go:1310] Monitoring cronjobs.batch count at <storage-prefix>//cronjobs
I0210 16:46:16.908278  123305 master.go:415] Enabling API group "batch".
I0210 16:46:16.908341  123305 reflector.go:170] Listing and watching *batch.CronJob from storage/cacher.go:/cronjobs
I0210 16:46:16.908412  123305 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"3e6dd8f8-f376-4df8-bbd9-337b61061646", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0210 16:46:16.908509  123305 clientconn.go:551] parsed scheme: ""
I0210 16:46:16.908527  123305 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0210 16:46:16.908561  123305 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0210 16:46:16.908600  123305 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0210 16:46:16.908861  123305 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0210 16:46:16.909134  123305 store.go:1310] Monitoring certificatesigningrequests.certificates.k8s.io count at <storage-prefix>//certificatesigningrequests
I0210 16:46:16.909151  123305 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0210 16:46:16.909160  123305 master.go:415] Enabling API group "certificates.k8s.io".
I0210 16:46:16.909193  123305 reflector.go:170] Listing and watching *certificates.CertificateSigningRequest from storage/cacher.go:/certificatesigningrequests
I0210 16:46:16.909291  123305 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"3e6dd8f8-f376-4df8-bbd9-337b61061646", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0210 16:46:16.909357  123305 clientconn.go:551] parsed scheme: ""
I0210 16:46:16.909379  123305 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0210 16:46:16.909408  123305 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0210 16:46:16.909453  123305 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0210 16:46:16.909695  123305 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0210 16:46:16.909779  123305 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0210 16:46:16.909958  123305 store.go:1310] Monitoring leases.coordination.k8s.io count at <storage-prefix>//leases
I0210 16:46:16.910006  123305 reflector.go:170] Listing and watching *coordination.Lease from storage/cacher.go:/leases
I0210 16:46:16.910101  123305 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"3e6dd8f8-f376-4df8-bbd9-337b61061646", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0210 16:46:16.910191  123305 clientconn.go:551] parsed scheme: ""
I0210 16:46:16.910203  123305 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0210 16:46:16.910232  123305 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0210 16:46:16.910265  123305 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0210 16:46:16.910451  123305 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0210 16:46:16.910479  123305 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0210 16:46:16.911396  123305 store.go:1310] Monitoring leases.coordination.k8s.io count at <storage-prefix>//leases
I0210 16:46:16.911420  123305 master.go:415] Enabling API group "coordination.k8s.io".
I0210 16:46:16.911583  123305 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"3e6dd8f8-f376-4df8-bbd9-337b61061646", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0210 16:46:16.911641  123305 reflector.go:170] Listing and watching *coordination.Lease from storage/cacher.go:/leases
I0210 16:46:16.911672  123305 clientconn.go:551] parsed scheme: ""
I0210 16:46:16.911683  123305 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0210 16:46:16.911714  123305 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0210 16:46:16.911819  123305 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0210 16:46:16.912206  123305 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0210 16:46:16.912328  123305 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0210 16:46:16.912455  123305 store.go:1310] Monitoring replicationcontrollers count at <storage-prefix>//replicationcontrollers
I0210 16:46:16.912598  123305 reflector.go:170] Listing and watching *core.ReplicationController from storage/cacher.go:/replicationcontrollers
I0210 16:46:16.912608  123305 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"3e6dd8f8-f376-4df8-bbd9-337b61061646", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0210 16:46:16.912673  123305 clientconn.go:551] parsed scheme: ""
I0210 16:46:16.912685  123305 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0210 16:46:16.912713  123305 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0210 16:46:16.912753  123305 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0210 16:46:16.912963  123305 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0210 16:46:16.913247  123305 store.go:1310] Monitoring daemonsets.apps count at <storage-prefix>//daemonsets
I0210 16:46:16.913327  123305 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0210 16:46:16.913378  123305 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"3e6dd8f8-f376-4df8-bbd9-337b61061646", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0210 16:46:16.913431  123305 reflector.go:170] Listing and watching *apps.DaemonSet from storage/cacher.go:/daemonsets
I0210 16:46:16.913453  123305 clientconn.go:551] parsed scheme: ""
I0210 16:46:16.913465  123305 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0210 16:46:16.913521  123305 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0210 16:46:16.913569  123305 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0210 16:46:16.913850  123305 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0210 16:46:16.913923  123305 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0210 16:46:16.914714  123305 store.go:1310] Monitoring deployments.apps count at <storage-prefix>//deployments
I0210 16:46:16.914797  123305 reflector.go:170] Listing and watching *apps.Deployment from storage/cacher.go:/deployments
I0210 16:46:16.914851  123305 storage_factory.go:285] storing ingresses.extensions in extensions/v1beta1, reading as extensions/__internal from storagebackend.Config{Type:"", Prefix:"3e6dd8f8-f376-4df8-bbd9-337b61061646", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0210 16:46:16.914943  123305 clientconn.go:551] parsed scheme: ""
I0210 16:46:16.914961  123305 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0210 16:46:16.915002  123305 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0210 16:46:16.915067  123305 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0210 16:46:16.915320  123305 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0210 16:46:16.915428  123305 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0210 16:46:16.915660  123305 store.go:1310] Monitoring ingresses.extensions count at <storage-prefix>//ingresses
I0210 16:46:16.915785  123305 reflector.go:170] Listing and watching *extensions.Ingress from storage/cacher.go:/ingresses
I0210 16:46:16.915835  123305 storage_factory.go:285] storing podsecuritypolicies.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"3e6dd8f8-f376-4df8-bbd9-337b61061646", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0210 16:46:16.915910  123305 clientconn.go:551] parsed scheme: ""
I0210 16:46:16.915921  123305 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0210 16:46:16.915948  123305 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0210 16:46:16.915984  123305 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0210 16:46:16.916427  123305 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0210 16:46:16.916683  123305 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0210 16:46:16.916797  123305 store.go:1310] Monitoring podsecuritypolicies.policy count at <storage-prefix>//podsecuritypolicies
I0210 16:46:16.916832  123305 reflector.go:170] Listing and watching *policy.PodSecurityPolicy from storage/cacher.go:/podsecuritypolicies
I0210 16:46:16.916937  123305 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"3e6dd8f8-f376-4df8-bbd9-337b61061646", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0210 16:46:16.917031  123305 clientconn.go:551] parsed scheme: ""
I0210 16:46:16.917048  123305 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0210 16:46:16.917076  123305 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0210 16:46:16.917127  123305 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0210 16:46:16.917644  123305 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0210 16:46:16.917674  123305 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0210 16:46:16.917995  123305 store.go:1310] Monitoring replicasets.apps count at <storage-prefix>//replicasets
I0210 16:46:16.918044  123305 reflector.go:170] Listing and watching *apps.ReplicaSet from storage/cacher.go:/replicasets
I0210 16:46:16.918129  123305 storage_factory.go:285] storing networkpolicies.networking.k8s.io in networking.k8s.io/v1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"3e6dd8f8-f376-4df8-bbd9-337b61061646", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0210 16:46:16.918223  123305 clientconn.go:551] parsed scheme: ""
I0210 16:46:16.918236  123305 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0210 16:46:16.918262  123305 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0210 16:46:16.918331  123305 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0210 16:46:16.918600  123305 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0210 16:46:16.918648  123305 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0210 16:46:16.918893  123305 store.go:1310] Monitoring networkpolicies.networking.k8s.io count at <storage-prefix>//networkpolicies
I0210 16:46:16.918923  123305 master.go:415] Enabling API group "extensions".
I0210 16:46:16.918931  123305 reflector.go:170] Listing and watching *networking.NetworkPolicy from storage/cacher.go:/networkpolicies
I0210 16:46:16.919141  123305 storage_factory.go:285] storing networkpolicies.networking.k8s.io in networking.k8s.io/v1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"3e6dd8f8-f376-4df8-bbd9-337b61061646", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0210 16:46:16.919226  123305 clientconn.go:551] parsed scheme: ""
I0210 16:46:16.919265  123305 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0210 16:46:16.919300  123305 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0210 16:46:16.919348  123305 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0210 16:46:16.919560  123305 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0210 16:46:16.920159  123305 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0210 16:46:16.920520  123305 store.go:1310] Monitoring networkpolicies.networking.k8s.io count at <storage-prefix>//networkpolicies
I0210 16:46:16.920548  123305 master.go:415] Enabling API group "networking.k8s.io".
I0210 16:46:16.920645  123305 reflector.go:170] Listing and watching *networking.NetworkPolicy from storage/cacher.go:/networkpolicies
I0210 16:46:16.920692  123305 storage_factory.go:285] storing poddisruptionbudgets.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"3e6dd8f8-f376-4df8-bbd9-337b61061646", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0210 16:46:16.920780  123305 clientconn.go:551] parsed scheme: ""
I0210 16:46:16.920801  123305 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0210 16:46:16.920842  123305 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0210 16:46:16.920888  123305 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0210 16:46:16.921263  123305 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0210 16:46:16.921322  123305 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0210 16:46:16.921743  123305 store.go:1310] Monitoring poddisruptionbudgets.policy count at <storage-prefix>//poddisruptionbudgets
I0210 16:46:16.921781  123305 reflector.go:170] Listing and watching *policy.PodDisruptionBudget from storage/cacher.go:/poddisruptionbudgets
I0210 16:46:16.921880  123305 storage_factory.go:285] storing podsecuritypolicies.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"3e6dd8f8-f376-4df8-bbd9-337b61061646", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0210 16:46:16.921966  123305 clientconn.go:551] parsed scheme: ""
I0210 16:46:16.922008  123305 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0210 16:46:16.922045  123305 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0210 16:46:16.922095  123305 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0210 16:46:16.922774  123305 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0210 16:46:16.922854  123305 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0210 16:46:16.923051  123305 store.go:1310] Monitoring podsecuritypolicies.policy count at <storage-prefix>//podsecuritypolicies
I0210 16:46:16.923070  123305 master.go:415] Enabling API group "policy".
I0210 16:46:16.923101  123305 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"3e6dd8f8-f376-4df8-bbd9-337b61061646", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0210 16:46:16.923151  123305 reflector.go:170] Listing and watching *policy.PodSecurityPolicy from storage/cacher.go:/podsecuritypolicies
I0210 16:46:16.923233  123305 clientconn.go:551] parsed scheme: ""
I0210 16:46:16.923257  123305 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0210 16:46:16.923299  123305 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0210 16:46:16.923401  123305 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0210 16:46:16.924324  123305 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0210 16:46:16.924377  123305 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0210 16:46:16.924670  123305 store.go:1310] Monitoring roles.rbac.authorization.k8s.io count at <storage-prefix>//roles
I0210 16:46:16.924763  123305 reflector.go:170] Listing and watching *rbac.Role from storage/cacher.go:/roles
I0210 16:46:16.924807  123305 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"3e6dd8f8-f376-4df8-bbd9-337b61061646", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0210 16:46:16.924885  123305 clientconn.go:551] parsed scheme: ""
I0210 16:46:16.924905  123305 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0210 16:46:16.924939  123305 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0210 16:46:16.925077  123305 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0210 16:46:16.925342  123305 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0210 16:46:16.925377  123305 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0210 16:46:16.925595  123305 store.go:1310] Monitoring rolebindings.rbac.authorization.k8s.io count at <storage-prefix>//rolebindings
I0210 16:46:16.925631  123305 reflector.go:170] Listing and watching *rbac.RoleBinding from storage/cacher.go:/rolebindings
I0210 16:46:16.925634  123305 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"3e6dd8f8-f376-4df8-bbd9-337b61061646", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0210 16:46:16.925716  123305 clientconn.go:551] parsed scheme: ""
I0210 16:46:16.925734  123305 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0210 16:46:16.925776  123305 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0210 16:46:16.925820  123305 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0210 16:46:16.926201  123305 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0210 16:46:16.926283  123305 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0210 16:46:16.926403  123305 store.go:1310] Monitoring clusterroles.rbac.authorization.k8s.io count at <storage-prefix>//clusterroles
I0210 16:46:16.926436  123305 reflector.go:170] Listing and watching *rbac.ClusterRole from storage/cacher.go:/clusterroles
I0210 16:46:16.926569  123305 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"3e6dd8f8-f376-4df8-bbd9-337b61061646", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0210 16:46:16.926654  123305 clientconn.go:551] parsed scheme: ""
I0210 16:46:16.926674  123305 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0210 16:46:16.926707  123305 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0210 16:46:16.926835  123305 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0210 16:46:16.927173  123305 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0210 16:46:16.927527  123305 store.go:1310] Monitoring clusterrolebindings.rbac.authorization.k8s.io count at <storage-prefix>//clusterrolebindings
I0210 16:46:16.927580  123305 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"3e6dd8f8-f376-4df8-bbd9-337b61061646", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0210 16:46:16.927649  123305 clientconn.go:551] parsed scheme: ""
I0210 16:46:16.927669  123305 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0210 16:46:16.927700  123305 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0210 16:46:16.927766  123305 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0210 16:46:16.927799  123305 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0210 16:46:16.927888  123305 reflector.go:170] Listing and watching *rbac.ClusterRoleBinding from storage/cacher.go:/clusterrolebindings
I0210 16:46:16.928659  123305 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0210 16:46:16.928697  123305 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0210 16:46:16.928876  123305 store.go:1310] Monitoring roles.rbac.authorization.k8s.io count at <storage-prefix>//roles
I0210 16:46:16.928907  123305 reflector.go:170] Listing and watching *rbac.Role from storage/cacher.go:/roles
I0210 16:46:16.929031  123305 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"3e6dd8f8-f376-4df8-bbd9-337b61061646", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0210 16:46:16.929112  123305 clientconn.go:551] parsed scheme: ""
I0210 16:46:16.929131  123305 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0210 16:46:16.929159  123305 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0210 16:46:16.929208  123305 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0210 16:46:16.929423  123305 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0210 16:46:16.929658  123305 store.go:1310] Monitoring rolebindings.rbac.authorization.k8s.io count at <storage-prefix>//rolebindings
I0210 16:46:16.929698  123305 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"3e6dd8f8-f376-4df8-bbd9-337b61061646", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0210 16:46:16.929771  123305 clientconn.go:551] parsed scheme: ""
I0210 16:46:16.929791  123305 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0210 16:46:16.929821  123305 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0210 16:46:16.929834  123305 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0210 16:46:16.929869  123305 reflector.go:170] Listing and watching *rbac.RoleBinding from storage/cacher.go:/rolebindings
I0210 16:46:16.929926  123305 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0210 16:46:16.930284  123305 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0210 16:46:16.930365  123305 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0210 16:46:16.930543  123305 store.go:1310] Monitoring clusterroles.rbac.authorization.k8s.io count at <storage-prefix>//clusterroles
I0210 16:46:16.930601  123305 reflector.go:170] Listing and watching *rbac.ClusterRole from storage/cacher.go:/clusterroles
I0210 16:46:16.930674  123305 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"3e6dd8f8-f376-4df8-bbd9-337b61061646", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0210 16:46:16.930752  123305 clientconn.go:551] parsed scheme: ""
I0210 16:46:16.930770  123305 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0210 16:46:16.930798  123305 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0210 16:46:16.930832  123305 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0210 16:46:16.931061  123305 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0210 16:46:16.931090  123305 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0210 16:46:16.931383  123305 store.go:1310] Monitoring clusterrolebindings.rbac.authorization.k8s.io count at <storage-prefix>//clusterrolebindings
I0210 16:46:16.931437  123305 master.go:415] Enabling API group "rbac.authorization.k8s.io".
I0210 16:46:16.931578  123305 reflector.go:170] Listing and watching *rbac.ClusterRoleBinding from storage/cacher.go:/clusterrolebindings
I0210 16:46:16.933076  123305 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1beta1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"3e6dd8f8-f376-4df8-bbd9-337b61061646", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0210 16:46:16.933194  123305 clientconn.go:551] parsed scheme: ""
I0210 16:46:16.933230  123305 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0210 16:46:16.933272  123305 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0210 16:46:16.933330  123305 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0210 16:46:16.933680  123305 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0210 16:46:16.933906  123305 store.go:1310] Monitoring priorityclasses.scheduling.k8s.io count at <storage-prefix>//priorityclasses
I0210 16:46:16.933931  123305 master.go:415] Enabling API group "scheduling.k8s.io".
I0210 16:46:16.933945  123305 master.go:407] Skipping disabled API group "settings.k8s.io".
I0210 16:46:16.934073  123305 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"3e6dd8f8-f376-4df8-bbd9-337b61061646", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0210 16:46:16.934152  123305 clientconn.go:551] parsed scheme: ""
I0210 16:46:16.934185  123305 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0210 16:46:16.934214  123305 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0210 16:46:16.934303  123305 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0210 16:46:16.934336  123305 reflector.go:170] Listing and watching *scheduling.PriorityClass from storage/cacher.go:/priorityclasses
I0210 16:46:16.934600  123305 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0210 16:46:16.935849  123305 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0210 16:46:16.936238  123305 store.go:1310] Monitoring storageclasses.storage.k8s.io count at <storage-prefix>//storageclasses
I0210 16:46:16.936279  123305 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"3e6dd8f8-f376-4df8-bbd9-337b61061646", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0210 16:46:16.936347  123305 clientconn.go:551] parsed scheme: ""
I0210 16:46:16.936367  123305 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0210 16:46:16.936391  123305 reflector.go:170] Listing and watching *storage.StorageClass from storage/cacher.go:/storageclasses
I0210 16:46:16.936399  123305 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0210 16:46:16.936520  123305 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0210 16:46:16.936567  123305 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0210 16:46:16.937434  123305 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0210 16:46:16.937517  123305 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0210 16:46:16.937703  123305 store.go:1310] Monitoring volumeattachments.storage.k8s.io count at <storage-prefix>//volumeattachments
I0210 16:46:16.937792  123305 reflector.go:170] Listing and watching *storage.VolumeAttachment from storage/cacher.go:/volumeattachments
I0210 16:46:16.938143  123305 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"3e6dd8f8-f376-4df8-bbd9-337b61061646", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0210 16:46:16.938259  123305 clientconn.go:551] parsed scheme: ""
I0210 16:46:16.938286  123305 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0210 16:46:16.938333  123305 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0210 16:46:16.938402  123305 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0210 16:46:16.938719  123305 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0210 16:46:16.938798  123305 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0210 16:46:16.938976  123305 store.go:1310] Monitoring storageclasses.storage.k8s.io count at <storage-prefix>//storageclasses
I0210 16:46:16.939000  123305 reflector.go:170] Listing and watching *storage.StorageClass from storage/cacher.go:/storageclasses
I0210 16:46:16.939118  123305 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"3e6dd8f8-f376-4df8-bbd9-337b61061646", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0210 16:46:16.939202  123305 clientconn.go:551] parsed scheme: ""
I0210 16:46:16.939220  123305 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0210 16:46:16.939246  123305 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0210 16:46:16.939292  123305 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0210 16:46:16.939485  123305 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0210 16:46:16.939540  123305 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0210 16:46:16.939702  123305 store.go:1310] Monitoring volumeattachments.storage.k8s.io count at <storage-prefix>//volumeattachments
I0210 16:46:16.939725  123305 master.go:415] Enabling API group "storage.k8s.io".
I0210 16:46:16.939742  123305 reflector.go:170] Listing and watching *storage.VolumeAttachment from storage/cacher.go:/volumeattachments
I0210 16:46:16.939852  123305 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"3e6dd8f8-f376-4df8-bbd9-337b61061646", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0210 16:46:16.939922  123305 clientconn.go:551] parsed scheme: ""
I0210 16:46:16.939940  123305 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0210 16:46:16.939972  123305 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0210 16:46:16.940038  123305 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0210 16:46:16.940274  123305 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0210 16:46:16.940315  123305 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0210 16:46:16.940569  123305 store.go:1310] Monitoring deployments.apps count at <storage-prefix>//deployments
I0210 16:46:16.940703  123305 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"3e6dd8f8-f376-4df8-bbd9-337b61061646", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0210 16:46:16.940775  123305 clientconn.go:551] parsed scheme: ""
I0210 16:46:16.940790  123305 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0210 16:46:16.940817  123305 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0210 16:46:16.940868  123305 reflector.go:170] Listing and watching *apps.Deployment from storage/cacher.go:/deployments
I0210 16:46:16.941006  123305 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0210 16:46:16.941227  123305 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0210 16:46:16.941255  123305 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0210 16:46:16.941530  123305 store.go:1310] Monitoring statefulsets.apps count at <storage-prefix>//statefulsets
I0210 16:46:16.941644  123305 storage_factory.go:285] storing controllerrevisions.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"3e6dd8f8-f376-4df8-bbd9-337b61061646", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0210 16:46:16.941713  123305 clientconn.go:551] parsed scheme: ""
I0210 16:46:16.941725  123305 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0210 16:46:16.941754  123305 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0210 16:46:16.941800  123305 reflector.go:170] Listing and watching *apps.StatefulSet from storage/cacher.go:/statefulsets
I0210 16:46:16.941989  123305 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0210 16:46:16.942177  123305 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0210 16:46:16.942309  123305 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0210 16:46:16.942402  123305 store.go:1310] Monitoring controllerrevisions.apps count at <storage-prefix>//controllerrevisions
I0210 16:46:16.942573  123305 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"3e6dd8f8-f376-4df8-bbd9-337b61061646", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0210 16:46:16.942650  123305 clientconn.go:551] parsed scheme: ""
I0210 16:46:16.942662  123305 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0210 16:46:16.942689  123305 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0210 16:46:16.942739  123305 reflector.go:170] Listing and watching *apps.ControllerRevision from storage/cacher.go:/controllerrevisions
I0210 16:46:16.942853  123305 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0210 16:46:16.943055  123305 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0210 16:46:16.943135  123305 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0210 16:46:16.943296  123305 store.go:1310] Monitoring deployments.apps count at <storage-prefix>//deployments
I0210 16:46:16.943362  123305 reflector.go:170] Listing and watching *apps.Deployment from storage/cacher.go:/deployments
I0210 16:46:16.943426  123305 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"3e6dd8f8-f376-4df8-bbd9-337b61061646", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0210 16:46:16.943514  123305 clientconn.go:551] parsed scheme: ""
I0210 16:46:16.943533  123305 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0210 16:46:16.943596  123305 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0210 16:46:16.943656  123305 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0210 16:46:16.943832  123305 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0210 16:46:16.943874  123305 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0210 16:46:16.944114  123305 store.go:1310] Monitoring statefulsets.apps count at <storage-prefix>//statefulsets
I0210 16:46:16.944227  123305 reflector.go:170] Listing and watching *apps.StatefulSet from storage/cacher.go:/statefulsets
I0210 16:46:16.944288  123305 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"3e6dd8f8-f376-4df8-bbd9-337b61061646", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0210 16:46:16.944357  123305 clientconn.go:551] parsed scheme: ""
I0210 16:46:16.944377  123305 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0210 16:46:16.944403  123305 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0210 16:46:16.944451  123305 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0210 16:46:16.944699  123305 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0210 16:46:16.944727  123305 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0210 16:46:16.944963  123305 store.go:1310] Monitoring daemonsets.apps count at <storage-prefix>//daemonsets
I0210 16:46:16.944983  123305 reflector.go:170] Listing and watching *apps.DaemonSet from storage/cacher.go:/daemonsets
I0210 16:46:16.945100  123305 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"3e6dd8f8-f376-4df8-bbd9-337b61061646", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0210 16:46:16.945191  123305 clientconn.go:551] parsed scheme: ""
I0210 16:46:16.945218  123305 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0210 16:46:16.945244  123305 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0210 16:46:16.945294  123305 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0210 16:46:16.945534  123305 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0210 16:46:16.945561  123305 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0210 16:46:16.946194  123305 store.go:1310] Monitoring replicasets.apps count at <storage-prefix>//replicasets
I0210 16:46:16.946444  123305 storage_factory.go:285] storing controllerrevisions.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"3e6dd8f8-f376-4df8-bbd9-337b61061646", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0210 16:46:16.946544  123305 clientconn.go:551] parsed scheme: ""
I0210 16:46:16.946565  123305 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0210 16:46:16.946602  123305 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0210 16:46:16.946231  123305 reflector.go:170] Listing and watching *apps.ReplicaSet from storage/cacher.go:/replicasets
I0210 16:46:16.946799  123305 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0210 16:46:16.947068  123305 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0210 16:46:16.947340  123305 store.go:1310] Monitoring controllerrevisions.apps count at <storage-prefix>//controllerrevisions
I0210 16:46:16.947471  123305 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"3e6dd8f8-f376-4df8-bbd9-337b61061646", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0210 16:46:16.947560  123305 clientconn.go:551] parsed scheme: ""
I0210 16:46:16.947591  123305 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0210 16:46:16.947704  123305 reflector.go:170] Listing and watching *apps.ControllerRevision from storage/cacher.go:/controllerrevisions
I0210 16:46:16.947751  123305 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0210 16:46:16.947840  123305 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0210 16:46:16.947903  123305 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0210 16:46:16.948565  123305 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0210 16:46:16.948652  123305 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0210 16:46:16.948830  123305 store.go:1310] Monitoring deployments.apps count at <storage-prefix>//deployments
I0210 16:46:16.948885  123305 reflector.go:170] Listing and watching *apps.Deployment from storage/cacher.go:/deployments
I0210 16:46:16.948974  123305 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"3e6dd8f8-f376-4df8-bbd9-337b61061646", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0210 16:46:16.949085  123305 clientconn.go:551] parsed scheme: ""
I0210 16:46:16.949144  123305 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0210 16:46:16.949249  123305 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0210 16:46:16.949293  123305 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0210 16:46:16.949602  123305 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0210 16:46:16.949840  123305 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0210 16:46:16.949913  123305 store.go:1310] Monitoring statefulsets.apps count at <storage-prefix>//statefulsets
I0210 16:46:16.949997  123305 reflector.go:170] Listing and watching *apps.StatefulSet from storage/cacher.go:/statefulsets
I0210 16:46:16.950083  123305 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"3e6dd8f8-f376-4df8-bbd9-337b61061646", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0210 16:46:16.950244  123305 clientconn.go:551] parsed scheme: ""
I0210 16:46:16.950279  123305 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0210 16:46:16.950314  123305 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0210 16:46:16.950353  123305 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0210 16:46:16.951038  123305 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0210 16:46:16.951075  123305 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0210 16:46:16.951449  123305 store.go:1310] Monitoring daemonsets.apps count at <storage-prefix>//daemonsets
I0210 16:46:16.951575  123305 reflector.go:170] Listing and watching *apps.DaemonSet from storage/cacher.go:/daemonsets
I0210 16:46:16.951612  123305 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"3e6dd8f8-f376-4df8-bbd9-337b61061646", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0210 16:46:16.951682  123305 clientconn.go:551] parsed scheme: ""
I0210 16:46:16.951698  123305 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0210 16:46:16.951723  123305 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0210 16:46:16.951794  123305 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0210 16:46:16.952103  123305 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0210 16:46:16.952427  123305 store.go:1310] Monitoring replicasets.apps count at <storage-prefix>//replicasets
I0210 16:46:16.952552  123305 storage_factory.go:285] storing controllerrevisions.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"3e6dd8f8-f376-4df8-bbd9-337b61061646", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0210 16:46:16.952602  123305 clientconn.go:551] parsed scheme: ""
I0210 16:46:16.952614  123305 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0210 16:46:16.952640  123305 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0210 16:46:16.952822  123305 reflector.go:170] Listing and watching *apps.ReplicaSet from storage/cacher.go:/replicasets
I0210 16:46:16.952876  123305 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0210 16:46:16.953560  123305 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0210 16:46:16.953820  123305 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0210 16:46:16.953873  123305 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0210 16:46:16.954040  123305 store.go:1310] Monitoring controllerrevisions.apps count at <storage-prefix>//controllerrevisions
I0210 16:46:16.954056  123305 master.go:415] Enabling API group "apps".
I0210 16:46:16.954084  123305 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"3e6dd8f8-f376-4df8-bbd9-337b61061646", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0210 16:46:16.954118  123305 reflector.go:170] Listing and watching *apps.ControllerRevision from storage/cacher.go:/controllerrevisions
I0210 16:46:16.954149  123305 clientconn.go:551] parsed scheme: ""
I0210 16:46:16.954159  123305 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0210 16:46:16.954200  123305 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0210 16:46:16.954243  123305 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0210 16:46:16.955605  123305 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0210 16:46:16.955659  123305 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0210 16:46:16.955922  123305 store.go:1310] Monitoring validatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//validatingwebhookconfigurations
I0210 16:46:16.955953  123305 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"3e6dd8f8-f376-4df8-bbd9-337b61061646", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0210 16:46:16.955974  123305 reflector.go:170] Listing and watching *admissionregistration.ValidatingWebhookConfiguration from storage/cacher.go:/validatingwebhookconfigurations
I0210 16:46:16.956024  123305 clientconn.go:551] parsed scheme: ""
I0210 16:46:16.956034  123305 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0210 16:46:16.956063  123305 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0210 16:46:16.956210  123305 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0210 16:46:16.956422  123305 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0210 16:46:16.956511  123305 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0210 16:46:16.956684  123305 store.go:1310] Monitoring mutatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//mutatingwebhookconfigurations
I0210 16:46:16.956700  123305 master.go:415] Enabling API group "admissionregistration.k8s.io".
I0210 16:46:16.956744  123305 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"3e6dd8f8-f376-4df8-bbd9-337b61061646", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0210 16:46:16.956867  123305 reflector.go:170] Listing and watching *admissionregistration.MutatingWebhookConfiguration from storage/cacher.go:/mutatingwebhookconfigurations
I0210 16:46:16.956906  123305 clientconn.go:551] parsed scheme: ""
I0210 16:46:16.956916  123305 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0210 16:46:16.956944  123305 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0210 16:46:16.957055  123305 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0210 16:46:16.957548  123305 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0210 16:46:16.957605  123305 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0210 16:46:16.957656  123305 store.go:1310] Monitoring events count at <storage-prefix>//events
I0210 16:46:16.957684  123305 master.go:415] Enabling API group "events.k8s.io".
W0210 16:46:16.963756  123305 genericapiserver.go:330] Skipping API batch/v2alpha1 because it has no resources.
W0210 16:46:16.979045  123305 genericapiserver.go:330] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
W0210 16:46:16.979673  123305 genericapiserver.go:330] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
W0210 16:46:16.981820  123305 genericapiserver.go:330] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
I0210 16:46:16.998341  123305 healthz.go:170] healthz check etcd failed: etcd client connection not yet established
I0210 16:46:16.998374  123305 healthz.go:170] healthz check poststarthook/bootstrap-controller failed: not finished
I0210 16:46:16.998383  123305 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0210 16:46:16.998393  123305 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0210 16:46:16.998400  123305 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0210 16:46:16.998560  123305 wrap.go:47] GET /healthz: (318.95µs) 500
goroutine 28454 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00da5c700, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00da5c700, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc003c1d1a0, 0x1f4)
net/http.Error(0x7f1e0bf50c00, 0xc00c94c1b0, 0xc00005d1e0, 0x18a, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f1e0bf50c00, 0xc00c94c1b0, 0xc00cfaf600)
net/http.HandlerFunc.ServeHTTP(0xc003c1c9e0, 0x7f1e0bf50c00, 0xc00c94c1b0, 0xc00cfaf600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc017e25d40, 0x7f1e0bf50c00, 0xc00c94c1b0, 0xc00cfaf600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00fe2c380, 0x7f1e0bf50c00, 0xc00c94c1b0, 0xc00cfaf600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x41c2399, 0xe, 0xc00e049290, 0xc00fe2c380, 0x7f1e0bf50c00, 0xc00c94c1b0, 0xc00cfaf600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f1e0bf50c00, 0xc00c94c1b0, 0xc00cfaf600)
net/http.HandlerFunc.ServeHTTP(0xc0049c5400, 0x7f1e0bf50c00, 0xc00c94c1b0, 0xc00cfaf600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f1e0bf50c00, 0xc00c94c1b0, 0xc00cfaf600)
net/http.HandlerFunc.ServeHTTP(0xc003f968a0, 0x7f1e0bf50c00, 0xc00c94c1b0, 0xc00cfaf600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f1e0bf50c00, 0xc00c94c1b0, 0xc00cfaf600)
net/http.HandlerFunc.ServeHTTP(0xc0049c5440, 0x7f1e0bf50c00, 0xc00c94c1b0, 0xc00cfaf600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f1e0bf50c00, 0xc00c94c1b0, 0xc00cfaf500)
net/http.HandlerFunc.ServeHTTP(0xc002de3ef0, 0x7f1e0bf50c00, 0xc00c94c1b0, 0xc00cfaf500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00fe17ec0, 0xc004bfaf60, 0x5ec0200, 0xc00c94c1b0, 0xc00cfaf500)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[-]poststarthook/bootstrap-controller failed: reason withheld\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:33442]
I0210 16:46:16.999880  123305 wrap.go:47] GET /api/v1/services: (1.342107ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33444]
I0210 16:46:17.003157  123305 wrap.go:47] GET /api/v1/services: (937.818µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33444]
I0210 16:46:17.006031  123305 wrap.go:47] GET /api/v1/namespaces/default: (999.518µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33444]
I0210 16:46:17.008228  123305 wrap.go:47] POST /api/v1/namespaces: (1.70665ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33444]
I0210 16:46:17.009607  123305 wrap.go:47] GET /api/v1/namespaces/default/services/kubernetes: (975.023µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33444]
I0210 16:46:17.013278  123305 wrap.go:47] POST /api/v1/namespaces/default/services: (3.303344ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33444]
I0210 16:46:17.014699  123305 wrap.go:47] GET /api/v1/namespaces/default/endpoints/kubernetes: (984.324µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33444]
I0210 16:46:17.016759  123305 wrap.go:47] POST /api/v1/namespaces/default/endpoints: (1.674875ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33444]
I0210 16:46:17.018292  123305 wrap.go:47] GET /api/v1/namespaces/default: (1.046299ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33444]
I0210 16:46:17.018379  123305 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.059624ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33442]
I0210 16:46:17.019694  123305 wrap.go:47] GET /api/v1/services: (1.257594ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33446]
I0210 16:46:17.020079  123305 wrap.go:47] GET /api/v1/services: (978.107µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33448]
I0210 16:46:17.020176  123305 wrap.go:47] POST /api/v1/namespaces: (1.410831ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33444]
I0210 16:46:17.020198  123305 wrap.go:47] GET /api/v1/namespaces/default/services/kubernetes: (1.573437ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33442]
I0210 16:46:17.021290  123305 wrap.go:47] GET /api/v1/namespaces/kube-public: (838.884µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33444]
I0210 16:46:17.021991  123305 wrap.go:47] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.431046ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33446]
I0210 16:46:17.023189  123305 wrap.go:47] POST /api/v1/namespaces: (1.544706ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33444]
I0210 16:46:17.024342  123305 wrap.go:47] GET /api/v1/namespaces/kube-node-lease: (789.015µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33444]
I0210 16:46:17.025976  123305 wrap.go:47] POST /api/v1/namespaces: (1.194491ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33444]
I0210 16:46:17.099268  123305 healthz.go:170] healthz check etcd failed: etcd client connection not yet established
I0210 16:46:17.099298  123305 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0210 16:46:17.099308  123305 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0210 16:46:17.099314  123305 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0210 16:46:17.099445  123305 wrap.go:47] GET /healthz: (300.288µs) 500
goroutine 28449 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0035c3f10, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0035c3f10, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc003dc91a0, 0x1f4)
net/http.Error(0x7f1e0bf50c00, 0xc000186ff0, 0xc00a4e4900, 0x175, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f1e0bf50c00, 0xc000186ff0, 0xc012f12a00)
net/http.HandlerFunc.ServeHTTP(0xc003c1c9e0, 0x7f1e0bf50c00, 0xc000186ff0, 0xc012f12a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc017e25d40, 0x7f1e0bf50c00, 0xc000186ff0, 0xc012f12a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00fe2c380, 0x7f1e0bf50c00, 0xc000186ff0, 0xc012f12a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x41c2399, 0xe, 0xc00e049290, 0xc00fe2c380, 0x7f1e0bf50c00, 0xc000186ff0, 0xc012f12a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f1e0bf50c00, 0xc000186ff0, 0xc012f12a00)
net/http.HandlerFunc.ServeHTTP(0xc0049c5400, 0x7f1e0bf50c00, 0xc000186ff0, 0xc012f12a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f1e0bf50c00, 0xc000186ff0, 0xc012f12a00)
net/http.HandlerFunc.ServeHTTP(0xc003f968a0, 0x7f1e0bf50c00, 0xc000186ff0, 0xc012f12a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f1e0bf50c00, 0xc000186ff0, 0xc012f12a00)
net/http.HandlerFunc.ServeHTTP(0xc0049c5440, 0x7f1e0bf50c00, 0xc000186ff0, 0xc012f12a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f1e0bf50c00, 0xc000186ff0, 0xc012f12900)
net/http.HandlerFunc.ServeHTTP(0xc002de3ef0, 0x7f1e0bf50c00, 0xc000186ff0, 0xc012f12900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00fd6ec60, 0xc004bfaf60, 0x5ec0200, 0xc000186ff0, 0xc012f12900)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:33444]
I0210 16:46:17.199283  123305 healthz.go:170] healthz check etcd failed: etcd client connection not yet established
I0210 16:46:17.199316  123305 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0210 16:46:17.199326  123305 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0210 16:46:17.199334  123305 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0210 16:46:17.199465  123305 wrap.go:47] GET /healthz: (303.367µs) 500
I0210 16:46:17.299381  123305 healthz.go:170] healthz check etcd failed: etcd client connection not yet established
I0210 16:46:17.299415  123305 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0210 16:46:17.299426  123305 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0210 16:46:17.299433  123305 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0210 16:46:17.299598  123305 wrap.go:47] GET /healthz: (339.943µs) 500
I0210 16:46:17.399313  123305 healthz.go:170] healthz check etcd failed: etcd client connection not yet established
I0210 16:46:17.399347  123305 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0210 16:46:17.399357  123305 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0210 16:46:17.399364  123305 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0210 16:46:17.399531  123305 wrap.go:47] GET /healthz: (316.524µs) 500
I0210 16:46:17.499339  123305 healthz.go:170] healthz check etcd failed: etcd client connection not yet established
I0210 16:46:17.499370  123305 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0210 16:46:17.499389  123305 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0210 16:46:17.499396  123305 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0210 16:46:17.499568  123305 wrap.go:47] GET /healthz: (339.691µs) 500
I0210 16:46:17.599360  123305 healthz.go:170] healthz check etcd failed: etcd client connection not yet established
I0210 16:46:17.599438  123305 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0210 16:46:17.599467  123305 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0210 16:46:17.599475  123305 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0210 16:46:17.599762  123305 wrap.go:47] GET /healthz: (536.029µs) 500
I0210 16:46:17.699902  123305 healthz.go:170] healthz check etcd failed: etcd client connection not yet established
I0210 16:46:17.699937  123305 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0210 16:46:17.699948  123305 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0210 16:46:17.699955  123305 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0210 16:46:17.700091  123305 wrap.go:47] GET /healthz: (332.364µs) 500
I0210 16:46:17.799261  123305 healthz.go:170] healthz check etcd failed: etcd client connection not yet established
I0210 16:46:17.799297  123305 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0210 16:46:17.799309  123305 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0210 16:46:17.799316  123305 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0210 16:46:17.799457  123305 wrap.go:47] GET /healthz: (310.875µs) 500
I0210 16:46:17.869218  123305 clientconn.go:551] parsed scheme: ""
I0210 16:46:17.869264  123305 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0210 16:46:17.869324  123305 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0210 16:46:17.869391  123305 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0210 16:46:17.869858  123305 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0210 16:46:17.869926  123305 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0210 16:46:17.900229  123305 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0210 16:46:17.900258  123305 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0210 16:46:17.900265  123305 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0210 16:46:17.900417  123305 wrap.go:47] GET /healthz: (1.140193ms) 500
logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:33444]
I0210 16:46:17.999187  123305 wrap.go:47] GET /apis/scheduling.k8s.io/v1beta1/priorityclasses/system-node-critical: (1.163686ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33476]
I0210 16:46:17.999200  123305 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.379304ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33446]
I0210 16:46:17.999288  123305 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.434491ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33444]
I0210 16:46:18.001203  123305 wrap.go:47] GET /api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication: (1.593935ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33446]
I0210 16:46:18.001222  123305 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.334178ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33480]
I0210 16:46:18.001442  123305 wrap.go:47] POST /apis/scheduling.k8s.io/v1beta1/priorityclasses: (1.873879ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33476]
I0210 16:46:18.001606  123305 storage_scheduling.go:91] created PriorityClass system-node-critical with value 2000001000
I0210 16:46:18.002653  123305 wrap.go:47] GET /apis/scheduling.k8s.io/v1beta1/priorityclasses/system-cluster-critical: (931.734µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33476]
I0210 16:46:18.002835  123305 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-admin: (1.305203ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33480]
I0210 16:46:18.003054  123305 wrap.go:47] POST /api/v1/namespaces/kube-system/configmaps: (1.496181ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33446]
I0210 16:46:18.003154  123305 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0210 16:46:18.003179  123305 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0210 16:46:18.003187  123305 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0210 16:46:18.003315  123305 wrap.go:47] GET /healthz: (3.974817ms) 500
goroutine 28562 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00dbe4af0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00dbe4af0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc003de16c0, 0x1f4)
net/http.Error(0x7f1e0bf50c00, 0xc00cfe04d8, 0xc017ab06e0, 0x160, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f1e0bf50c00, 0xc00cfe04d8, 0xc0131fef00)
net/http.HandlerFunc.ServeHTTP(0xc003c1c9e0, 0x7f1e0bf50c00, 0xc00cfe04d8, 0xc0131fef00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc017e25d40, 0x7f1e0bf50c00, 0xc00cfe04d8, 0xc0131fef00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00fe2c380, 0x7f1e0bf50c00, 0xc00cfe04d8, 0xc0131fef00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x41c2399, 0xe, 0xc00e049290, 0xc00fe2c380, 0x7f1e0bf50c00, 0xc00cfe04d8, 0xc0131fef00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f1e0bf50c00, 0xc00cfe04d8, 0xc0131fef00)
net/http.HandlerFunc.ServeHTTP(0xc0049c5400, 0x7f1e0bf50c00, 0xc00cfe04d8, 0xc0131fef00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f1e0bf50c00, 0xc00cfe04d8, 0xc0131fef00)
net/http.HandlerFunc.ServeHTTP(0xc003f968a0, 0x7f1e0bf50c00, 0xc00cfe04d8, 0xc0131fef00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f1e0bf50c00, 0xc00cfe04d8, 0xc0131fef00)
net/http.HandlerFunc.ServeHTTP(0xc0049c5440, 0x7f1e0bf50c00, 0xc00cfe04d8, 0xc0131fef00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f1e0bf50c00, 0xc00cfe04d8, 0xc0131fee00)
net/http.HandlerFunc.ServeHTTP(0xc002de3ef0, 0x7f1e0bf50c00, 0xc00cfe04d8, 0xc0131fee00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc0130fa900, 0xc004bfaf60, 0x5ec0200, 0xc00cfe04d8, 0xc0131fee00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:33478]
I0210 16:46:18.003887  123305 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/admin: (731.679µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33476]
I0210 16:46:18.004440  123305 wrap.go:47] POST /apis/scheduling.k8s.io/v1beta1/priorityclasses: (1.189997ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33480]
I0210 16:46:18.004982  123305 storage_scheduling.go:91] created PriorityClass system-cluster-critical with value 2000000000
I0210 16:46:18.004998  123305 storage_scheduling.go:100] all system priority classes are created successfully or already exist.
I0210 16:46:18.005015  123305 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-edit: (782.406µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33478]
I0210 16:46:18.006198  123305 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/edit: (825.953µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33480]
I0210 16:46:18.007228  123305 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-view: (765.79µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33480]
I0210 16:46:18.008321  123305 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/view: (763.543µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33480]
I0210 16:46:18.009362  123305 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/cluster-admin: (700.566µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33480]
I0210 16:46:18.011074  123305 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.291166ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33480]
I0210 16:46:18.011357  123305 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/cluster-admin
I0210 16:46:18.012420  123305 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:discovery: (857.907µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33480]
I0210 16:46:18.013943  123305 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.161188ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33480]
I0210 16:46:18.014109  123305 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:discovery
I0210 16:46:18.015438  123305 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:basic-user: (1.13341ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33480]
I0210 16:46:18.017372  123305 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.474992ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33480]
I0210 16:46:18.017925  123305 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:basic-user
I0210 16:46:18.019069  123305 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/admin: (967.702µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33480]
I0210 16:46:18.020643  123305 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.1854ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33480]
I0210 16:46:18.020812  123305 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/admin
I0210 16:46:18.024374  123305 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/edit: (3.425431ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33480]
I0210 16:46:18.026006  123305 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.307191ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33480]
I0210 16:46:18.026208  123305 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/edit
I0210 16:46:18.027273  123305 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/view: (905.723µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33480]
I0210 16:46:18.028814  123305 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.185171ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33480]
I0210 16:46:18.028979  123305 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/view
I0210 16:46:18.029945  123305 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-admin: (808.186µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33480]
I0210 16:46:18.036929  123305 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (6.627ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33480]
I0210 16:46:18.037252  123305 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-admin
I0210 16:46:18.038832  123305 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-edit: (1.241246ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33480]
I0210 16:46:18.042437  123305 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.073181ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33480]
I0210 16:46:18.042803  123305 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-edit
I0210 16:46:18.044084  123305 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-view: (1.029424ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33480]
I0210 16:46:18.046434  123305 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.775773ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33480]
I0210 16:46:18.046659  123305 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-view
I0210 16:46:18.047602  123305 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:heapster: (792.127µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33480]
I0210 16:46:18.049613  123305 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.613607ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33480]
I0210 16:46:18.049815  123305 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:heapster
I0210 16:46:18.050743  123305 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node: (754.157µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33480]
I0210 16:46:18.052890  123305 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.779387ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33480]
I0210 16:46:18.054562  123305 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:node
I0210 16:46:18.055612  123305 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-problem-detector: (848.213µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33480]
I0210 16:46:18.058342  123305 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.136249ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33480]
I0210 16:46:18.058570  123305 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:node-problem-detector
I0210 16:46:18.059669  123305 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-proxier: (887.989µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33480]
I0210 16:46:18.061811  123305 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.586451ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33480]
I0210 16:46:18.062049  123305 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:node-proxier
I0210 16:46:18.063005  123305 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kubelet-api-admin: (781.765µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33480]
I0210 16:46:18.065601  123305 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.189981ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33480]
I0210 16:46:18.065769  123305 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:kubelet-api-admin
I0210 16:46:18.070881  123305 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-bootstrapper: (4.965546ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33480]
I0210 16:46:18.073034  123305 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.650768ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33480]
I0210 16:46:18.073249  123305 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:node-bootstrapper
I0210 16:46:18.074443  123305 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:auth-delegator: (907.974µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33480]
I0210 16:46:18.076358  123305 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.409533ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33480]
I0210 16:46:18.076714  123305 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:auth-delegator
I0210 16:46:18.077884  123305 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-aggregator: (922.875µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33480]
I0210 16:46:18.080041  123305 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.707992ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33480]
I0210 16:46:18.080284  123305 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:kube-aggregator
I0210 16:46:18.081237  123305 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-controller-manager: (781.085µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33480]
I0210 16:46:18.083444  123305 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.774531ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33480]
I0210 16:46:18.083709  123305 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:kube-controller-manager
I0210 16:46:18.084906  123305 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-scheduler: (984.774µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33480]
I0210 16:46:18.086895  123305 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.600603ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33480]
I0210 16:46:18.087212  123305 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:kube-scheduler
I0210 16:46:18.088156  123305 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-dns: (785.214µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33480]
I0210 16:46:18.089837  123305 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.302129ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33480]
I0210 16:46:18.089979  123305 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:kube-dns
I0210 16:46:18.090884  123305 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:persistent-volume-provisioner: (740.921µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33480]
I0210 16:46:18.093035  123305 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.765646ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33480]
I0210 16:46:18.093264  123305 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:persistent-volume-provisioner
I0210 16:46:18.094369  123305 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:csi-external-attacher: (926.768µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33480]
I0210 16:46:18.096210  123305 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.451486ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33480]
I0210 16:46:18.096415  123305 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:csi-external-attacher
I0210 16:46:18.097706  123305 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aws-cloud-provider: (959.707µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33480]
I0210 16:46:18.099818  123305 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.550782ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33480]
I0210 16:46:18.101420  123305 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0210 16:46:18.101438  123305 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:aws-cloud-provider
I0210 16:46:18.101613  123305 wrap.go:47] GET /healthz: (2.565603ms) 500
goroutine 28679 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00ead1180, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00ead1180, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00cfa73e0, 0x1f4)
net/http.Error(0x7f1e0bf50c00, 0xc0001051b8, 0xc002dad540, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f1e0bf50c00, 0xc0001051b8, 0xc01866a000)
net/http.HandlerFunc.ServeHTTP(0xc003c1c9e0, 0x7f1e0bf50c00, 0xc0001051b8, 0xc01866a000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc017e25d40, 0x7f1e0bf50c00, 0xc0001051b8, 0xc01866a000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00fe2c380, 0x7f1e0bf50c00, 0xc0001051b8, 0xc01866a000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x41c2399, 0xe, 0xc00e049290, 0xc00fe2c380, 0x7f1e0bf50c00, 0xc0001051b8, 0xc01866a000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f1e0bf50c00, 0xc0001051b8, 0xc01866a000)
net/http.HandlerFunc.ServeHTTP(0xc0049c5400, 0x7f1e0bf50c00, 0xc0001051b8, 0xc01866a000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f1e0bf50c00, 0xc0001051b8, 0xc01866a000)
net/http.HandlerFunc.ServeHTTP(0xc003f968a0, 0x7f1e0bf50c00, 0xc0001051b8, 0xc01866a000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f1e0bf50c00, 0xc0001051b8, 0xc01866a000)
net/http.HandlerFunc.ServeHTTP(0xc0049c5440, 0x7f1e0bf50c00, 0xc0001051b8, 0xc01866a000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f1e0bf50c00, 0xc0001051b8, 0xc0133cbf00)
net/http.HandlerFunc.ServeHTTP(0xc002de3ef0, 0x7f1e0bf50c00, 0xc0001051b8, 0xc0133cbf00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc018226900, 0xc004bfaf60, 0x5ec0200, 0xc0001051b8, 0xc0133cbf00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:33446]
I0210 16:46:18.102375  123305 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:certificates.k8s.io:certificatesigningrequests:nodeclient: (770.368µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33480]
I0210 16:46:18.109824  123305 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.929733ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33480]
I0210 16:46:18.110308  123305 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:certificates.k8s.io:certificatesigningrequests:nodeclient
I0210 16:46:18.111576  123305 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:certificates.k8s.io:certificatesigningrequests:selfnodeclient: (1.035366ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33480]
I0210 16:46:18.113207  123305 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.230089ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33480]
I0210 16:46:18.113447  123305 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:certificates.k8s.io:certificatesigningrequests:selfnodeclient
I0210 16:46:18.114554  123305 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:volume-scheduler: (896.921µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33480]
I0210 16:46:18.116893  123305 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.969158ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33480]
I0210 16:46:18.117135  123305 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:volume-scheduler
I0210 16:46:18.118210  123305 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:csi-external-provisioner: (871.25µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33480]
I0210 16:46:18.119850  123305 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.235824ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33480]
I0210 16:46:18.120034  123305 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:csi-external-provisioner
I0210 16:46:18.121233  123305 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:attachdetach-controller: (1.023341ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33480]
I0210 16:46:18.123096  123305 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.499034ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33480]
I0210 16:46:18.123344  123305 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:attachdetach-controller
I0210 16:46:18.124389  123305 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:clusterrole-aggregation-controller: (833.286µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33480]
I0210 16:46:18.126300  123305 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.448523ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33480]
I0210 16:46:18.126458  123305 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:clusterrole-aggregation-controller
I0210 16:46:18.127376  123305 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:cronjob-controller: (706.439µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33480]
I0210 16:46:18.128982  123305 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.272059ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33480]
I0210 16:46:18.129458  123305 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:cronjob-controller
I0210 16:46:18.130346  123305 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:daemon-set-controller: (692.237µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33480]
I0210 16:46:18.132018  123305 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.33768ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33480]
I0210 16:46:18.132225  123305 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:daemon-set-controller
I0210 16:46:18.133299  123305 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:deployment-controller: (920.544µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33480]
I0210 16:46:18.135246  123305 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.570371ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33480]
I0210 16:46:18.135453  123305 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:deployment-controller
I0210 16:46:18.138176  123305 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:disruption-controller: (1.765247ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33480]
I0210 16:46:18.140636  123305 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.028603ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33480]
I0210 16:46:18.140846  123305 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:disruption-controller
I0210 16:46:18.142199  123305 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:endpoint-controller: (1.07997ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33480]
I0210 16:46:18.144250  123305 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.579363ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33480]
I0210 16:46:18.144598  123305 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:endpoint-controller
I0210 16:46:18.145809  123305 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:expand-controller: (1.021488ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33480]
I0210 16:46:18.148048  123305 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.78917ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33480]
I0210 16:46:18.148307  123305 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:expand-controller
I0210 16:46:18.149100  123305 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:generic-garbage-collector: (642.409µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33480]
I0210 16:46:18.150620  123305 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.165957ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33480]
I0210 16:46:18.150775  123305 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:generic-garbage-collector
I0210 16:46:18.151697  123305 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:horizontal-pod-autoscaler: (762.989µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33480]
I0210 16:46:18.153407  123305 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.391256ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33480]
I0210 16:46:18.153659  123305 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:horizontal-pod-autoscaler
I0210 16:46:18.154516  123305 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:job-controller: (691.851µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33480]
I0210 16:46:18.156146  123305 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.323829ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33480]
I0210 16:46:18.156335  123305 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:job-controller
I0210 16:46:18.157304  123305 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:namespace-controller: (780.747µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33480]
I0210 16:46:18.159025  123305 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.380948ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33480]
I0210 16:46:18.159212  123305 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:namespace-controller
I0210 16:46:18.160189  123305 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:node-controller: (821.048µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33480]
I0210 16:46:18.162157  123305 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.512311ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33480]
I0210 16:46:18.162359  123305 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:node-controller
I0210 16:46:18.163276  123305 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:persistent-volume-binder: (743.065µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33480]
I0210 16:46:18.167460  123305 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (3.740874ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33480]
I0210 16:46:18.168256  123305 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:persistent-volume-binder
I0210 16:46:18.169190  123305 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pod-garbage-collector: (697.043µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33480]
I0210 16:46:18.170840  123305 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.198249ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33480]
I0210 16:46:18.170993  123305 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:pod-garbage-collector
I0210 16:46:18.171951  123305 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:replicaset-controller: (741.963µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33480]
I0210 16:46:18.174074  123305 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.766305ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33480]
I0210 16:46:18.174365  123305 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:replicaset-controller
I0210 16:46:18.175451  123305 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:replication-controller: (828.991µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33480]
I0210 16:46:18.177230  123305 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.4346ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33480]
I0210 16:46:18.177439  123305 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:replication-controller
I0210 16:46:18.178386  123305 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:resourcequota-controller: (729.059µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33480]
I0210 16:46:18.180006  123305 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.25732ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33480]
I0210 16:46:18.180209  123305 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:resourcequota-controller
I0210 16:46:18.181158  123305 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:route-controller: (824.91µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33480]
I0210 16:46:18.183457  123305 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.584121ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33480]
I0210 16:46:18.183650  123305 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:route-controller
I0210 16:46:18.184714  123305 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:service-account-controller: (866.846µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33480]
I0210 16:46:18.188845  123305 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (3.636653ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33480]
I0210 16:46:18.189406  123305 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:service-account-controller
I0210 16:46:18.190372  123305 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:service-controller: (814.57µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33480]
I0210 16:46:18.192133  123305 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.377425ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33480]
I0210 16:46:18.192721  123305 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:service-controller
I0210 16:46:18.193900  123305 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:statefulset-controller: (980.539µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33480]
I0210 16:46:18.195976  123305 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.586778ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33480]
I0210 16:46:18.196204  123305 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:statefulset-controller
I0210 16:46:18.197080  123305 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:ttl-controller: (716.162µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33480]
I0210 16:46:18.198616  123305 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.167045ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33480]
I0210 16:46:18.198911  123305 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:ttl-controller
I0210 16:46:18.199728  123305 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0210 16:46:18.199924  123305 wrap.go:47] GET /healthz: (912.54µs) 500
goroutine 28784 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00ec201c0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00ec201c0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00d48f380, 0x1f4)
net/http.Error(0x7f1e0bf50c00, 0xc00d062af0, 0xc0128903c0, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f1e0bf50c00, 0xc00d062af0, 0xc017786f00)
net/http.HandlerFunc.ServeHTTP(0xc003c1c9e0, 0x7f1e0bf50c00, 0xc00d062af0, 0xc017786f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc017e25d40, 0x7f1e0bf50c00, 0xc00d062af0, 0xc017786f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00fe2c380, 0x7f1e0bf50c00, 0xc00d062af0, 0xc017786f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x41c2399, 0xe, 0xc00e049290, 0xc00fe2c380, 0x7f1e0bf50c00, 0xc00d062af0, 0xc017786f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f1e0bf50c00, 0xc00d062af0, 0xc017786f00)
net/http.HandlerFunc.ServeHTTP(0xc0049c5400, 0x7f1e0bf50c00, 0xc00d062af0, 0xc017786f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f1e0bf50c00, 0xc00d062af0, 0xc017786f00)
net/http.HandlerFunc.ServeHTTP(0xc003f968a0, 0x7f1e0bf50c00, 0xc00d062af0, 0xc017786f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f1e0bf50c00, 0xc00d062af0, 0xc017786f00)
net/http.HandlerFunc.ServeHTTP(0xc0049c5440, 0x7f1e0bf50c00, 0xc00d062af0, 0xc017786f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f1e0bf50c00, 0xc00d062af0, 0xc017786e00)
net/http.HandlerFunc.ServeHTTP(0xc002de3ef0, 0x7f1e0bf50c00, 0xc00d062af0, 0xc017786e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc017781140, 0xc004bfaf60, 0x5ec0200, 0xc00d062af0, 0xc017786e00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:33480]
I0210 16:46:18.200144  123305 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:certificate-controller: (987.99µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33446]
I0210 16:46:18.201841  123305 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.317222ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33446]
I0210 16:46:18.202026  123305 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:certificate-controller
I0210 16:46:18.219482  123305 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pvc-protection-controller: (1.360753ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33446]
I0210 16:46:18.241396  123305 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (3.007693ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33446]
I0210 16:46:18.241810  123305 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:pvc-protection-controller
I0210 16:46:18.259175  123305 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pv-protection-controller: (1.145542ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33446]
I0210 16:46:18.280194  123305 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.123517ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33446]
I0210 16:46:18.280442  123305 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:pv-protection-controller
I0210 16:46:18.299320  123305 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/cluster-admin: (1.252475ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33446]
I0210 16:46:18.300554  123305 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0210 16:46:18.300721  123305 wrap.go:47] GET /healthz: (1.614288ms) 500
goroutine 28862 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00ec38070, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00ec38070, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00d467c00, 0x1f4)
net/http.Error(0x7f1e0bf50c00, 0xc00d244bb0, 0xc0014c5180, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f1e0bf50c00, 0xc00d244bb0, 0xc0177c4c00)
net/http.HandlerFunc.ServeHTTP(0xc003c1c9e0, 0x7f1e0bf50c00, 0xc00d244bb0, 0xc0177c4c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc017e25d40, 0x7f1e0bf50c00, 0xc00d244bb0, 0xc0177c4c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00fe2c380, 0x7f1e0bf50c00, 0xc00d244bb0, 0xc0177c4c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x41c2399, 0xe, 0xc00e049290, 0xc00fe2c380, 0x7f1e0bf50c00, 0xc00d244bb0, 0xc0177c4c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f1e0bf50c00, 0xc00d244bb0, 0xc0177c4c00)
net/http.HandlerFunc.ServeHTTP(0xc0049c5400, 0x7f1e0bf50c00, 0xc00d244bb0, 0xc0177c4c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f1e0bf50c00, 0xc00d244bb0, 0xc0177c4c00)
net/http.HandlerFunc.ServeHTTP(0xc003f968a0, 0x7f1e0bf50c00, 0xc00d244bb0, 0xc0177c4c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f1e0bf50c00, 0xc00d244bb0, 0xc0177c4c00)
net/http.HandlerFunc.ServeHTTP(0xc0049c5440, 0x7f1e0bf50c00, 0xc00d244bb0, 0xc0177c4c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f1e0bf50c00, 0xc00d244bb0, 0xc0177c4b00)
net/http.HandlerFunc.ServeHTTP(0xc002de3ef0, 0x7f1e0bf50c00, 0xc00d244bb0, 0xc0177c4b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc017777c20, 0xc004bfaf60, 0x5ec0200, 0xc00d244bb0, 0xc0177c4b00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:33480]
I0210 16:46:18.320086  123305 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.024652ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33480]
I0210 16:46:18.320342  123305 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/cluster-admin
I0210 16:46:18.339523  123305 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:discovery: (1.318406ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33480]
I0210 16:46:18.360016  123305 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.99316ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33480]
I0210 16:46:18.360322  123305 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:discovery
I0210 16:46:18.379689  123305 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:basic-user: (1.57364ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33480]
I0210 16:46:18.399833  123305 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0210 16:46:18.400018  123305 wrap.go:47] GET /healthz: (945.442µs) 500
goroutine 28837 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00ec03260, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00ec03260, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00d4aa780, 0x1f4)
net/http.Error(0x7f1e0bf50c00, 0xc00d05c620, 0xc003800dc0, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f1e0bf50c00, 0xc00d05c620, 0xc01774ef00)
net/http.HandlerFunc.ServeHTTP(0xc003c1c9e0, 0x7f1e0bf50c00, 0xc00d05c620, 0xc01774ef00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc017e25d40, 0x7f1e0bf50c00, 0xc00d05c620, 0xc01774ef00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00fe2c380, 0x7f1e0bf50c00, 0xc00d05c620, 0xc01774ef00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x41c2399, 0xe, 0xc00e049290, 0xc00fe2c380, 0x7f1e0bf50c00, 0xc00d05c620, 0xc01774ef00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f1e0bf50c00, 0xc00d05c620, 0xc01774ef00)
net/http.HandlerFunc.ServeHTTP(0xc0049c5400, 0x7f1e0bf50c00, 0xc00d05c620, 0xc01774ef00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f1e0bf50c00, 0xc00d05c620, 0xc01774ef00)
net/http.HandlerFunc.ServeHTTP(0xc003f968a0, 0x7f1e0bf50c00, 0xc00d05c620, 0xc01774ef00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f1e0bf50c00, 0xc00d05c620, 0xc01774ef00)
net/http.HandlerFunc.ServeHTTP(0xc0049c5440, 0x7f1e0bf50c00, 0xc00d05c620, 0xc01774ef00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f1e0bf50c00, 0xc00d05c620, 0xc01774ee00)
net/http.HandlerFunc.ServeHTTP(0xc002de3ef0, 0x7f1e0bf50c00, 0xc00d05c620, 0xc01774ee00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc01772d3e0, 0xc004bfaf60, 0x5ec0200, 0xc00d05c620, 0xc01774ee00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:33446]
I0210 16:46:18.400420  123305 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.326644ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33480]
I0210 16:46:18.400674  123305 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:basic-user
I0210 16:46:18.419528  123305 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:node-proxier: (1.469705ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33480]
I0210 16:46:18.441433  123305 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.25141ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33480]
I0210 16:46:18.441793  123305 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:node-proxier
I0210 16:46:18.459689  123305 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-controller-manager: (1.43771ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33480]
I0210 16:46:18.480396  123305 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.30139ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33480]
I0210 16:46:18.480664  123305 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-controller-manager
I0210 16:46:18.499361  123305 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-dns: (1.25403ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33480]
I0210 16:46:18.499906  123305 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0210 16:46:18.500071  123305 wrap.go:47] GET /healthz: (865.606µs) 500
goroutine 28891 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00ec5cb60, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00ec5cb60, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00d6e0ca0, 0x1f4)
net/http.Error(0x7f1e0bf50c00, 0xc00d49a798, 0xc012890780, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f1e0bf50c00, 0xc00d49a798, 0xc017819200)
net/http.HandlerFunc.ServeHTTP(0xc003c1c9e0, 0x7f1e0bf50c00, 0xc00d49a798, 0xc017819200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc017e25d40, 0x7f1e0bf50c00, 0xc00d49a798, 0xc017819200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00fe2c380, 0x7f1e0bf50c00, 0xc00d49a798, 0xc017819200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x41c2399, 0xe, 0xc00e049290, 0xc00fe2c380, 0x7f1e0bf50c00, 0xc00d49a798, 0xc017819200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f1e0bf50c00, 0xc00d49a798, 0xc017819200)
net/http.HandlerFunc.ServeHTTP(0xc0049c5400, 0x7f1e0bf50c00, 0xc00d49a798, 0xc017819200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f1e0bf50c00, 0xc00d49a798, 0xc017819200)
net/http.HandlerFunc.ServeHTTP(0xc003f968a0, 0x7f1e0bf50c00, 0xc00d49a798, 0xc017819200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f1e0bf50c00, 0xc00d49a798, 0xc017819200)
net/http.HandlerFunc.ServeHTTP(0xc0049c5440, 0x7f1e0bf50c00, 0xc00d49a798, 0xc017819200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f1e0bf50c00, 0xc00d49a798, 0xc017819100)
net/http.HandlerFunc.ServeHTTP(0xc002de3ef0, 0x7f1e0bf50c00, 0xc00d49a798, 0xc017819100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc017a38720, 0xc004bfaf60, 0x5ec0200, 0xc00d49a798, 0xc017819100)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:33446]
I0210 16:46:18.524400  123305 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.459926ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33446]
I0210 16:46:18.524661  123305 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-dns
I0210 16:46:18.539333  123305 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-scheduler: (1.300191ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33446]
I0210 16:46:18.560123  123305 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.058669ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33446]
I0210 16:46:18.560416  123305 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-scheduler
I0210 16:46:18.579328  123305 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:aws-cloud-provider: (1.283449ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33446]
I0210 16:46:18.600059  123305 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0210 16:46:18.600245  123305 wrap.go:47] GET /healthz: (1.197962ms) 500
goroutine 28839 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0181a4700, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0181a4700, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00d698ee0, 0x1f4)
net/http.Error(0x7f1e0bf50c00, 0xc00d05c760, 0xc0187783c0, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f1e0bf50c00, 0xc00d05c760, 0xc01774fd00)
net/http.HandlerFunc.ServeHTTP(0xc003c1c9e0, 0x7f1e0bf50c00, 0xc00d05c760, 0xc01774fd00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc017e25d40, 0x7f1e0bf50c00, 0xc00d05c760, 0xc01774fd00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00fe2c380, 0x7f1e0bf50c00, 0xc00d05c760, 0xc01774fd00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x41c2399, 0xe, 0xc00e049290, 0xc00fe2c380, 0x7f1e0bf50c00, 0xc00d05c760, 0xc01774fd00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f1e0bf50c00, 0xc00d05c760, 0xc01774fd00)
net/http.HandlerFunc.ServeHTTP(0xc0049c5400, 0x7f1e0bf50c00, 0xc00d05c760, 0xc01774fd00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f1e0bf50c00, 0xc00d05c760, 0xc01774fd00)
net/http.HandlerFunc.ServeHTTP(0xc003f968a0, 0x7f1e0bf50c00, 0xc00d05c760, 0xc01774fd00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f1e0bf50c00, 0xc00d05c760, 0xc01774fd00)
net/http.HandlerFunc.ServeHTTP(0xc0049c5440, 0x7f1e0bf50c00, 0xc00d05c760, 0xc01774fd00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f1e0bf50c00, 0xc00d05c760, 0xc01774fc00)
net/http.HandlerFunc.ServeHTTP(0xc002de3ef0, 0x7f1e0bf50c00, 0xc00d05c760, 0xc01774fc00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc01772dbc0, 0xc004bfaf60, 0x5ec0200, 0xc00d05c760, 0xc01774fc00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:33480]
I0210 16:46:18.600352  123305 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.239143ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33446]
I0210 16:46:18.600652  123305 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:aws-cloud-provider
I0210 16:46:18.619336  123305 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:volume-scheduler: (1.263691ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33446]
I0210 16:46:18.640346  123305 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.273408ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33446]
I0210 16:46:18.640635  123305 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:volume-scheduler
I0210 16:46:18.659455  123305 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:node: (1.375196ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33446]
I0210 16:46:18.680299  123305 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.159772ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33446]
I0210 16:46:18.680599  123305 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:node
I0210 16:46:18.699684  123305 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:attachdetach-controller: (1.575838ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33446]
I0210 16:46:18.701361  123305 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0210 16:46:18.701553  123305 wrap.go:47] GET /healthz: (1.027771ms) 500
goroutine 28920 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00ec39e30, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00ec39e30, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00d738c60, 0x1f4)
net/http.Error(0x7f1e0bf50c00, 0xc00d244d98, 0xc0172b2000, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f1e0bf50c00, 0xc00d244d98, 0xc0172a0500)
net/http.HandlerFunc.ServeHTTP(0xc003c1c9e0, 0x7f1e0bf50c00, 0xc00d244d98, 0xc0172a0500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc017e25d40, 0x7f1e0bf50c00, 0xc00d244d98, 0xc0172a0500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00fe2c380, 0x7f1e0bf50c00, 0xc00d244d98, 0xc0172a0500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x41c2399, 0xe, 0xc00e049290, 0xc00fe2c380, 0x7f1e0bf50c00, 0xc00d244d98, 0xc0172a0500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f1e0bf50c00, 0xc00d244d98, 0xc0172a0500)
net/http.HandlerFunc.ServeHTTP(0xc0049c5400, 0x7f1e0bf50c00, 0xc00d244d98, 0xc0172a0500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f1e0bf50c00, 0xc00d244d98, 0xc0172a0500)
net/http.HandlerFunc.ServeHTTP(0xc003f968a0, 0x7f1e0bf50c00, 0xc00d244d98, 0xc0172a0500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f1e0bf50c00, 0xc00d244d98, 0xc0172a0500)
net/http.HandlerFunc.ServeHTTP(0xc0049c5440, 0x7f1e0bf50c00, 0xc00d244d98, 0xc0172a0500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f1e0bf50c00, 0xc00d244d98, 0xc0172a0400)
net/http.HandlerFunc.ServeHTTP(0xc002de3ef0, 0x7f1e0bf50c00, 0xc00d244d98, 0xc0172a0400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc0177f4b40, 0xc004bfaf60, 0x5ec0200, 0xc00d244d98, 0xc0172a0400)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:33446]
I0210 16:46:18.720302  123305 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.210408ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33446]
I0210 16:46:18.720581  123305 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:attachdetach-controller
I0210 16:46:18.739417  123305 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:clusterrole-aggregation-controller: (1.283502ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33446]
I0210 16:46:18.760717  123305 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.269208ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33446]
I0210 16:46:18.761154  123305 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:clusterrole-aggregation-controller
I0210 16:46:18.779512  123305 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:cronjob-controller: (1.404456ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33446]
I0210 16:46:18.799936  123305 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.837893ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33446]
I0210 16:46:18.800134  123305 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0210 16:46:18.800204  123305 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:cronjob-controller
I0210 16:46:18.800332  123305 wrap.go:47] GET /healthz: (1.296825ms) 500
goroutine 28937 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc01822ea10, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc01822ea10, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00d7ea1a0, 0x1f4)
net/http.Error(0x7f1e0bf50c00, 0xc00d063228, 0xc003801e00, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f1e0bf50c00, 0xc00d063228, 0xc0172f8000)
net/http.HandlerFunc.ServeHTTP(0xc003c1c9e0, 0x7f1e0bf50c00, 0xc00d063228, 0xc0172f8000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc017e25d40, 0x7f1e0bf50c00, 0xc00d063228, 0xc0172f8000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00fe2c380, 0x7f1e0bf50c00, 0xc00d063228, 0xc0172f8000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x41c2399, 0xe, 0xc00e049290, 0xc00fe2c380, 0x7f1e0bf50c00, 0xc00d063228, 0xc0172f8000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f1e0bf50c00, 0xc00d063228, 0xc0172f8000)
net/http.HandlerFunc.ServeHTTP(0xc0049c5400, 0x7f1e0bf50c00, 0xc00d063228, 0xc0172f8000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f1e0bf50c00, 0xc00d063228, 0xc0172f8000)
net/http.HandlerFunc.ServeHTTP(0xc003f968a0, 0x7f1e0bf50c00, 0xc00d063228, 0xc0172f8000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f1e0bf50c00, 0xc00d063228, 0xc0172f8000)
net/http.HandlerFunc.ServeHTTP(0xc0049c5440, 0x7f1e0bf50c00, 0xc00d063228, 0xc0172f8000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f1e0bf50c00, 0xc00d063228, 0xc017261f00)
net/http.HandlerFunc.ServeHTTP(0xc002de3ef0, 0x7f1e0bf50c00, 0xc00d063228, 0xc017261f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc017227bc0, 0xc004bfaf60, 0x5ec0200, 0xc00d063228, 0xc017261f00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:33480]
I0210 16:46:18.819541  123305 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:daemon-set-controller: (1.42585ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33480]
I0210 16:46:18.840088  123305 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.044451ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33480]
I0210 16:46:18.840388  123305 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:daemon-set-controller
I0210 16:46:18.859336  123305 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:deployment-controller: (1.248402ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33480]
I0210 16:46:18.880339  123305 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.202775ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33480]
I0210 16:46:18.880696  123305 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:deployment-controller
I0210 16:46:18.899957  123305 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:disruption-controller: (1.801818ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33480]
I0210 16:46:18.900074  123305 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0210 16:46:18.900237  123305 wrap.go:47] GET /healthz: (1.132102ms) 500
goroutine 28948 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc018220bd0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc018220bd0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00d854360, 0x1f4)
net/http.Error(0x7f1e0bf50c00, 0xc00d49aa78, 0xc018778780, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f1e0bf50c00, 0xc00d49aa78, 0xc0172a3800)
net/http.HandlerFunc.ServeHTTP(0xc003c1c9e0, 0x7f1e0bf50c00, 0xc00d49aa78, 0xc0172a3800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc017e25d40, 0x7f1e0bf50c00, 0xc00d49aa78, 0xc0172a3800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00fe2c380, 0x7f1e0bf50c00, 0xc00d49aa78, 0xc0172a3800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x41c2399, 0xe, 0xc00e049290, 0xc00fe2c380, 0x7f1e0bf50c00, 0xc00d49aa78, 0xc0172a3800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f1e0bf50c00, 0xc00d49aa78, 0xc0172a3800)
net/http.HandlerFunc.ServeHTTP(0xc0049c5400, 0x7f1e0bf50c00, 0xc00d49aa78, 0xc0172a3800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f1e0bf50c00, 0xc00d49aa78, 0xc0172a3800)
net/http.HandlerFunc.ServeHTTP(0xc003f968a0, 0x7f1e0bf50c00, 0xc00d49aa78, 0xc0172a3800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f1e0bf50c00, 0xc00d49aa78, 0xc0172a3800)
net/http.HandlerFunc.ServeHTTP(0xc0049c5440, 0x7f1e0bf50c00, 0xc00d49aa78, 0xc0172a3800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f1e0bf50c00, 0xc00d49aa78, 0xc0172a3700)
net/http.HandlerFunc.ServeHTTP(0xc002de3ef0, 0x7f1e0bf50c00, 0xc00d49aa78, 0xc0172a3700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc0173421e0, 0xc004bfaf60, 0x5ec0200, 0xc00d49aa78, 0xc0172a3700)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:33446]
I0210 16:46:18.920254  123305 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.205656ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33446]
I0210 16:46:18.920567  123305 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:disruption-controller
I0210 16:46:18.939523  123305 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:endpoint-controller: (1.385601ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33446]
I0210 16:46:18.960274  123305 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.154626ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33446]
I0210 16:46:18.960545  123305 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:endpoint-controller
I0210 16:46:18.984799  123305 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:expand-controller: (2.562284ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33446]
I0210 16:46:19.000376  123305 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.316811ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33446]
I0210 16:46:19.000557  123305 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0210 16:46:19.000611  123305 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:expand-controller
I0210 16:46:19.000719  123305 wrap.go:47] GET /healthz: (957.862µs) 500
goroutine 28968 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc01824d490, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc01824d490, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00d9545e0, 0x1f4)
net/http.Error(0x7f1e0bf50c00, 0xc000187f48, 0xc018778dc0, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f1e0bf50c00, 0xc000187f48, 0xc0172f7700)
net/http.HandlerFunc.ServeHTTP(0xc003c1c9e0, 0x7f1e0bf50c00, 0xc000187f48, 0xc0172f7700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc017e25d40, 0x7f1e0bf50c00, 0xc000187f48, 0xc0172f7700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00fe2c380, 0x7f1e0bf50c00, 0xc000187f48, 0xc0172f7700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x41c2399, 0xe, 0xc00e049290, 0xc00fe2c380, 0x7f1e0bf50c00, 0xc000187f48, 0xc0172f7700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f1e0bf50c00, 0xc000187f48, 0xc0172f7700)
net/http.HandlerFunc.ServeHTTP(0xc0049c5400, 0x7f1e0bf50c00, 0xc000187f48, 0xc0172f7700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f1e0bf50c00, 0xc000187f48, 0xc0172f7700)
net/http.HandlerFunc.ServeHTTP(0xc003f968a0, 0x7f1e0bf50c00, 0xc000187f48, 0xc0172f7700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f1e0bf50c00, 0xc000187f48, 0xc0172f7700)
net/http.HandlerFunc.ServeHTTP(0xc0049c5440, 0x7f1e0bf50c00, 0xc000187f48, 0xc0172f7700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f1e0bf50c00, 0xc000187f48, 0xc0172f7600)
net/http.HandlerFunc.ServeHTTP(0xc002de3ef0, 0x7f1e0bf50c00, 0xc000187f48, 0xc0172f7600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc017374a20, 0xc004bfaf60, 0x5ec0200, 0xc000187f48, 0xc0172f7600)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:33480]
I0210 16:46:19.019770  123305 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:generic-garbage-collector: (1.614895ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33480]
I0210 16:46:19.040534  123305 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.539797ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33480]
I0210 16:46:19.040746  123305 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:generic-garbage-collector
I0210 16:46:19.059281  123305 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:horizontal-pod-autoscaler: (1.300494ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33480]
I0210 16:46:19.080292  123305 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.236316ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33480]
I0210 16:46:19.080529  123305 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:horizontal-pod-autoscaler
I0210 16:46:19.099373  123305 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:job-controller: (1.092256ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33480]
I0210 16:46:19.102863  123305 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0210 16:46:19.103044  123305 wrap.go:47] GET /healthz: (2.872011ms) 500
goroutine 28980 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0181c1ce0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0181c1ce0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00d98ba00, 0x1f4)
net/http.Error(0x7f1e0bf50c00, 0xc00cfe14a8, 0xc0172b2500, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f1e0bf50c00, 0xc00cfe14a8, 0xc017323d00)
net/http.HandlerFunc.ServeHTTP(0xc003c1c9e0, 0x7f1e0bf50c00, 0xc00cfe14a8, 0xc017323d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc017e25d40, 0x7f1e0bf50c00, 0xc00cfe14a8, 0xc017323d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00fe2c380, 0x7f1e0bf50c00, 0xc00cfe14a8, 0xc017323d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x41c2399, 0xe, 0xc00e049290, 0xc00fe2c380, 0x7f1e0bf50c00, 0xc00cfe14a8, 0xc017323d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f1e0bf50c00, 0xc00cfe14a8, 0xc017323d00)
net/http.HandlerFunc.ServeHTTP(0xc0049c5400, 0x7f1e0bf50c00, 0xc00cfe14a8, 0xc017323d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f1e0bf50c00, 0xc00cfe14a8, 0xc017323d00)
net/http.HandlerFunc.ServeHTTP(0xc003f968a0, 0x7f1e0bf50c00, 0xc00cfe14a8, 0xc017323d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f1e0bf50c00, 0xc00cfe14a8, 0xc017323d00)
net/http.HandlerFunc.ServeHTTP(0xc0049c5440, 0x7f1e0bf50c00, 0xc00cfe14a8, 0xc017323d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f1e0bf50c00, 0xc00cfe14a8, 0xc017323c00)
net/http.HandlerFunc.ServeHTTP(0xc002de3ef0, 0x7f1e0bf50c00, 0xc00cfe14a8, 0xc017323c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc01735a900, 0xc004bfaf60, 0x5ec0200, 0xc00cfe14a8, 0xc017323c00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:33480]
I0210 16:46:19.120117  123305 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.107916ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33480]
I0210 16:46:19.120395  123305 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:job-controller
I0210 16:46:19.139386  123305 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:namespace-controller: (1.316078ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33480]
I0210 16:46:19.160336  123305 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.033935ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33480]
I0210 16:46:19.160614  123305 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:namespace-controller
I0210 16:46:19.179404  123305 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:node-controller: (1.339237ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33480]
I0210 16:46:19.199979  123305 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.955397ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33480]
I0210 16:46:19.200415  123305 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:node-controller
I0210 16:46:19.200588  123305 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0210 16:46:19.200731  123305 wrap.go:47] GET /healthz: (1.463634ms) 500
goroutine 28997 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0183a6cb0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0183a6cb0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00da800c0, 0x1f4)
net/http.Error(0x7f1e0bf50c00, 0xc00d245218, 0xc012890c80, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f1e0bf50c00, 0xc00d245218, 0xc01735ff00)
net/http.HandlerFunc.ServeHTTP(0xc003c1c9e0, 0x7f1e0bf50c00, 0xc00d245218, 0xc01735ff00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc017e25d40, 0x7f1e0bf50c00, 0xc00d245218, 0xc01735ff00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00fe2c380, 0x7f1e0bf50c00, 0xc00d245218, 0xc01735ff00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x41c2399, 0xe, 0xc00e049290, 0xc00fe2c380, 0x7f1e0bf50c00, 0xc00d245218, 0xc01735ff00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f1e0bf50c00, 0xc00d245218, 0xc01735ff00)
net/http.HandlerFunc.ServeHTTP(0xc0049c5400, 0x7f1e0bf50c00, 0xc00d245218, 0xc01735ff00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f1e0bf50c00, 0xc00d245218, 0xc01735ff00)
net/http.HandlerFunc.ServeHTTP(0xc003f968a0, 0x7f1e0bf50c00, 0xc00d245218, 0xc01735ff00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f1e0bf50c00, 0xc00d245218, 0xc01735ff00)
net/http.HandlerFunc.ServeHTTP(0xc0049c5440, 0x7f1e0bf50c00, 0xc00d245218, 0xc01735ff00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f1e0bf50c00, 0xc00d245218, 0xc01735fe00)
net/http.HandlerFunc.ServeHTTP(0xc002de3ef0, 0x7f1e0bf50c00, 0xc00d245218, 0xc01735fe00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc01735dec0, 0xc004bfaf60, 0x5ec0200, 0xc00d245218, 0xc01735fe00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:33446]
I0210 16:46:19.219594  123305 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:persistent-volume-binder: (1.479367ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33446]
I0210 16:46:19.240096  123305 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.047912ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33446]
I0210 16:46:19.240476  123305 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:persistent-volume-binder
I0210 16:46:19.259288  123305 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pod-garbage-collector: (1.228668ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33446]
I0210 16:46:19.280073  123305 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.012395ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33446]
I0210 16:46:19.280351  123305 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pod-garbage-collector
I0210 16:46:19.299557  123305 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:replicaset-controller: (1.464286ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33446]
I0210 16:46:19.300021  123305 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0210 16:46:19.300191  123305 wrap.go:47] GET /healthz: (1.034865ms) 500
goroutine 29033 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc01847a0e0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc01847a0e0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00da79700, 0x1f4)
net/http.Error(0x7f1e0bf50c00, 0xc00d05ca50, 0xc0045efcc0, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f1e0bf50c00, 0xc00d05ca50, 0xc017448000)
net/http.HandlerFunc.ServeHTTP(0xc003c1c9e0, 0x7f1e0bf50c00, 0xc00d05ca50, 0xc017448000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc017e25d40, 0x7f1e0bf50c00, 0xc00d05ca50, 0xc017448000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00fe2c380, 0x7f1e0bf50c00, 0xc00d05ca50, 0xc017448000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x41c2399, 0xe, 0xc00e049290, 0xc00fe2c380, 0x7f1e0bf50c00, 0xc00d05ca50, 0xc017448000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f1e0bf50c00, 0xc00d05ca50, 0xc017448000)
net/http.HandlerFunc.ServeHTTP(0xc0049c5400, 0x7f1e0bf50c00, 0xc00d05ca50, 0xc017448000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f1e0bf50c00, 0xc00d05ca50, 0xc017448000)
net/http.HandlerFunc.ServeHTTP(0xc003f968a0, 0x7f1e0bf50c00, 0xc00d05ca50, 0xc017448000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f1e0bf50c00, 0xc00d05ca50, 0xc017448000)
net/http.HandlerFunc.ServeHTTP(0xc0049c5440, 0x7f1e0bf50c00, 0xc00d05ca50, 0xc017448000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f1e0bf50c00, 0xc00d05ca50, 0xc0173c3f00)
net/http.HandlerFunc.ServeHTTP(0xc002de3ef0, 0x7f1e0bf50c00, 0xc00d05ca50, 0xc0173c3f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc0173d9980, 0xc004bfaf60, 0x5ec0200, 0xc00d05ca50, 0xc0173c3f00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:33480]
I0210 16:46:19.320856  123305 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.036919ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33480]
I0210 16:46:19.321195  123305 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:replicaset-controller
I0210 16:46:19.339369  123305 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:replication-controller: (1.301626ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33480]
I0210 16:46:19.367131  123305 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (9.016897ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33480]
I0210 16:46:19.367412  123305 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:replication-controller
I0210 16:46:19.379232  123305 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:resourcequota-controller: (1.124494ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33480]
I0210 16:46:19.399926  123305 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0210 16:46:19.400076  123305 wrap.go:47] GET /healthz: (988.959µs) 500
goroutine 29065 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc01855c850, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc01855c850, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00dabb820, 0x1f4)
net/http.Error(0x7f1e0bf50c00, 0xc00d49ae38, 0xc012891180, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f1e0bf50c00, 0xc00d49ae38, 0xc01748c800)
net/http.HandlerFunc.ServeHTTP(0xc003c1c9e0, 0x7f1e0bf50c00, 0xc00d49ae38, 0xc01748c800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc017e25d40, 0x7f1e0bf50c00, 0xc00d49ae38, 0xc01748c800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00fe2c380, 0x7f1e0bf50c00, 0xc00d49ae38, 0xc01748c800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x41c2399, 0xe, 0xc00e049290, 0xc00fe2c380, 0x7f1e0bf50c00, 0xc00d49ae38, 0xc01748c800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f1e0bf50c00, 0xc00d49ae38, 0xc01748c800)
net/http.HandlerFunc.ServeHTTP(0xc0049c5400, 0x7f1e0bf50c00, 0xc00d49ae38, 0xc01748c800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f1e0bf50c00, 0xc00d49ae38, 0xc01748c800)
net/http.HandlerFunc.ServeHTTP(0xc003f968a0, 0x7f1e0bf50c00, 0xc00d49ae38, 0xc01748c800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f1e0bf50c00, 0xc00d49ae38, 0xc01748c800)
net/http.HandlerFunc.ServeHTTP(0xc0049c5440, 0x7f1e0bf50c00, 0xc00d49ae38, 0xc01748c800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f1e0bf50c00, 0xc00d49ae38, 0xc01748c700)
net/http.HandlerFunc.ServeHTTP(0xc002de3ef0, 0x7f1e0bf50c00, 0xc00d49ae38, 0xc01748c700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc017343ce0, 0xc004bfaf60, 0x5ec0200, 0xc00d49ae38, 0xc01748c700)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:33446]
I0210 16:46:19.400335  123305 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.242783ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33480]
I0210 16:46:19.400596  123305 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:resourcequota-controller
I0210 16:46:19.419502  123305 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:route-controller: (1.416025ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33480]
I0210 16:46:19.440892  123305 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.838991ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33480]
I0210 16:46:19.441160  123305 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:route-controller
I0210 16:46:19.459418  123305 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:service-account-controller: (1.37431ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33480]
I0210 16:46:19.484050  123305 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (5.920366ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33480]
I0210 16:46:19.484384  123305 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:service-account-controller
I0210 16:46:19.499789  123305 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:service-controller: (1.661053ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33480]
I0210 16:46:19.499942  123305 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0210 16:46:19.500109  123305 wrap.go:47] GET /healthz: (1.005996ms) 500
goroutine 29084 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc018443650, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc018443650, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00dae58a0, 0x1f4)
net/http.Error(0x7f1e0bf50c00, 0xc00cfe17e8, 0xc012891540, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f1e0bf50c00, 0xc00cfe17e8, 0xc0174d6400)
net/http.HandlerFunc.ServeHTTP(0xc003c1c9e0, 0x7f1e0bf50c00, 0xc00cfe17e8, 0xc0174d6400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc017e25d40, 0x7f1e0bf50c00, 0xc00cfe17e8, 0xc0174d6400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00fe2c380, 0x7f1e0bf50c00, 0xc00cfe17e8, 0xc0174d6400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x41c2399, 0xe, 0xc00e049290, 0xc00fe2c380, 0x7f1e0bf50c00, 0xc00cfe17e8, 0xc0174d6400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f1e0bf50c00, 0xc00cfe17e8, 0xc0174d6400)
net/http.HandlerFunc.ServeHTTP(0xc0049c5400, 0x7f1e0bf50c00, 0xc00cfe17e8, 0xc0174d6400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f1e0bf50c00, 0xc00cfe17e8, 0xc0174d6400)
net/http.HandlerFunc.ServeHTTP(0xc003f968a0, 0x7f1e0bf50c00, 0xc00cfe17e8, 0xc0174d6400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f1e0bf50c00, 0xc00cfe17e8, 0xc0174d6400)
net/http.HandlerFunc.ServeHTTP(0xc0049c5440, 0x7f1e0bf50c00, 0xc00cfe17e8, 0xc0174d6400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f1e0bf50c00, 0xc00cfe17e8, 0xc0174d6300)
net/http.HandlerFunc.ServeHTTP(0xc002de3ef0, 0x7f1e0bf50c00, 0xc00cfe17e8, 0xc0174d6300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc01735be00, 0xc004bfaf60, 0x5ec0200, 0xc00cfe17e8, 0xc0174d6300)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:33446]
I0210 16:46:19.520274  123305 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.199047ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33446]
I0210 16:46:19.520544  123305 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:service-controller
I0210 16:46:19.539407  123305 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:statefulset-controller: (1.372311ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33446]
I0210 16:46:19.560354  123305 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.246749ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33446]
I0210 16:46:19.560648  123305 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:statefulset-controller
I0210 16:46:19.579692  123305 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:ttl-controller: (1.32497ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33446]
I0210 16:46:19.600258  123305 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.09329ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33446]
I0210 16:46:19.600297  123305 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0210 16:46:19.600472  123305 wrap.go:47] GET /healthz: (1.352986ms) 500
goroutine 29100 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc01861c070, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc01861c070, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00dbd1d40, 0x1f4)
net/http.Error(0x7f1e0bf50c00, 0xc00d05cd28, 0xc017316500, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f1e0bf50c00, 0xc00d05cd28, 0xc017504400)
net/http.HandlerFunc.ServeHTTP(0xc003c1c9e0, 0x7f1e0bf50c00, 0xc00d05cd28, 0xc017504400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc017e25d40, 0x7f1e0bf50c00, 0xc00d05cd28, 0xc017504400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00fe2c380, 0x7f1e0bf50c00, 0xc00d05cd28, 0xc017504400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x41c2399, 0xe, 0xc00e049290, 0xc00fe2c380, 0x7f1e0bf50c00, 0xc00d05cd28, 0xc017504400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f1e0bf50c00, 0xc00d05cd28, 0xc017504400)
net/http.HandlerFunc.ServeHTTP(0xc0049c5400, 0x7f1e0bf50c00, 0xc00d05cd28, 0xc017504400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f1e0bf50c00, 0xc00d05cd28, 0xc017504400)
net/http.HandlerFunc.ServeHTTP(0xc003f968a0, 0x7f1e0bf50c00, 0xc00d05cd28, 0xc017504400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f1e0bf50c00, 0xc00d05cd28, 0xc017504400)
net/http.HandlerFunc.ServeHTTP(0xc0049c5440, 0x7f1e0bf50c00, 0xc00d05cd28, 0xc017504400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f1e0bf50c00, 0xc00d05cd28, 0xc017504300)
net/http.HandlerFunc.ServeHTTP(0xc002de3ef0, 0x7f1e0bf50c00, 0xc00d05cd28, 0xc017504300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc017476c00, 0xc004bfaf60, 0x5ec0200, 0xc00d05cd28, 0xc017504300)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:33480]
I0210 16:46:19.600568  123305 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:ttl-controller
I0210 16:46:19.619554  123305 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:certificate-controller: (1.38148ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33480]
I0210 16:46:19.640413  123305 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.278885ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33480]
I0210 16:46:19.640788  123305 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:certificate-controller
I0210 16:46:19.659660  123305 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pvc-protection-controller: (1.544817ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33480]
I0210 16:46:19.680281  123305 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.205028ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33480]
I0210 16:46:19.680599  123305 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pvc-protection-controller
I0210 16:46:19.699478  123305 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pv-protection-controller: (1.379899ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33480]
I0210 16:46:19.700094  123305 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0210 16:46:19.700290  123305 wrap.go:47] GET /healthz: (1.172604ms) 500
goroutine 29088 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0186d4150, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0186d4150, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00dbfba40, 0x1f4)
net/http.Error(0x7f1e0bf50c00, 0xc00cfe1968, 0xc01c810140, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f1e0bf50c00, 0xc00cfe1968, 0xc0174d7f00)
net/http.HandlerFunc.ServeHTTP(0xc003c1c9e0, 0x7f1e0bf50c00, 0xc00cfe1968, 0xc0174d7f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc017e25d40, 0x7f1e0bf50c00, 0xc00cfe1968, 0xc0174d7f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00fe2c380, 0x7f1e0bf50c00, 0xc00cfe1968, 0xc0174d7f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x41c2399, 0xe, 0xc00e049290, 0xc00fe2c380, 0x7f1e0bf50c00, 0xc00cfe1968, 0xc0174d7f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f1e0bf50c00, 0xc00cfe1968, 0xc0174d7f00)
net/http.HandlerFunc.ServeHTTP(0xc0049c5400, 0x7f1e0bf50c00, 0xc00cfe1968, 0xc0174d7f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f1e0bf50c00, 0xc00cfe1968, 0xc0174d7f00)
net/http.HandlerFunc.ServeHTTP(0xc003f968a0, 0x7f1e0bf50c00, 0xc00cfe1968, 0xc0174d7f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f1e0bf50c00, 0xc00cfe1968, 0xc0174d7f00)
net/http.HandlerFunc.ServeHTTP(0xc0049c5440, 0x7f1e0bf50c00, 0xc00cfe1968, 0xc0174d7f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f1e0bf50c00, 0xc00cfe1968, 0xc0174d7e00)
net/http.HandlerFunc.ServeHTTP(0xc002de3ef0, 0x7f1e0bf50c00, 0xc00cfe1968, 0xc0174d7e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc0174daf60, 0xc004bfaf60, 0x5ec0200, 0xc00cfe1968, 0xc0174d7e00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:33446]
I0210 16:46:19.720197  123305 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.057991ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33446]
I0210 16:46:19.720556  123305 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pv-protection-controller
I0210 16:46:19.739573  123305 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/roles/system:controller:bootstrap-signer: (1.485047ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33446]
I0210 16:46:19.741629  123305 wrap.go:47] GET /api/v1/namespaces/kube-public: (1.50281ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33446]
I0210 16:46:19.760273  123305 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/roles: (2.203117ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33446]
I0210 16:46:19.760591  123305 storage_rbac.go:246] created role.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-public
I0210 16:46:19.780110  123305 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/extension-apiserver-authentication-reader: (1.96325ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33446]
I0210 16:46:19.782112  123305 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.417643ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33446]
I0210 16:46:19.800732  123305 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0210 16:46:19.800904  123305 wrap.go:47] GET /healthz: (1.529803ms) 500
goroutine 29157 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc01861d5e0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc01861d5e0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00dffe9a0, 0x1f4)
net/http.Error(0x7f1e0bf50c00, 0xc00d05cf08, 0xc012891a40, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f1e0bf50c00, 0xc00d05cf08, 0xc01c856200)
net/http.HandlerFunc.ServeHTTP(0xc003c1c9e0, 0x7f1e0bf50c00, 0xc00d05cf08, 0xc01c856200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc017e25d40, 0x7f1e0bf50c00, 0xc00d05cf08, 0xc01c856200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00fe2c380, 0x7f1e0bf50c00, 0xc00d05cf08, 0xc01c856200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x41c2399, 0xe, 0xc00e049290, 0xc00fe2c380, 0x7f1e0bf50c00, 0xc00d05cf08, 0xc01c856200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f1e0bf50c00, 0xc00d05cf08, 0xc01c856200)
net/http.HandlerFunc.ServeHTTP(0xc0049c5400, 0x7f1e0bf50c00, 0xc00d05cf08, 0xc01c856200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f1e0bf50c00, 0xc00d05cf08, 0xc01c856200)
net/http.HandlerFunc.ServeHTTP(0xc003f968a0, 0x7f1e0bf50c00, 0xc00d05cf08, 0xc01c856200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f1e0bf50c00, 0xc00d05cf08, 0xc01c856200)
net/http.HandlerFunc.ServeHTTP(0xc0049c5440, 0x7f1e0bf50c00, 0xc00d05cf08, 0xc01c856200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f1e0bf50c00, 0xc00d05cf08, 0xc01c856100)
net/http.HandlerFunc.ServeHTTP(0xc002de3ef0, 0x7f1e0bf50c00, 0xc00d05cf08, 0xc01c856100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc017477c80, 0xc004bfaf60, 0x5ec0200, 0xc00d05cf08, 0xc01c856100)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:33480]
I0210 16:46:19.801117  123305 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (3.068872ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33446]
I0210 16:46:19.801420  123305 storage_rbac.go:246] created role.rbac.authorization.k8s.io/extension-apiserver-authentication-reader in kube-system
I0210 16:46:19.819455  123305 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:bootstrap-signer: (1.324191ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33446]
I0210 16:46:19.821429  123305 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.410876ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33446]
I0210 16:46:19.840199  123305 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.082604ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33446]
I0210 16:46:19.840443  123305 storage_rbac.go:246] created role.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-system
I0210 16:46:19.859594  123305 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:cloud-provider: (1.440819ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33446]
I0210 16:46:19.861570  123305 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.432528ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33446]
I0210 16:46:19.880339  123305 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.135279ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33446]
I0210 16:46:19.880666  123305 storage_rbac.go:246] created role.rbac.authorization.k8s.io/system:controller:cloud-provider in kube-system
I0210 16:46:19.899622  123305 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:token-cleaner: (1.415468ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33446]
I0210 16:46:19.900059  123305 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0210 16:46:19.900258  123305 wrap.go:47] GET /healthz: (1.029579ms) 500
goroutine 29145 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0186d5a40, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0186d5a40, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00e050220, 0x1f4)
net/http.Error(0x7f1e0bf50c00, 0xc00cfe1c00, 0xc012891e00, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f1e0bf50c00, 0xc00cfe1c00, 0xc01c8a2300)
net/http.HandlerFunc.ServeHTTP(0xc003c1c9e0, 0x7f1e0bf50c00, 0xc00cfe1c00, 0xc01c8a2300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc017e25d40, 0x7f1e0bf50c00, 0xc00cfe1c00, 0xc01c8a2300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00fe2c380, 0x7f1e0bf50c00, 0xc00cfe1c00, 0xc01c8a2300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x41c2399, 0xe, 0xc00e049290, 0xc00fe2c380, 0x7f1e0bf50c00, 0xc00cfe1c00, 0xc01c8a2300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f1e0bf50c00, 0xc00cfe1c00, 0xc01c8a2300)
net/http.HandlerFunc.ServeHTTP(0xc0049c5400, 0x7f1e0bf50c00, 0xc00cfe1c00, 0xc01c8a2300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f1e0bf50c00, 0xc00cfe1c00, 0xc01c8a2300)
net/http.HandlerFunc.ServeHTTP(0xc003f968a0, 0x7f1e0bf50c00, 0xc00cfe1c00, 0xc01c8a2300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f1e0bf50c00, 0xc00cfe1c00, 0xc01c8a2300)
net/http.HandlerFunc.ServeHTTP(0xc0049c5440, 0x7f1e0bf50c00, 0xc00cfe1c00, 0xc01c8a2300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f1e0bf50c00, 0xc00cfe1c00, 0xc01c8a2200)
net/http.HandlerFunc.ServeHTTP(0xc002de3ef0, 0x7f1e0bf50c00, 0xc00cfe1c00, 0xc01c8a2200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc01c8981e0, 0xc004bfaf60, 0x5ec0200, 0xc00cfe1c00, 0xc01c8a2200)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:33480]
I0210 16:46:19.901692  123305 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.397247ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33446]
I0210 16:46:19.920188  123305 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.09514ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33446]
I0210 16:46:19.920508  123305 storage_rbac.go:246] created role.rbac.authorization.k8s.io/system:controller:token-cleaner in kube-system
I0210 16:46:19.941688  123305 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system::leader-locking-kube-controller-manager: (1.568963ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33446]
I0210 16:46:19.943405  123305 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.271776ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33446]
I0210 16:46:19.960393  123305 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.319671ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33446]
I0210 16:46:19.960676  123305 storage_rbac.go:246] created role.rbac.authorization.k8s.io/system::leader-locking-kube-controller-manager in kube-system
I0210 16:46:19.983543  123305 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system::leader-locking-kube-scheduler: (1.665933ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33446]
I0210 16:46:19.986037  123305 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.396955ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33446]
I0210 16:46:20.000008  123305 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0210 16:46:20.000236  123305 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.182776ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33446]
I0210 16:46:20.000241  123305 wrap.go:47] GET /healthz: (1.14117ms) 500
goroutine 28906 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00ec6afc0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00ec6afc0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00e1d48c0, 0x1f4)
net/http.Error(0x7f1e0bf50c00, 0xc00d9ae140, 0xc01c810640, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f1e0bf50c00, 0xc00d9ae140, 0xc017805c00)
net/http.HandlerFunc.ServeHTTP(0xc003c1c9e0, 0x7f1e0bf50c00, 0xc00d9ae140, 0xc017805c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc017e25d40, 0x7f1e0bf50c00, 0xc00d9ae140, 0xc017805c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00fe2c380, 0x7f1e0bf50c00, 0xc00d9ae140, 0xc017805c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x41c2399, 0xe, 0xc00e049290, 0xc00fe2c380, 0x7f1e0bf50c00, 0xc00d9ae140, 0xc017805c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f1e0bf50c00, 0xc00d9ae140, 0xc017805c00)
net/http.HandlerFunc.ServeHTTP(0xc0049c5400, 0x7f1e0bf50c00, 0xc00d9ae140, 0xc017805c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f1e0bf50c00, 0xc00d9ae140, 0xc017805c00)
net/http.HandlerFunc.ServeHTTP(0xc003f968a0, 0x7f1e0bf50c00, 0xc00d9ae140, 0xc017805c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f1e0bf50c00, 0xc00d9ae140, 0xc017805c00)
net/http.HandlerFunc.ServeHTTP(0xc0049c5440, 0x7f1e0bf50c00, 0xc00d9ae140, 0xc017805c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f1e0bf50c00, 0xc00d9ae140, 0xc017805b00)
net/http.HandlerFunc.ServeHTTP(0xc002de3ef0, 0x7f1e0bf50c00, 0xc00d9ae140, 0xc017805b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc0177d7800, 0xc004bfaf60, 0x5ec0200, 0xc00d9ae140, 0xc017805b00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:33480]
I0210 16:46:20.000457  123305 storage_rbac.go:246] created role.rbac.authorization.k8s.io/system::leader-locking-kube-scheduler in kube-system
I0210 16:46:20.019403  123305 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::extension-apiserver-authentication-reader: (1.315831ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33480]
I0210 16:46:20.021531  123305 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.545305ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33480]
I0210 16:46:20.041132  123305 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.309108ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33480]
I0210 16:46:20.041395  123305 storage_rbac.go:276] created rolebinding.rbac.authorization.k8s.io/system::extension-apiserver-authentication-reader in kube-system
I0210 16:46:20.059738  123305 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::leader-locking-kube-controller-manager: (1.627424ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33480]
I0210 16:46:20.062215  123305 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.967644ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33480]
I0210 16:46:20.080320  123305 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.224076ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33480]
I0210 16:46:20.080627  123305 storage_rbac.go:276] created rolebinding.rbac.authorization.k8s.io/system::leader-locking-kube-controller-manager in kube-system
I0210 16:46:20.099432  123305 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::leader-locking-kube-scheduler: (1.236656ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33480]
I0210 16:46:20.099924  123305 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0210 16:46:20.100144  123305 wrap.go:47] GET /healthz: (1.047233ms) 500
goroutine 28910 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00ec6b650, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00ec6b650, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00e1d56e0, 0x1f4)
net/http.Error(0x7f1e0bf50c00, 0xc00d9ae188, 0xc0039feb40, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f1e0bf50c00, 0xc00d9ae188, 0xc01c96e600)
net/http.HandlerFunc.ServeHTTP(0xc003c1c9e0, 0x7f1e0bf50c00, 0xc00d9ae188, 0xc01c96e600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc017e25d40, 0x7f1e0bf50c00, 0xc00d9ae188, 0xc01c96e600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00fe2c380, 0x7f1e0bf50c00, 0xc00d9ae188, 0xc01c96e600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x41c2399, 0xe, 0xc00e049290, 0xc00fe2c380, 0x7f1e0bf50c00, 0xc00d9ae188, 0xc01c96e600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f1e0bf50c00, 0xc00d9ae188, 0xc01c96e600)
net/http.HandlerFunc.ServeHTTP(0xc0049c5400, 0x7f1e0bf50c00, 0xc00d9ae188, 0xc01c96e600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f1e0bf50c00, 0xc00d9ae188, 0xc01c96e600)
net/http.HandlerFunc.ServeHTTP(0xc003f968a0, 0x7f1e0bf50c00, 0xc00d9ae188, 0xc01c96e600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f1e0bf50c00, 0xc00d9ae188, 0xc01c96e600)
net/http.HandlerFunc.ServeHTTP(0xc0049c5440, 0x7f1e0bf50c00, 0xc00d9ae188, 0xc01c96e600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f1e0bf50c00, 0xc00d9ae188, 0xc01c96e500)
net/http.HandlerFunc.ServeHTTP(0xc002de3ef0, 0x7f1e0bf50c00, 0xc00d9ae188, 0xc01c96e500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc0177d7d40, 0xc004bfaf60, 0x5ec0200, 0xc00d9ae188, 0xc01c96e500)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:33446]
I0210 16:46:20.101321  123305 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.227185ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33480]
I0210 16:46:20.120516  123305 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.437564ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33480]
I0210 16:46:20.120742  123305 storage_rbac.go:276] created rolebinding.rbac.authorization.k8s.io/system::leader-locking-kube-scheduler in kube-system
I0210 16:46:20.139558  123305 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:bootstrap-signer: (1.463173ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33480]
I0210 16:46:20.141767  123305 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.730394ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33480]
I0210 16:46:20.160296  123305 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.179069ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33480]
I0210 16:46:20.160614  123305 storage_rbac.go:276] created rolebinding.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-system
I0210 16:46:20.180060  123305 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:cloud-provider: (2.030694ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33480]
I0210 16:46:20.182284  123305 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.731056ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33480]
I0210 16:46:20.200104  123305 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (1.962848ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33480]
I0210 16:46:20.200326  123305 storage_rbac.go:276] created rolebinding.rbac.authorization.k8s.io/system:controller:cloud-provider in kube-system
I0210 16:46:20.200755  123305 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0210 16:46:20.200919  123305 wrap.go:47] GET /healthz: (1.277792ms) 500
goroutine 29221 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc018769570, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc018769570, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00e59f900, 0x1f4)
net/http.Error(0x7f1e0bf50c00, 0xc00d49b510, 0xc01c8b8280, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f1e0bf50c00, 0xc00d49b510, 0xc01c9d8400)
net/http.HandlerFunc.ServeHTTP(0xc003c1c9e0, 0x7f1e0bf50c00, 0xc00d49b510, 0xc01c9d8400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc017e25d40, 0x7f1e0bf50c00, 0xc00d49b510, 0xc01c9d8400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00fe2c380, 0x7f1e0bf50c00, 0xc00d49b510, 0xc01c9d8400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x41c2399, 0xe, 0xc00e049290, 0xc00fe2c380, 0x7f1e0bf50c00, 0xc00d49b510, 0xc01c9d8400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f1e0bf50c00, 0xc00d49b510, 0xc01c9d8400)
net/http.HandlerFunc.ServeHTTP(0xc0049c5400, 0x7f1e0bf50c00, 0xc00d49b510, 0xc01c9d8400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f1e0bf50c00, 0xc00d49b510, 0xc01c9d8400)
net/http.HandlerFunc.ServeHTTP(0xc003f968a0, 0x7f1e0bf50c00, 0xc00d49b510, 0xc01c9d8400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f1e0bf50c00, 0xc00d49b510, 0xc01c9d8400)
net/http.HandlerFunc.ServeHTTP(0xc0049c5440, 0x7f1e0bf50c00, 0xc00d49b510, 0xc01c9d8400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f1e0bf50c00, 0xc00d49b510, 0xc01c9d8300)
net/http.HandlerFunc.ServeHTTP(0xc002de3ef0, 0x7f1e0bf50c00, 0xc00d49b510, 0xc01c9d8300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc01c8a9080, 0xc004bfaf60, 0x5ec0200, 0xc00d49b510, 0xc01c9d8300)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:33446]
I0210 16:46:20.219060  123305 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:token-cleaner: (958.075µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33446]
I0210 16:46:20.221177  123305 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.589539ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33446]
I0210 16:46:20.240737  123305 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.643197ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33446]
I0210 16:46:20.241319  123305 storage_rbac.go:276] created rolebinding.rbac.authorization.k8s.io/system:controller:token-cleaner in kube-system
I0210 16:46:20.259295  123305 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/rolebindings/system:controller:bootstrap-signer: (1.198951ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33446]
I0210 16:46:20.261330  123305 wrap.go:47] GET /api/v1/namespaces/kube-public: (1.401374ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33446]
I0210 16:46:20.280388  123305 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/rolebindings: (2.32382ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33446]
I0210 16:46:20.280664  123305 storage_rbac.go:276] created rolebinding.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-public
I0210 16:46:20.300664  123305 wrap.go:47] GET /healthz: (1.382219ms) 200 [Go-http-client/1.1 127.0.0.1:33446]
W0210 16:46:20.301408  123305 mutation_detector.go:48] Mutation detector is enabled, this will result in memory leakage.
W0210 16:46:20.301457  123305 mutation_detector.go:48] Mutation detector is enabled, this will result in memory leakage.
W0210 16:46:20.301532  123305 mutation_detector.go:48] Mutation detector is enabled, this will result in memory leakage.
W0210 16:46:20.301551  123305 mutation_detector.go:48] Mutation detector is enabled, this will result in memory leakage.
W0210 16:46:20.301563  123305 mutation_detector.go:48] Mutation detector is enabled, this will result in memory leakage.
W0210 16:46:20.301581  123305 mutation_detector.go:48] Mutation detector is enabled, this will result in memory leakage.
W0210 16:46:20.301596  123305 mutation_detector.go:48] Mutation detector is enabled, this will result in memory leakage.
W0210 16:46:20.301610  123305 mutation_detector.go:48] Mutation detector is enabled, this will result in memory leakage.
W0210 16:46:20.301629  123305 mutation_detector.go:48] Mutation detector is enabled, this will result in memory leakage.
W0210 16:46:20.301645  123305 mutation_detector.go:48] Mutation detector is enabled, this will result in memory leakage.
I0210 16:46:20.301705  123305 factory.go:331] Creating scheduler from algorithm provider 'DefaultProvider'
I0210 16:46:20.301721  123305 factory.go:412] Creating scheduler with fit predicates 'map[CheckNodeCondition:{} NoVolumeZoneConflict:{} MaxEBSVolumeCount:{} MaxCSIVolumeCountPred:{} MatchInterPodAffinity:{} CheckNodeMemoryPressure:{} CheckNodePIDPressure:{} GeneralPredicates:{} CheckVolumeBinding:{} MaxGCEPDVolumeCount:{} CheckNodeDiskPressure:{} MaxAzureDiskVolumeCount:{} NoDiskConflict:{} PodToleratesNodeTaints:{}]' and priority functions 'map[TaintTolerationPriority:{} ImageLocalityPriority:{} SelectorSpreadPriority:{} InterPodAffinityPriority:{} LeastRequestedPriority:{} BalancedResourceAllocation:{} NodePreferAvoidPodsPriority:{} NodeAffinityPriority:{}]'
I0210 16:46:20.301957  123305 controller_utils.go:1021] Waiting for caches to sync for scheduler controller
I0210 16:46:20.302325  123305 reflector.go:132] Starting reflector *v1.Pod (12h0m0s) from k8s.io/kubernetes/test/integration/scheduler/util.go:210
I0210 16:46:20.302346  123305 reflector.go:170] Listing and watching *v1.Pod from k8s.io/kubernetes/test/integration/scheduler/util.go:210
I0210 16:46:20.303234  123305 wrap.go:47] GET /api/v1/pods?fieldSelector=status.phase%21%3DFailed%2Cstatus.phase%21%3DSucceeded&limit=500&resourceVersion=0: (613.955µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33446]
I0210 16:46:20.303934  123305 get.go:251] Starting watch for /api/v1/pods, rv=19320 labels= fields=status.phase!=Failed,status.phase!=Succeeded timeout=9m59s
I0210 16:46:20.402136  123305 shared_informer.go:123] caches populated
I0210 16:46:20.402181  123305 controller_utils.go:1028] Caches are synced for scheduler controller
I0210 16:46:20.402665  123305 reflector.go:132] Starting reflector *v1.PersistentVolumeClaim (1s) from k8s.io/client-go/informers/factory.go:132
I0210 16:46:20.402685  123305 reflector.go:170] Listing and watching *v1.PersistentVolumeClaim from k8s.io/client-go/informers/factory.go:132
I0210 16:46:20.403189  123305 reflector.go:132] Starting reflector *v1.Service (1s) from k8s.io/client-go/informers/factory.go:132
I0210 16:46:20.403201  123305 reflector.go:170] Listing and watching *v1.Service from k8s.io/client-go/informers/factory.go:132
I0210 16:46:20.403637  123305 reflector.go:132] Starting reflector *v1.ReplicationController (1s) from k8s.io/client-go/informers/factory.go:132
I0210 16:46:20.403647  123305 reflector.go:170] Listing and watching *v1.ReplicationController from k8s.io/client-go/informers/factory.go:132
I0210 16:46:20.403704  123305 wrap.go:47] GET /api/v1/persistentvolumeclaims?limit=500&resourceVersion=0: (688.283µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33480]
I0210 16:46:20.403997  123305 reflector.go:132] Starting reflector *v1.StatefulSet (1s) from k8s.io/client-go/informers/factory.go:132
I0210 16:46:20.404009  123305 reflector.go:170] Listing and watching *v1.StatefulSet from k8s.io/client-go/informers/factory.go:132
I0210 16:46:20.405061  123305 wrap.go:47] GET /api/v1/replicationcontrollers?limit=500&resourceVersion=0: (993.165µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33802]
I0210 16:46:20.405285  123305 reflector.go:132] Starting reflector *v1beta1.PodDisruptionBudget (1s) from k8s.io/client-go/informers/factory.go:132
I0210 16:46:20.405302  123305 reflector.go:170] Listing and watching *v1beta1.PodDisruptionBudget from k8s.io/client-go/informers/factory.go:132
I0210 16:46:20.405312  123305 get.go:251] Starting watch for /api/v1/persistentvolumeclaims, rv=19320 labels= fields= timeout=6m52s
I0210 16:46:20.405527  123305 wrap.go:47] GET /api/v1/services?limit=500&resourceVersion=0: (549.13µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33480]
I0210 16:46:20.405571  123305 wrap.go:47] GET /apis/apps/v1/statefulsets?limit=500&resourceVersion=0: (416.899µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33800]
I0210 16:46:20.405795  123305 reflector.go:132] Starting reflector *v1.StorageClass (1s) from k8s.io/client-go/informers/factory.go:132
I0210 16:46:20.405813  123305 reflector.go:170] Listing and watching *v1.StorageClass from k8s.io/client-go/informers/factory.go:132
I0210 16:46:20.406264  123305 reflector.go:132] Starting reflector *v1.Node (1s) from k8s.io/client-go/informers/factory.go:132
I0210 16:46:20.406277  123305 reflector.go:170] Listing and watching *v1.Node from k8s.io/client-go/informers/factory.go:132
I0210 16:46:20.406605  123305 wrap.go:47] GET /apis/policy/v1beta1/poddisruptionbudgets?limit=500&resourceVersion=0: (671.049µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33806]
I0210 16:46:20.406857  123305 get.go:251] Starting watch for /api/v1/services, rv=19331 labels= fields= timeout=7m18s
I0210 16:46:20.407132  123305 get.go:251] Starting watch for /api/v1/replicationcontrollers, rv=19320 labels= fields= timeout=8m17s
I0210 16:46:20.407386  123305 reflector.go:132] Starting reflector *v1.PersistentVolume (1s) from k8s.io/client-go/informers/factory.go:132
I0210 16:46:20.407398  123305 reflector.go:170] Listing and watching *v1.PersistentVolume from k8s.io/client-go/informers/factory.go:132
I0210 16:46:20.408108  123305 wrap.go:47] GET /api/v1/nodes?limit=500&resourceVersion=0: (408.2µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33806]
I0210 16:46:20.408305  123305 get.go:251] Starting watch for /apis/policy/v1beta1/poddisruptionbudgets, rv=19322 labels= fields= timeout=6m7s
I0210 16:46:20.408727  123305 get.go:251] Starting watch for /apis/apps/v1/statefulsets, rv=19323 labels= fields= timeout=7m59s
I0210 16:46:20.408788  123305 get.go:251] Starting watch for /api/v1/nodes, rv=19320 labels= fields= timeout=9m59s
I0210 16:46:20.409073  123305 wrap.go:47] GET /api/v1/persistentvolumes?limit=500&resourceVersion=0: (548.09µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33814]
I0210 16:46:20.409242  123305 reflector.go:132] Starting reflector *v1.ReplicaSet (1s) from k8s.io/client-go/informers/factory.go:132
I0210 16:46:20.409259  123305 reflector.go:170] Listing and watching *v1.ReplicaSet from k8s.io/client-go/informers/factory.go:132
I0210 16:46:20.409278  123305 wrap.go:47] GET /apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0: (1.388235ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33808]
I0210 16:46:20.409807  123305 get.go:251] Starting watch for /api/v1/persistentvolumes, rv=19320 labels= fields= timeout=6m55s
I0210 16:46:20.410045  123305 wrap.go:47] GET /apis/apps/v1/replicasets?limit=500&resourceVersion=0: (600.025µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33814]
I0210 16:46:20.410484  123305 get.go:251] Starting watch for /apis/storage.k8s.io/v1/storageclasses, rv=19322 labels= fields= timeout=7m45s
I0210 16:46:20.410742  123305 get.go:251] Starting watch for /apis/apps/v1/replicasets, rv=19323 labels= fields= timeout=9m43s
I0210 16:46:20.502570  123305 shared_informer.go:123] caches populated
I0210 16:46:20.602820  123305 shared_informer.go:123] caches populated
I0210 16:46:20.703077  123305 shared_informer.go:123] caches populated
I0210 16:46:20.803301  123305 shared_informer.go:123] caches populated
I0210 16:46:20.903632  123305 shared_informer.go:123] caches populated
I0210 16:46:21.003857  123305 shared_informer.go:123] caches populated
I0210 16:46:21.104085  123305 shared_informer.go:123] caches populated
I0210 16:46:21.204476  123305 shared_informer.go:123] caches populated
I0210 16:46:21.304750  123305 shared_informer.go:123] caches populated
I0210 16:46:21.404964  123305 shared_informer.go:123] caches populated
I0210 16:46:21.405138  123305 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0210 16:46:21.406326  123305 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0210 16:46:21.407535  123305 wrap.go:47] POST /api/v1/nodes: (2.137602ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33814]
I0210 16:46:21.408535  123305 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0210 16:46:21.409916  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods: (1.922805ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33814]
I0210 16:46:21.410091  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/rpod-0
I0210 16:46:21.410106  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/rpod-0
I0210 16:46:21.410273  123305 scheduler_binder.go:269] AssumePodVolumes for pod "preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/rpod-0", node "node1"
I0210 16:46:21.410290  123305 scheduler_binder.go:279] AssumePodVolumes for pod "preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/rpod-0", node "node1": all PVCs bound and nothing to do
I0210 16:46:21.410328  123305 factory.go:733] Attempting to bind rpod-0 to node1
I0210 16:46:21.411441  123305 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0210 16:46:21.411892  123305 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0210 16:46:21.413196  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods: (2.873189ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33814]
I0210 16:46:21.413465  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/rpod-0/binding: (2.69805ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33834]
I0210 16:46:21.413598  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/rpod-1
I0210 16:46:21.413610  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/rpod-1
I0210 16:46:21.413666  123305 scheduler.go:571] pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/rpod-0 is bound successfully on node node1, 1 nodes evaluated, 1 nodes were found feasible
I0210 16:46:21.413698  123305 scheduler_binder.go:269] AssumePodVolumes for pod "preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/rpod-1", node "node1"
I0210 16:46:21.413709  123305 scheduler_binder.go:279] AssumePodVolumes for pod "preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/rpod-1", node "node1": all PVCs bound and nothing to do
I0210 16:46:21.413750  123305 factory.go:733] Attempting to bind rpod-1 to node1
I0210 16:46:21.415515  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events: (1.587333ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33814]
I0210 16:46:21.416893  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/rpod-1/binding: (2.863374ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33834]
I0210 16:46:21.417106  123305 scheduler.go:571] pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/rpod-1 is bound successfully on node node1, 1 nodes evaluated, 1 nodes were found feasible
I0210 16:46:21.418863  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events: (1.498152ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33834]
I0210 16:46:21.515870  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/rpod-0: (1.633958ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33834]
I0210 16:46:21.618424  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/rpod-1: (1.693528ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33834]
I0210 16:46:21.618788  123305 preemption_test.go:561] Creating the preemptor pod...
I0210 16:46:21.621430  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods: (2.340838ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33834]
I0210 16:46:21.621515  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/preemptor-pod
I0210 16:46:21.621543  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/preemptor-pod
I0210 16:46:21.621671  123305 factory.go:647] Unable to schedule preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0210 16:46:21.621686  123305 preemption_test.go:567] Creating additional pods...
I0210 16:46:21.621744  123305 factory.go:742] Updating pod condition for preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0210 16:46:21.625158  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/preemptor-pod: (1.828893ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33836]
I0210 16:46:21.625583  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods: (3.688047ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33834]
I0210 16:46:21.625969  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events: (2.638784ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33838]
I0210 16:46:21.626350  123305 wrap.go:47] PUT /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/preemptor-pod/status: (3.782627ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33814]
I0210 16:46:21.627974  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods: (1.743762ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33834]
I0210 16:46:21.628553  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/preemptor-pod: (1.748039ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33838]
I0210 16:46:21.629099  123305 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0210 16:46:21.629896  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods: (1.510777ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33834]
I0210 16:46:21.631329  123305 wrap.go:47] PUT /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/preemptor-pod/status: (1.800402ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33838]
I0210 16:46:21.632584  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods: (2.287953ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33834]
I0210 16:46:21.635580  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods: (1.76733ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33834]
I0210 16:46:21.638680  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods: (2.549982ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33834]
I0210 16:46:21.639136  123305 wrap.go:47] DELETE /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/rpod-1: (7.17568ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33838]
I0210 16:46:21.639410  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/preemptor-pod
I0210 16:46:21.639424  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/preemptor-pod
I0210 16:46:21.639590  123305 factory.go:647] Unable to schedule preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0210 16:46:21.639650  123305 factory.go:742] Updating pod condition for preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0210 16:46:21.641130  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events: (1.482542ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33834]
I0210 16:46:21.641831  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/preemptor-pod: (1.631085ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33836]
I0210 16:46:21.642607  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods: (2.299053ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33842]
I0210 16:46:21.643194  123305 wrap.go:47] PUT /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/preemptor-pod/status: (1.612125ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33834]
I0210 16:46:21.644443  123305 wrap.go:47] PATCH /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events/preemptor-pod.15820e83f15bee8b: (2.422306ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33844]
I0210 16:46:21.644964  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/preemptor-pod: (1.251867ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33834]
I0210 16:46:21.645297  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/preemptor-pod
I0210 16:46:21.645343  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/preemptor-pod
I0210 16:46:21.645528  123305 scheduler_binder.go:269] AssumePodVolumes for pod "preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/preemptor-pod", node "node1"
I0210 16:46:21.645571  123305 scheduler_binder.go:279] AssumePodVolumes for pod "preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/preemptor-pod", node "node1": all PVCs bound and nothing to do
I0210 16:46:21.645894  123305 factory.go:733] Attempting to bind preemptor-pod to node1
I0210 16:46:21.645933  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-6
I0210 16:46:21.645694  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods: (2.479113ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33842]
I0210 16:46:21.645957  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-6
I0210 16:46:21.646072  123305 factory.go:647] Unable to schedule preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-6: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0210 16:46:21.646190  123305 factory.go:742] Updating pod condition for preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-6 to (PodScheduled==False, Reason=Unschedulable)
I0210 16:46:21.649071  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-6: (2.286967ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33842]
I0210 16:46:21.649616  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/preemptor-pod/binding: (2.547339ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33844]
I0210 16:46:21.649936  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods: (3.179033ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33846]
I0210 16:46:21.650299  123305 wrap.go:47] PUT /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-6/status: (3.576199ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33836]
I0210 16:46:21.650646  123305 scheduler.go:571] pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/preemptor-pod is bound successfully on node node1, 1 nodes evaluated, 1 nodes were found feasible
I0210 16:46:21.651515  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events: (2.103796ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33842]
I0210 16:46:21.652673  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-6: (1.590554ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33844]
I0210 16:46:21.653005  123305 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0210 16:46:21.653085  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events: (1.23722ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33842]
I0210 16:46:21.653546  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-7
I0210 16:46:21.653592  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods: (2.211692ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33848]
I0210 16:46:21.653606  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-7
I0210 16:46:21.653752  123305 factory.go:647] Unable to schedule preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-7: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0210 16:46:21.653832  123305 factory.go:742] Updating pod condition for preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-7 to (PodScheduled==False, Reason=Unschedulable)
I0210 16:46:21.657418  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods: (2.877122ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33844]
I0210 16:46:21.657712  123305 wrap.go:47] PUT /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-7/status: (3.142253ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33842]
I0210 16:46:21.657738  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events: (2.25903ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33852]
I0210 16:46:21.659517  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-7: (1.496024ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33850]
I0210 16:46:21.659640  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods: (1.837629ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33844]
I0210 16:46:21.660252  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-7: (1.566627ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33842]
I0210 16:46:21.660660  123305 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0210 16:46:21.661039  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-8
I0210 16:46:21.661061  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-8
I0210 16:46:21.661159  123305 factory.go:647] Unable to schedule preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-8: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0210 16:46:21.661232  123305 factory.go:742] Updating pod condition for preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-8 to (PodScheduled==False, Reason=Unschedulable)
I0210 16:46:21.662609  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods: (2.484148ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33850]
I0210 16:46:21.663599  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events: (1.720004ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33854]
I0210 16:46:21.664273  123305 wrap.go:47] PUT /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-8/status: (2.659211ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33842]
I0210 16:46:21.665019  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-8: (2.282007ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33852]
I0210 16:46:21.665544  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods: (2.428107ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33850]
I0210 16:46:21.666037  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-8: (1.194102ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33842]
I0210 16:46:21.666285  123305 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0210 16:46:21.666482  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-11
I0210 16:46:21.666535  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-11
I0210 16:46:21.666732  123305 factory.go:647] Unable to schedule preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-11: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0210 16:46:21.666804  123305 factory.go:742] Updating pod condition for preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-11 to (PodScheduled==False, Reason=Unschedulable)
I0210 16:46:21.667640  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods: (1.668079ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33852]
I0210 16:46:21.668715  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-11: (1.659252ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33854]
I0210 16:46:21.670043  123305 wrap.go:47] PUT /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-11/status: (3.017541ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33842]
I0210 16:46:21.671352  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events: (3.312974ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33852]
I0210 16:46:21.671604  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods: (1.884097ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33854]
I0210 16:46:21.672310  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-11: (1.303135ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33842]
I0210 16:46:21.672595  123305 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0210 16:46:21.672969  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-13
I0210 16:46:21.672991  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-13
I0210 16:46:21.673094  123305 factory.go:647] Unable to schedule preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-13: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0210 16:46:21.673216  123305 factory.go:742] Updating pod condition for preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-13 to (PodScheduled==False, Reason=Unschedulable)
I0210 16:46:21.674280  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods: (1.904876ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33852]
I0210 16:46:21.676650  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods: (1.937503ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33852]
I0210 16:46:21.676708  123305 wrap.go:47] PUT /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-13/status: (2.974624ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33856]
I0210 16:46:21.676835  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-13: (3.302067ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33842]
I0210 16:46:21.678638  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events: (3.06668ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33858]
I0210 16:46:21.680696  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-13: (1.236363ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33856]
I0210 16:46:21.681009  123305 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0210 16:46:21.681268  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods: (1.840223ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33852]
I0210 16:46:21.681569  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-15
I0210 16:46:21.681592  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-15
I0210 16:46:21.681778  123305 factory.go:647] Unable to schedule preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-15: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0210 16:46:21.681826  123305 factory.go:742] Updating pod condition for preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-15 to (PodScheduled==False, Reason=Unschedulable)
I0210 16:46:21.683831  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-15: (1.185704ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33860]
I0210 16:46:21.684278  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events: (1.511322ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33862]
I0210 16:46:21.684398  123305 wrap.go:47] PUT /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-15/status: (1.837439ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33858]
I0210 16:46:21.684403  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods: (2.563472ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33842]
I0210 16:46:21.686697  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-15: (1.670922ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33860]
I0210 16:46:21.686725  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods: (1.750299ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33862]
I0210 16:46:21.687021  123305 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0210 16:46:21.687213  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-13
I0210 16:46:21.687231  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-13
I0210 16:46:21.687341  123305 factory.go:647] Unable to schedule preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-13: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0210 16:46:21.687435  123305 factory.go:742] Updating pod condition for preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-13 to (PodScheduled==False, Reason=Unschedulable)
I0210 16:46:21.688975  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods: (1.689547ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33862]
I0210 16:46:21.689882  123305 wrap.go:47] PUT /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-13/status: (2.207704ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33860]
I0210 16:46:21.690995  123305 wrap.go:47] PATCH /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events/ppod-13.15820e83f46d9c5a: (2.65623ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33866]
I0210 16:46:21.691411  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-13: (1.12064ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33860]
I0210 16:46:21.691648  123305 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0210 16:46:21.692325  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-20
I0210 16:46:21.692406  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-20
I0210 16:46:21.692481  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods: (2.478399ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33862]
I0210 16:46:21.692686  123305 factory.go:647] Unable to schedule preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-20: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0210 16:46:21.692741  123305 factory.go:742] Updating pod condition for preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-20 to (PodScheduled==False, Reason=Unschedulable)
I0210 16:46:21.694824  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods: (1.874515ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33860]
I0210 16:46:21.696835  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-20: (2.120388ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33868]
I0210 16:46:21.697961  123305 wrap.go:47] PUT /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-20/status: (4.715935ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33866]
I0210 16:46:21.698599  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events: (5.128015ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33870]
I0210 16:46:21.698812  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods: (2.507255ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33860]
I0210 16:46:21.699541  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-20: (1.169668ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33866]
I0210 16:46:21.699811  123305 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0210 16:46:21.699959  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-21
I0210 16:46:21.699982  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-21
I0210 16:46:21.700103  123305 factory.go:647] Unable to schedule preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-21: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0210 16:46:21.700156  123305 factory.go:742] Updating pod condition for preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-21 to (PodScheduled==False, Reason=Unschedulable)
I0210 16:46:21.700643  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-13: (11.245936ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33864]
I0210 16:46:21.701726  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-21: (1.174947ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33868]
I0210 16:46:21.702594  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events: (1.766804ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33872]
I0210 16:46:21.702610  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods: (3.171356ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33870]
I0210 16:46:21.704631  123305 wrap.go:47] PUT /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-21/status: (4.037796ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33866]
I0210 16:46:21.705014  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods: (2.016524ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33868]
I0210 16:46:21.706246  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-21: (1.117991ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33872]
I0210 16:46:21.707088  123305 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0210 16:46:21.707274  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-24
I0210 16:46:21.707290  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-24
I0210 16:46:21.707388  123305 factory.go:647] Unable to schedule preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-24: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0210 16:46:21.707440  123305 factory.go:742] Updating pod condition for preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-24 to (PodScheduled==False, Reason=Unschedulable)
I0210 16:46:21.707573  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods: (2.162759ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33868]
I0210 16:46:21.710200  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events: (2.364105ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33868]
I0210 16:46:21.710986  123305 wrap.go:47] PUT /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-24/status: (3.134336ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33864]
I0210 16:46:21.711192  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-24: (3.53455ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33872]
I0210 16:46:21.713464  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods: (2.049465ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33874]
I0210 16:46:21.716773  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods: (2.558092ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33874]
I0210 16:46:21.716984  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-24: (4.928357ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33872]
I0210 16:46:21.717305  123305 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0210 16:46:21.717518  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-26
I0210 16:46:21.717537  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-26
I0210 16:46:21.717672  123305 factory.go:647] Unable to schedule preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-26: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0210 16:46:21.717810  123305 factory.go:742] Updating pod condition for preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-26 to (PodScheduled==False, Reason=Unschedulable)
I0210 16:46:21.719353  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods: (2.22699ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33874]
I0210 16:46:21.722223  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events: (2.565375ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33876]
I0210 16:46:21.723329  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods: (3.429029ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33874]
I0210 16:46:21.723410  123305 wrap.go:47] PUT /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-26/status: (4.967832ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33872]
I0210 16:46:21.723814  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-26: (5.541627ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33868]
I0210 16:46:21.726096  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods: (2.233086ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33874]
I0210 16:46:21.727631  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-26: (3.315983ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33872]
I0210 16:46:21.728430  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods: (1.751174ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33868]
I0210 16:46:21.728472  123305 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0210 16:46:21.728641  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-29
I0210 16:46:21.728683  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-29
I0210 16:46:21.728783  123305 factory.go:647] Unable to schedule preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-29: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0210 16:46:21.728859  123305 factory.go:742] Updating pod condition for preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-29 to (PodScheduled==False, Reason=Unschedulable)
I0210 16:46:21.730982  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-29: (1.57547ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33880]
I0210 16:46:21.731642  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events: (2.098822ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33882]
I0210 16:46:21.731760  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods: (2.625395ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33872]
I0210 16:46:21.731814  123305 wrap.go:47] PUT /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-29/status: (2.444489ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33876]
I0210 16:46:21.733953  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-29: (1.369839ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33876]
I0210 16:46:21.734384  123305 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0210 16:46:21.735277  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-33
I0210 16:46:21.735314  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-33
I0210 16:46:21.735585  123305 factory.go:647] Unable to schedule preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-33: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0210 16:46:21.735682  123305 factory.go:742] Updating pod condition for preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-33 to (PodScheduled==False, Reason=Unschedulable)
I0210 16:46:21.735750  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods: (3.523999ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33882]
I0210 16:46:21.737042  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-33: (1.1141ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33880]
I0210 16:46:21.737797  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events: (1.363484ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33882]
I0210 16:46:21.738278  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods: (1.902823ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33884]
I0210 16:46:21.738553  123305 wrap.go:47] PUT /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-33/status: (2.489466ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33876]
I0210 16:46:21.740076  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods: (1.303052ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33882]
I0210 16:46:21.740142  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-33: (1.182722ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33876]
I0210 16:46:21.740775  123305 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0210 16:46:21.741160  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-34
I0210 16:46:21.741189  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-34
I0210 16:46:21.741265  123305 factory.go:647] Unable to schedule preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-34: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0210 16:46:21.741306  123305 factory.go:742] Updating pod condition for preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-34 to (PodScheduled==False, Reason=Unschedulable)
I0210 16:46:21.743344  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods: (2.666684ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33882]
I0210 16:46:21.743417  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events: (1.442018ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33886]
I0210 16:46:21.744039  123305 wrap.go:47] PUT /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-34/status: (1.775347ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33876]
I0210 16:46:21.745638  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-34: (3.702179ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33880]
I0210 16:46:21.746347  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-34: (1.278792ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33886]
I0210 16:46:21.746709  123305 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0210 16:46:21.747388  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods: (2.36733ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33882]
I0210 16:46:21.747561  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-37
I0210 16:46:21.747599  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-37
I0210 16:46:21.748598  123305 factory.go:647] Unable to schedule preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-37: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0210 16:46:21.748703  123305 factory.go:742] Updating pod condition for preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-37 to (PodScheduled==False, Reason=Unschedulable)
I0210 16:46:21.750708  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods: (1.902438ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33886]
I0210 16:46:21.750911  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events: (1.519373ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33888]
I0210 16:46:21.752104  123305 wrap.go:47] PUT /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-37/status: (2.331961ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33890]
I0210 16:46:21.752709  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-37: (2.633701ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33880]
I0210 16:46:21.754970  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods: (3.070685ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33888]
I0210 16:46:21.755354  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-37: (2.294692ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33890]
I0210 16:46:21.755770  123305 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0210 16:46:21.755964  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-34
I0210 16:46:21.755981  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-34
I0210 16:46:21.756187  123305 factory.go:647] Unable to schedule preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-34: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0210 16:46:21.756729  123305 factory.go:742] Updating pod condition for preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-34 to (PodScheduled==False, Reason=Unschedulable)
I0210 16:46:21.757242  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods: (1.746178ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33880]
I0210 16:46:21.759676  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods: (1.521828ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33892]
I0210 16:46:21.760110  123305 wrap.go:47] PUT /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-34/status: (2.837282ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33886]
I0210 16:46:21.760388  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-34: (3.44526ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33890]
I0210 16:46:21.761008  123305 wrap.go:47] PATCH /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events/ppod-34.15820e83f87ca56f: (3.364893ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33880]
I0210 16:46:21.762193  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods: (1.957789ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33892]
I0210 16:46:21.762834  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-34: (2.004599ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33890]
I0210 16:46:21.763181  123305 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0210 16:46:21.763322  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-41
I0210 16:46:21.763337  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-41
I0210 16:46:21.763400  123305 factory.go:647] Unable to schedule preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-41: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0210 16:46:21.763444  123305 factory.go:742] Updating pod condition for preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-41 to (PodScheduled==False, Reason=Unschedulable)
I0210 16:46:21.765245  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods: (2.679755ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33880]
I0210 16:46:21.766354  123305 wrap.go:47] PUT /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-41/status: (2.688392ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33890]
I0210 16:46:21.767356  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-41: (3.347229ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33886]
I0210 16:46:21.767443  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events: (1.550269ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33894]
I0210 16:46:21.768373  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-41: (1.293325ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33890]
I0210 16:46:21.768582  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods: (2.278247ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33880]
I0210 16:46:21.768674  123305 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0210 16:46:21.768907  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-44
I0210 16:46:21.768923  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-44
I0210 16:46:21.769063  123305 factory.go:647] Unable to schedule preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-44: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0210 16:46:21.769158  123305 factory.go:742] Updating pod condition for preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-44 to (PodScheduled==False, Reason=Unschedulable)
I0210 16:46:21.770553  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods: (1.469893ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33894]
I0210 16:46:21.770910  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-44: (1.554809ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33886]
I0210 16:46:21.772808  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events: (1.818384ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33894]
I0210 16:46:21.773121  123305 wrap.go:47] PUT /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-44/status: (3.327304ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33896]
I0210 16:46:21.773780  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods: (2.574517ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33886]
I0210 16:46:21.775244  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-44: (1.614625ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33894]
I0210 16:46:21.775515  123305 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0210 16:46:21.775643  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-46
I0210 16:46:21.775796  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods: (1.487911ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33886]
I0210 16:46:21.775810  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-46
I0210 16:46:21.775976  123305 factory.go:647] Unable to schedule preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-46: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0210 16:46:21.776056  123305 factory.go:742] Updating pod condition for preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-46 to (PodScheduled==False, Reason=Unschedulable)
I0210 16:46:21.777443  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-46: (1.076089ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33894]
I0210 16:46:21.778090  123305 wrap.go:47] PUT /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-46/status: (1.696988ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33898]
I0210 16:46:21.778424  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events: (1.800967ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33900]
I0210 16:46:21.779987  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-46: (1.454576ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33898]
I0210 16:46:21.780832  123305 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0210 16:46:21.780988  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-48
I0210 16:46:21.781022  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-48
I0210 16:46:21.781118  123305 factory.go:647] Unable to schedule preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-48: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0210 16:46:21.781177  123305 factory.go:742] Updating pod condition for preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-48 to (PodScheduled==False, Reason=Unschedulable)
I0210 16:46:21.782619  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-48: (1.185025ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33894]
I0210 16:46:21.783600  123305 wrap.go:47] PUT /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-48/status: (2.210548ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33900]
I0210 16:46:21.784544  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events: (2.74604ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33902]
I0210 16:46:21.785581  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-48: (1.208097ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33894]
I0210 16:46:21.786583  123305 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0210 16:46:21.786799  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-46
I0210 16:46:21.786820  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-46
I0210 16:46:21.787022  123305 factory.go:647] Unable to schedule preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-46: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0210 16:46:21.787073  123305 factory.go:742] Updating pod condition for preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-46 to (PodScheduled==False, Reason=Unschedulable)
I0210 16:46:21.788973  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-46: (1.608781ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33900]
I0210 16:46:21.789547  123305 wrap.go:47] PUT /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-46/status: (2.120235ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33902]
I0210 16:46:21.790435  123305 wrap.go:47] PATCH /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events/ppod-46.15820e83fa8ecd06: (2.658855ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33904]
I0210 16:46:21.791252  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-46: (1.214219ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33902]
I0210 16:46:21.791574  123305 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0210 16:46:21.791757  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-48
I0210 16:46:21.791786  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-48
I0210 16:46:21.791889  123305 factory.go:647] Unable to schedule preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-48: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0210 16:46:21.792024  123305 factory.go:742] Updating pod condition for preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-48 to (PodScheduled==False, Reason=Unschedulable)
I0210 16:46:21.793250  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-48: (1.039462ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33904]
I0210 16:46:21.794396  123305 wrap.go:47] PUT /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-48/status: (2.047983ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33900]
I0210 16:46:21.795920  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-48: (995.094µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33900]
I0210 16:46:21.796258  123305 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0210 16:46:21.796544  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-49
I0210 16:46:21.796597  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-49
I0210 16:46:21.796711  123305 factory.go:647] Unable to schedule preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-49: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0210 16:46:21.796757  123305 factory.go:742] Updating pod condition for preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-49 to (PodScheduled==False, Reason=Unschedulable)
I0210 16:46:21.796785  123305 wrap.go:47] PATCH /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events/ppod-48.15820e83fadcd4ea: (3.16566ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33904]
I0210 16:46:21.798597  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-49: (1.553676ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33904]
I0210 16:46:21.799422  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events: (2.126999ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33906]
I0210 16:46:21.800090  123305 wrap.go:47] PUT /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-49/status: (3.112219ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33900]
I0210 16:46:21.801843  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-49: (1.361468ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33900]
I0210 16:46:21.802174  123305 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0210 16:46:21.802363  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-44
I0210 16:46:21.802381  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-44
I0210 16:46:21.802523  123305 factory.go:647] Unable to schedule preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-44: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0210 16:46:21.802573  123305 factory.go:742] Updating pod condition for preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-44 to (PodScheduled==False, Reason=Unschedulable)
I0210 16:46:21.804280  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-44: (1.216753ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33904]
I0210 16:46:21.804683  123305 wrap.go:47] PUT /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-44/status: (1.862253ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33906]
I0210 16:46:21.805803  123305 wrap.go:47] PATCH /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events/ppod-44.15820e83fa24f80a: (2.143052ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33908]
I0210 16:46:21.806195  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-44: (1.121939ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33906]
I0210 16:46:21.806451  123305 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0210 16:46:21.806690  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-49
I0210 16:46:21.806708  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-49
I0210 16:46:21.806813  123305 factory.go:647] Unable to schedule preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-49: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0210 16:46:21.806864  123305 factory.go:742] Updating pod condition for preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-49 to (PodScheduled==False, Reason=Unschedulable)
I0210 16:46:21.808340  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-49: (1.203845ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33904]
I0210 16:46:21.810456  123305 wrap.go:47] PATCH /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events/ppod-49.15820e83fbcac041: (2.798308ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33910]
I0210 16:46:21.810562  123305 wrap.go:47] PUT /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-49/status: (3.444497ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33908]
I0210 16:46:21.813459  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-49: (2.163898ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33910]
I0210 16:46:21.814389  123305 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0210 16:46:21.814544  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-47
I0210 16:46:21.814561  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-47
I0210 16:46:21.814654  123305 factory.go:647] Unable to schedule preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-47: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0210 16:46:21.814702  123305 factory.go:742] Updating pod condition for preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-47 to (PodScheduled==False, Reason=Unschedulable)
I0210 16:46:21.816513  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-47: (1.12393ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33904]
I0210 16:46:21.816993  123305 wrap.go:47] PUT /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-47/status: (1.997122ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33910]
I0210 16:46:21.817574  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events: (1.64993ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33912]
I0210 16:46:21.818628  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-47: (1.157416ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33910]
I0210 16:46:21.818860  123305 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0210 16:46:21.818998  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-41
I0210 16:46:21.819018  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-41
I0210 16:46:21.819118  123305 factory.go:647] Unable to schedule preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-41: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0210 16:46:21.819194  123305 factory.go:742] Updating pod condition for preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-41 to (PodScheduled==False, Reason=Unschedulable)
I0210 16:46:21.821274  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-41: (1.798969ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33912]
I0210 16:46:21.822692  123305 wrap.go:47] PATCH /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events/ppod-41.15820e83f9ce61dc: (2.778067ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33914]
I0210 16:46:21.822904  123305 wrap.go:47] PUT /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-41/status: (3.431772ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33910]
I0210 16:46:21.824711  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-41: (1.310168ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33914]
I0210 16:46:21.825005  123305 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0210 16:46:21.825238  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-47
I0210 16:46:21.825273  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-47
I0210 16:46:21.825391  123305 factory.go:647] Unable to schedule preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-47: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0210 16:46:21.825465  123305 factory.go:742] Updating pod condition for preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-47 to (PodScheduled==False, Reason=Unschedulable)
I0210 16:46:21.827570  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-47: (1.777061ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33914]
I0210 16:46:21.828643  123305 wrap.go:47] PUT /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-47/status: (2.868342ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33912]
I0210 16:46:21.833211  123305 wrap.go:47] PATCH /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events/ppod-47.15820e83fcdc8ec4: (6.902476ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33916]
I0210 16:46:21.840942  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-47: (11.861961ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33912]
I0210 16:46:21.841397  123305 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0210 16:46:21.841620  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-45
I0210 16:46:21.841640  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-45
I0210 16:46:21.841782  123305 factory.go:647] Unable to schedule preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-45: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0210 16:46:21.841846  123305 factory.go:742] Updating pod condition for preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-45 to (PodScheduled==False, Reason=Unschedulable)
I0210 16:46:21.843627  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-45: (1.476974ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33914]
I0210 16:46:21.844678  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events: (2.130682ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33918]
I0210 16:46:21.844741  123305 wrap.go:47] PUT /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-45/status: (2.575603ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33916]
I0210 16:46:21.846426  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-45: (1.237929ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33918]
I0210 16:46:21.846706  123305 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0210 16:46:21.846882  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-43
I0210 16:46:21.846897  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-43
I0210 16:46:21.847044  123305 factory.go:647] Unable to schedule preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-43: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0210 16:46:21.847098  123305 factory.go:742] Updating pod condition for preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-43 to (PodScheduled==False, Reason=Unschedulable)
I0210 16:46:21.848598  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-43: (1.248252ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33914]
I0210 16:46:21.850454  123305 wrap.go:47] PUT /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-43/status: (3.070632ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33918]
I0210 16:46:21.850626  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events: (2.928654ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33920]
I0210 16:46:21.852080  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-43: (1.164092ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33918]
I0210 16:46:21.852322  123305 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0210 16:46:21.852513  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-45
I0210 16:46:21.852574  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-45
I0210 16:46:21.852702  123305 factory.go:647] Unable to schedule preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-45: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0210 16:46:21.852758  123305 factory.go:742] Updating pod condition for preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-45 to (PodScheduled==False, Reason=Unschedulable)
I0210 16:46:21.854893  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-45: (1.81525ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33918]
I0210 16:46:21.854976  123305 wrap.go:47] PUT /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-45/status: (1.786951ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33914]
I0210 16:46:21.856249  123305 wrap.go:47] PATCH /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events/ppod-45.15820e83fe7abbc6: (2.729516ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33922]
I0210 16:46:21.856741  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-45: (1.210549ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33914]
I0210 16:46:21.857041  123305 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0210 16:46:21.857265  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-43
I0210 16:46:21.857288  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-43
I0210 16:46:21.857397  123305 factory.go:647] Unable to schedule preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-43: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0210 16:46:21.857463  123305 factory.go:742] Updating pod condition for preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-43 to (PodScheduled==False, Reason=Unschedulable)
I0210 16:46:21.858810  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-43: (1.092693ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33922]
I0210 16:46:21.860177  123305 wrap.go:47] PUT /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-43/status: (2.438716ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33918]
I0210 16:46:21.861630  123305 wrap.go:47] PATCH /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events/ppod-43.15820e83fecae52d: (3.339571ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33924]
I0210 16:46:21.863140  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-43: (1.467986ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33918]
I0210 16:46:21.863513  123305 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0210 16:46:21.863678  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-42
I0210 16:46:21.863695  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-42
I0210 16:46:21.863814  123305 factory.go:647] Unable to schedule preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-42: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0210 16:46:21.863863  123305 factory.go:742] Updating pod condition for preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-42 to (PodScheduled==False, Reason=Unschedulable)
I0210 16:46:21.866256  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-42: (1.803288ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33922]
I0210 16:46:21.867617  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events: (2.672188ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33926]
I0210 16:46:21.867701  123305 wrap.go:47] PUT /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-42/status: (3.149924ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33924]
I0210 16:46:21.870303  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-42: (1.448142ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33924]
I0210 16:46:21.870556  123305 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0210 16:46:21.870716  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-37
I0210 16:46:21.870732  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-37
I0210 16:46:21.870847  123305 factory.go:647] Unable to schedule preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-37: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0210 16:46:21.870904  123305 factory.go:742] Updating pod condition for preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-37 to (PodScheduled==False, Reason=Unschedulable)
I0210 16:46:21.873749  123305 wrap.go:47] PUT /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-37/status: (2.103167ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33922]
I0210 16:46:21.874412  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-37: (2.833489ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33924]
I0210 16:46:21.875237  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-37: (1.132653ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33922]
I0210 16:46:21.875798  123305 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0210 16:46:21.875959  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-42
I0210 16:46:21.875975  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-42
I0210 16:46:21.875983  123305 wrap.go:47] PATCH /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events/ppod-37.15820e83f8ed4935: (3.326786ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33928]
I0210 16:46:21.876088  123305 factory.go:647] Unable to schedule preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-42: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0210 16:46:21.876138  123305 factory.go:742] Updating pod condition for preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-42 to (PodScheduled==False, Reason=Unschedulable)
I0210 16:46:21.878296  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/preemptor-pod: (1.989542ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33922]
I0210 16:46:21.878479  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-42: (1.820985ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33930]
I0210 16:46:21.878480  123305 wrap.go:47] PUT /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-42/status: (2.099918ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33924]
I0210 16:46:21.880247  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-42: (1.23578ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33930]
I0210 16:46:21.880462  123305 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0210 16:46:21.880664  123305 wrap.go:47] PATCH /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events/ppod-42.15820e83ffcab60a: (2.582848ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33932]
I0210 16:46:21.880856  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-40
I0210 16:46:21.880877  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-40
I0210 16:46:21.880977  123305 factory.go:647] Unable to schedule preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-40: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0210 16:46:21.881105  123305 factory.go:742] Updating pod condition for preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-40 to (PodScheduled==False, Reason=Unschedulable)
I0210 16:46:21.883192  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-40: (1.793447ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33930]
I0210 16:46:21.884216  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events: (1.689336ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33934]
I0210 16:46:21.885747  123305 wrap.go:47] PUT /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-40/status: (4.304731ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33922]
I0210 16:46:21.887469  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-40: (1.224842ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33934]
I0210 16:46:21.887731  123305 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0210 16:46:21.887944  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-39
I0210 16:46:21.887966  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-39
I0210 16:46:21.888110  123305 factory.go:647] Unable to schedule preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-39: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0210 16:46:21.888194  123305 factory.go:742] Updating pod condition for preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-39 to (PodScheduled==False, Reason=Unschedulable)
I0210 16:46:21.890132  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-39: (1.695168ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33930]
I0210 16:46:21.891880  123305 wrap.go:47] PUT /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-39/status: (3.425804ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33934]
I0210 16:46:21.892002  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events: (3.055359ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33936]
I0210 16:46:21.893827  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-39: (1.52502ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33934]
I0210 16:46:21.894275  123305 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0210 16:46:21.894434  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-40
I0210 16:46:21.894450  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-40
I0210 16:46:21.894569  123305 factory.go:647] Unable to schedule preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-40: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0210 16:46:21.894627  123305 factory.go:742] Updating pod condition for preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-40 to (PodScheduled==False, Reason=Unschedulable)
I0210 16:46:21.899350  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-40: (3.898932ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33930]
I0210 16:46:21.899417  123305 wrap.go:47] PUT /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-40/status: (3.581524ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33934]
I0210 16:46:21.901563  123305 wrap.go:47] PATCH /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events/ppod-40.15820e8400d1cc67: (4.254999ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33940]
I0210 16:46:21.901601  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-40: (1.729391ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33930]
I0210 16:46:21.901932  123305 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0210 16:46:21.902116  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-39
I0210 16:46:21.902161  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-39
I0210 16:46:21.902299  123305 factory.go:647] Unable to schedule preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-39: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0210 16:46:21.902363  123305 factory.go:742] Updating pod condition for preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-39 to (PodScheduled==False, Reason=Unschedulable)
I0210 16:46:21.904697  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-39: (1.994792ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33934]
I0210 16:46:21.907540  123305 wrap.go:47] PUT /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-39/status: (4.896165ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33930]
I0210 16:46:21.908399  123305 wrap.go:47] PATCH /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events/ppod-39.15820e84013d9249: (5.297427ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33944]
I0210 16:46:21.909819  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-39: (1.757498ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33930]
I0210 16:46:21.910133  123305 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0210 16:46:21.910306  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-38
I0210 16:46:21.910329  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-38
I0210 16:46:21.910409  123305 factory.go:647] Unable to schedule preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-38: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0210 16:46:21.910457  123305 factory.go:742] Updating pod condition for preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-38 to (PodScheduled==False, Reason=Unschedulable)
I0210 16:46:21.912418  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-38: (1.089311ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33934]
I0210 16:46:21.912940  123305 wrap.go:47] PUT /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-38/status: (2.229673ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33944]
I0210 16:46:21.914342  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events: (2.493897ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33946]
I0210 16:46:21.915485  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-38: (2.100433ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33944]
I0210 16:46:21.915876  123305 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0210 16:46:21.916454  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-36
I0210 16:46:21.916477  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-36
I0210 16:46:21.917304  123305 factory.go:647] Unable to schedule preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-36: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0210 16:46:21.917420  123305 factory.go:742] Updating pod condition for preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-36 to (PodScheduled==False, Reason=Unschedulable)
I0210 16:46:21.918950  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-36: (1.138513ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33946]
I0210 16:46:21.920301  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events: (2.224884ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33934]
I0210 16:46:21.921141  123305 wrap.go:47] PUT /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-36/status: (1.850185ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33946]
I0210 16:46:21.922907  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-36: (1.25724ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33946]
I0210 16:46:21.923298  123305 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0210 16:46:21.923412  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-33
I0210 16:46:21.923432  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-33
I0210 16:46:21.923539  123305 factory.go:647] Unable to schedule preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-33: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0210 16:46:21.923639  123305 factory.go:742] Updating pod condition for preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-33 to (PodScheduled==False, Reason=Unschedulable)
I0210 16:46:21.926429  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-33: (1.492981ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33934]
I0210 16:46:21.927719  123305 wrap.go:47] PUT /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-33/status: (3.808577ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33946]
I0210 16:46:21.927991  123305 wrap.go:47] PATCH /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events/ppod-33.15820e83f8269176: (2.564151ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33948]
I0210 16:46:21.929946  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-33: (1.681074ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33946]
I0210 16:46:21.930390  123305 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0210 16:46:21.930650  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-36
I0210 16:46:21.930711  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-36
I0210 16:46:21.930869  123305 factory.go:647] Unable to schedule preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-36: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0210 16:46:21.932049  123305 factory.go:742] Updating pod condition for preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-36 to (PodScheduled==False, Reason=Unschedulable)
I0210 16:46:21.933645  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-36: (1.359789ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33948]
I0210 16:46:21.934818  123305 wrap.go:47] PUT /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-36/status: (2.422636ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33934]
I0210 16:46:21.937110  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-36: (1.698112ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33934]
I0210 16:46:21.937337  123305 wrap.go:47] PATCH /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events/ppod-36.15820e8402fba6a5: (2.552432ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33950]
I0210 16:46:21.937431  123305 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0210 16:46:21.937624  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-35
I0210 16:46:21.937660  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-35
I0210 16:46:21.937832  123305 factory.go:647] Unable to schedule preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-35: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0210 16:46:21.937939  123305 factory.go:742] Updating pod condition for preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-35 to (PodScheduled==False, Reason=Unschedulable)
I0210 16:46:21.941243  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-35: (2.184865ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33948]
I0210 16:46:21.941268  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events: (2.139439ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33952]
I0210 16:46:21.941634  123305 wrap.go:47] PUT /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-35/status: (3.22497ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33934]
I0210 16:46:21.943374  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-35: (1.272822ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33952]
I0210 16:46:21.943720  123305 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0210 16:46:21.943937  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-29
I0210 16:46:21.943951  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-29
I0210 16:46:21.944083  123305 factory.go:647] Unable to schedule preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-29: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0210 16:46:21.944156  123305 factory.go:742] Updating pod condition for preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-29 to (PodScheduled==False, Reason=Unschedulable)
I0210 16:46:21.945739  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-29: (1.108082ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33948]
I0210 16:46:21.946182  123305 wrap.go:47] PUT /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-29/status: (1.695723ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33952]
I0210 16:46:21.947430  123305 wrap.go:47] PATCH /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events/ppod-29.15820e83f7be9e9f: (2.578187ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33958]
I0210 16:46:21.947910  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-29: (1.323962ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33952]
I0210 16:46:21.948196  123305 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0210 16:46:21.948343  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-35
I0210 16:46:21.948376  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-35
I0210 16:46:21.948515  123305 factory.go:647] Unable to schedule preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-35: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0210 16:46:21.948561  123305 factory.go:742] Updating pod condition for preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-35 to (PodScheduled==False, Reason=Unschedulable)
I0210 16:46:21.950139  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-35: (1.148493ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33948]
I0210 16:46:21.950909  123305 wrap.go:47] PUT /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-35/status: (2.108291ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33958]
I0210 16:46:21.951929  123305 wrap.go:47] PATCH /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events/ppod-35.15820e840434fad4: (2.491742ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33962]
I0210 16:46:21.952330  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-35: (990.59µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33958]
I0210 16:46:21.952714  123305 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0210 16:46:21.952879  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-32
I0210 16:46:21.952894  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-32
I0210 16:46:21.953001  123305 factory.go:647] Unable to schedule preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-32: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0210 16:46:21.953051  123305 factory.go:742] Updating pod condition for preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-32 to (PodScheduled==False, Reason=Unschedulable)
I0210 16:46:21.955720  123305 wrap.go:47] PUT /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-32/status: (2.253927ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33962]
I0210 16:46:21.957871  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events: (4.338631ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33948]
I0210 16:46:21.958606  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-32: (4.158042ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33968]
I0210 16:46:21.958830  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-32: (1.168342ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33962]
I0210 16:46:21.960785  123305 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0210 16:46:21.961023  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-26
I0210 16:46:21.961062  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-26
I0210 16:46:21.961176  123305 factory.go:647] Unable to schedule preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-26: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0210 16:46:21.961244  123305 factory.go:742] Updating pod condition for preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-26 to (PodScheduled==False, Reason=Unschedulable)
I0210 16:46:21.964672  123305 wrap.go:47] PUT /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-26/status: (2.885648ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33948]
I0210 16:46:21.964890  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-26: (3.328005ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33968]
I0210 16:46:21.965559  123305 wrap.go:47] PATCH /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events/ppod-26.15820e83f714d140: (3.062238ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33976]
I0210 16:46:21.966448  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-26: (1.362343ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33948]
I0210 16:46:21.966850  123305 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0210 16:46:21.970750  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-32
I0210 16:46:21.971575  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-32
I0210 16:46:21.971783  123305 factory.go:647] Unable to schedule preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-32: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0210 16:46:21.971840  123305 factory.go:742] Updating pod condition for preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-32 to (PodScheduled==False, Reason=Unschedulable)
I0210 16:46:21.974425  123305 wrap.go:47] PUT /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-32/status: (2.212379ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33968]
I0210 16:46:21.977981  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-32: (3.125413ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33976]
I0210 16:46:21.978387  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-32: (3.180594ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33968]
I0210 16:46:21.978910  123305 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0210 16:46:21.979122  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-31
I0210 16:46:21.979197  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-31
I0210 16:46:21.979212  123305 wrap.go:47] PATCH /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events/ppod-32.15820e84051b8fa9: (4.649753ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33982]
I0210 16:46:21.979338  123305 factory.go:647] Unable to schedule preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-31: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0210 16:46:21.979399  123305 factory.go:742] Updating pod condition for preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-31 to (PodScheduled==False, Reason=Unschedulable)
I0210 16:46:21.980287  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/preemptor-pod: (1.411881ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33968]
I0210 16:46:21.981143  123305 preemption_test.go:583] Check unschedulable pods still exists and were never scheduled...
I0210 16:46:21.981705  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events: (1.315123ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33982]
I0210 16:46:21.984237  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-0: (2.855912ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33968]
I0210 16:46:21.984833  123305 wrap.go:47] PUT /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-31/status: (4.383997ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33990]
I0210 16:46:21.985841  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-1: (1.231944ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33968]
I0210 16:46:21.986022  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-31: (5.741234ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33976]
I0210 16:46:21.986264  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-31: (997.863µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33990]
I0210 16:46:21.986685  123305 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0210 16:46:21.986936  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-30
I0210 16:46:21.986956  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-30
I0210 16:46:21.987098  123305 factory.go:647] Unable to schedule preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-30: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0210 16:46:21.987142  123305 factory.go:742] Updating pod condition for preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-30 to (PodScheduled==False, Reason=Unschedulable)
I0210 16:46:21.987554  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-2: (1.179676ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33968]
I0210 16:46:21.988995  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-30: (1.282401ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33982]
I0210 16:46:21.989407  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events: (1.338039ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33968]
I0210 16:46:21.989513  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-3: (1.192894ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34004]
I0210 16:46:21.991108  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-4: (1.196507ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33982]
I0210 16:46:21.992871  123305 wrap.go:47] PUT /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-30/status: (4.868833ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33976]
I0210 16:46:21.993012  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-5: (1.519145ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33982]
I0210 16:46:21.994883  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-30: (1.292449ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34002]
I0210 16:46:21.995058  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-6: (1.682674ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33982]
I0210 16:46:21.995214  123305 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0210 16:46:21.995909  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-31
I0210 16:46:21.995929  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-31
I0210 16:46:21.996059  123305 factory.go:647] Unable to schedule preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-31: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0210 16:46:21.996106  123305 factory.go:742] Updating pod condition for preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-31 to (PodScheduled==False, Reason=Unschedulable)
I0210 16:46:21.996931  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-7: (1.261663ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33982]
I0210 16:46:21.998571  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-31: (1.106745ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33982]
I0210 16:46:21.998970  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-8: (1.523382ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34008]
I0210 16:46:21.999873  123305 wrap.go:47] PATCH /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events/ppod-31.15820e8406ad7e63: (3.027431ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34006]
I0210 16:46:22.000833  123305 wrap.go:47] PUT /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-31/status: (4.271352ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34002]
I0210 16:46:22.001360  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-9: (1.386196ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34008]
I0210 16:46:22.002557  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-31: (1.198078ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34006]
I0210 16:46:22.002927  123305 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0210 16:46:22.002953  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-10: (1.18747ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34008]
I0210 16:46:22.003126  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-30
I0210 16:46:22.003144  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-30
I0210 16:46:22.003794  123305 factory.go:647] Unable to schedule preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-30: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0210 16:46:22.003882  123305 factory.go:742] Updating pod condition for preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-30 to (PodScheduled==False, Reason=Unschedulable)
I0210 16:46:22.005293  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-30: (1.143092ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33982]
I0210 16:46:22.005559  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-11: (1.441018ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34006]
I0210 16:46:22.008077  123305 wrap.go:47] PATCH /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events/ppod-30.15820e840723d03f: (3.381129ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34012]
I0210 16:46:22.008194  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-12: (2.137848ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34006]
I0210 16:46:22.008431  123305 wrap.go:47] PUT /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-30/status: (3.425127ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34010]
I0210 16:46:22.010127  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-30: (1.284168ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33982]
I0210 16:46:22.010127  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-13: (1.352044ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34012]
I0210 16:46:22.010408  123305 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0210 16:46:22.010566  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-28
I0210 16:46:22.010582  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-28
I0210 16:46:22.010655  123305 factory.go:647] Unable to schedule preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-28: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0210 16:46:22.012916  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-28: (1.33862ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33982]
I0210 16:46:22.015951  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-14: (5.337199ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34012]
I0210 16:46:22.017603  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events: (3.621ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33982]
I0210 16:46:22.018043  123305 factory.go:742] Updating pod condition for preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-28 to (PodScheduled==False, Reason=Unschedulable)
I0210 16:46:22.020526  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-15: (1.653516ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33982]
I0210 16:46:22.022049  123305 wrap.go:47] PUT /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-28/status: (2.958008ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34012]
I0210 16:46:22.022103  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-16: (1.125137ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33982]
I0210 16:46:22.023640  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-17: (1.068481ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34012]
I0210 16:46:22.023670  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-28: (1.074687ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33982]
I0210 16:46:22.023920  123305 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0210 16:46:22.024191  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-24
I0210 16:46:22.024230  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-24
I0210 16:46:22.024693  123305 factory.go:647] Unable to schedule preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-24: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0210 16:46:22.025152  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-18: (1.167895ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33982]
I0210 16:46:22.026230  123305 factory.go:742] Updating pod condition for preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-24 to (PodScheduled==False, Reason=Unschedulable)
I0210 16:46:22.026655  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-24: (1.074369ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33982]
I0210 16:46:22.028829  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-19: (3.101596ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34012]
I0210 16:46:22.030089  123305 wrap.go:47] PUT /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-24/status: (2.897757ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33982]
I0210 16:46:22.030807  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-20: (1.453189ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34012]
I0210 16:46:22.031466  123305 wrap.go:47] PATCH /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events/ppod-24.15820e83f677e0f7: (4.529629ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34014]
I0210 16:46:22.032354  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-24: (1.19547ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33982]
I0210 16:46:22.032577  123305 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0210 16:46:22.032728  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-0
I0210 16:46:22.032738  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-0
I0210 16:46:22.032821  123305 factory.go:647] Unable to schedule preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-0: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0210 16:46:22.032867  123305 factory.go:742] Updating pod condition for preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-0 to (PodScheduled==False, Reason=Unschedulable)
I0210 16:46:22.033784  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-21: (1.353103ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34012]
I0210 16:46:22.036935  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-22: (2.083819ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34012]
I0210 16:46:22.037348  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events: (3.443394ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34016]
I0210 16:46:22.037465  123305 wrap.go:47] PUT /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-0/status: (4.192966ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33982]
I0210 16:46:22.038650  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-23: (1.324494ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34012]
I0210 16:46:22.039126  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-0: (1.304243ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34016]
I0210 16:46:22.039204  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-0: (1.338133ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34014]
I0210 16:46:22.039585  123305 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0210 16:46:22.039812  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-27
I0210 16:46:22.039841  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-27
I0210 16:46:22.039989  123305 factory.go:647] Unable to schedule preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-27: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0210 16:46:22.040038  123305 factory.go:742] Updating pod condition for preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-27 to (PodScheduled==False, Reason=Unschedulable)
I0210 16:46:22.040082  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-24: (1.091904ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34012]
I0210 16:46:22.042335  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-27: (1.855912ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34012]
I0210 16:46:22.043127  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events: (2.302105ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34018]
I0210 16:46:22.043515  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-25: (3.032208ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33982]
I0210 16:46:22.044958  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-26: (1.068845ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34018]
I0210 16:46:22.045191  123305 wrap.go:47] PUT /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-27/status: (4.638583ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34016]
I0210 16:46:22.046795  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-27: (1.450081ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34018]
I0210 16:46:22.047076  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-27: (1.372438ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34016]
I0210 16:46:22.047326  123305 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0210 16:46:22.047465  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-2
I0210 16:46:22.047515  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-2
I0210 16:46:22.047595  123305 factory.go:647] Unable to schedule preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-2: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0210 16:46:22.047637  123305 factory.go:742] Updating pod condition for preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-2 to (PodScheduled==False, Reason=Unschedulable)
I0210 16:46:22.048968  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-28: (1.47368ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34018]
I0210 16:46:22.049813  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events: (1.403146ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34022]
I0210 16:46:22.050610  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-2: (2.36497ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34020]
I0210 16:46:22.051048  123305 wrap.go:47] PUT /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-2/status: (3.187104ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34012]
I0210 16:46:22.052106  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-29: (2.652397ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34018]
I0210 16:46:22.053071  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-2: (1.15223ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34020]
I0210 16:46:22.053441  123305 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0210 16:46:22.054298  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-9
I0210 16:46:22.054341  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-9
I0210 16:46:22.054480  123305 factory.go:647] Unable to schedule preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-9: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0210 16:46:22.054563  123305 factory.go:742] Updating pod condition for preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-9 to (PodScheduled==False, Reason=Unschedulable)
I0210 16:46:22.054997  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-30: (1.856043ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34018]
I0210 16:46:22.056199  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-9: (1.089714ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34022]
I0210 16:46:22.057001  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events: (1.670406ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34018]
I0210 16:46:22.057394  123305 wrap.go:47] PUT /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-9/status: (2.601247ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34020]
I0210 16:46:22.057394  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-31: (1.341642ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34024]
I0210 16:46:22.059096  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-9: (1.250321ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34018]
I0210 16:46:22.059101  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-32: (1.258808ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34022]
I0210 16:46:22.059372  123305 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0210 16:46:22.059563  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-17
I0210 16:46:22.059581  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-17
I0210 16:46:22.059681  123305 factory.go:647] Unable to schedule preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-17: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0210 16:46:22.059735  123305 factory.go:742] Updating pod condition for preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-17 to (PodScheduled==False, Reason=Unschedulable)
I0210 16:46:22.061701  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-17: (1.623186ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34018]
I0210 16:46:22.061708  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-33: (1.651699ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34022]
I0210 16:46:22.062306  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events: (2.061308ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34030]
I0210 16:46:22.063811  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-34: (1.750325ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34018]
I0210 16:46:22.064748  123305 wrap.go:47] PUT /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-17/status: (3.331121ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34028]
I0210 16:46:22.065345  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-35: (1.132535ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34030]
I0210 16:46:22.066469  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-17: (1.300721ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34028]
I0210 16:46:22.066705  123305 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0210 16:46:22.066842  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-18
I0210 16:46:22.066859  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-18
I0210 16:46:22.066946  123305 factory.go:647] Unable to schedule preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-18: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0210 16:46:22.067069  123305 factory.go:742] Updating pod condition for preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-18 to (PodScheduled==False, Reason=Unschedulable)
I0210 16:46:22.067319  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-36: (1.276365ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34030]
I0210 16:46:22.069241  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-18: (1.871009ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34028]
I0210 16:46:22.069256  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-37: (1.531774ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34032]
I0210 16:46:22.070548  123305 wrap.go:47] PUT /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-18/status: (2.504325ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34022]
I0210 16:46:22.071483  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-38: (1.020017ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34032]
I0210 16:46:22.071989  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-18: (1.07721ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34022]
I0210 16:46:22.072251  123305 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0210 16:46:22.072302  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events: (4.73982ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34030]
I0210 16:46:22.072662  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-10
I0210 16:46:22.072684  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-10
I0210 16:46:22.072776  123305 factory.go:647] Unable to schedule preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-10: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0210 16:46:22.072833  123305 factory.go:742] Updating pod condition for preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-10 to (PodScheduled==False, Reason=Unschedulable)
I0210 16:46:22.074225  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-39: (2.360915ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34032]
I0210 16:46:22.075955  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events: (2.410228ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34034]
I0210 16:46:22.076689  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-10: (3.258332ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34028]
I0210 16:46:22.077715  123305 wrap.go:47] PUT /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-10/status: (3.968629ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34022]
I0210 16:46:22.077796  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-40: (3.101422ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34032]
I0210 16:46:22.079524  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-41: (1.282723ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34034]
I0210 16:46:22.079700  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-10: (1.501266ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34028]
I0210 16:46:22.080186  123305 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0210 16:46:22.080381  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-19
I0210 16:46:22.080412  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-19
I0210 16:46:22.080537  123305 factory.go:647] Unable to schedule preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-19: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0210 16:46:22.080584  123305 factory.go:742] Updating pod condition for preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-19 to (PodScheduled==False, Reason=Unschedulable)
I0210 16:46:22.081150  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-42: (1.164507ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34034]
I0210 16:46:22.083145  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-19: (1.911255ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34036]
I0210 16:46:22.084246  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-43: (1.687582ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34034]
I0210 16:46:22.085337  123305 wrap.go:47] PUT /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-19/status: (4.414131ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34028]
I0210 16:46:22.085879  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-44: (1.167916ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34034]
I0210 16:46:22.087451  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-19: (1.560031ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34028]
I0210 16:46:22.087468  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-45: (1.059253ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34034]
I0210 16:46:22.087762  123305 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0210 16:46:22.087963  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-15
I0210 16:46:22.087975  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-15
I0210 16:46:22.088091  123305 factory.go:647] Unable to schedule preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-15: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0210 16:46:22.088132  123305 factory.go:742] Updating pod condition for preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-15 to (PodScheduled==False, Reason=Unschedulable)
I0210 16:46:22.088539  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events: (1.590735ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34036]
I0210 16:46:22.090844  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-15: (2.006128ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34036]
I0210 16:46:22.091855  123305 wrap.go:47] PATCH /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events/ppod-15.15820e83f4f10e9c: (2.426966ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34040]
I0210 16:46:22.091978  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-46: (3.767643ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34034]
I0210 16:46:22.092042  123305 wrap.go:47] PUT /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-15/status: (3.341653ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34028]
I0210 16:46:22.093614  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-47: (1.047874ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34036]
I0210 16:46:22.094709  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-15: (1.85558ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34038]
I0210 16:46:22.095122  123305 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0210 16:46:22.095298  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-3
I0210 16:46:22.095311  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-3
I0210 16:46:22.095313  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-48: (1.329387ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34036]
I0210 16:46:22.095384  123305 factory.go:647] Unable to schedule preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-3: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0210 16:46:22.095436  123305 factory.go:742] Updating pod condition for preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-3 to (PodScheduled==False, Reason=Unschedulable)
I0210 16:46:22.097260  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events: (1.511449ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34038]
I0210 16:46:22.098276  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-49: (1.3188ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34044]
I0210 16:46:22.098315  123305 wrap.go:47] PUT /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-3/status: (2.29199ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34042]
I0210 16:46:22.098685  123305 preemption_test.go:598] Cleaning up all pods...
I0210 16:46:22.098787  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-3: (3.087145ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34036]
I0210 16:46:22.099955  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-3: (1.183319ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34042]
I0210 16:46:22.100161  123305 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0210 16:46:22.100329  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-25
I0210 16:46:22.100347  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-25
I0210 16:46:22.100426  123305 factory.go:647] Unable to schedule preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-25: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0210 16:46:22.100598  123305 factory.go:742] Updating pod condition for preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-25 to (PodScheduled==False, Reason=Unschedulable)
I0210 16:46:22.103073  123305 wrap.go:47] DELETE /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-0: (4.163216ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34038]
I0210 16:46:22.104328  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events: (2.471071ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34046]
I0210 16:46:22.104456  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-25: (1.536747ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34036]
I0210 16:46:22.106738  123305 wrap.go:47] PUT /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-25/status: (2.098127ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34042]
I0210 16:46:22.109209  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-25: (1.477783ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34042]
I0210 16:46:22.109826  123305 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0210 16:46:22.109962  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-7
I0210 16:46:22.109987  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-7
I0210 16:46:22.110081  123305 factory.go:647] Unable to schedule preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-7: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0210 16:46:22.110128  123305 factory.go:742] Updating pod condition for preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-7 to (PodScheduled==False, Reason=Unschedulable)
I0210 16:46:22.113476  123305 wrap.go:47] PATCH /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events/ppod-7.15820e83f345cfcd: (2.197721ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34048]
I0210 16:46:22.113741  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-7: (3.297568ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34046]
I0210 16:46:22.113750  123305 wrap.go:47] PUT /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-7/status: (3.371799ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34042]
I0210 16:46:22.114835  123305 wrap.go:47] DELETE /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-1: (9.441025ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34036]
I0210 16:46:22.116811  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-7: (2.349511ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34042]
I0210 16:46:22.117123  123305 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0210 16:46:22.117257  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-22
I0210 16:46:22.117273  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-22
I0210 16:46:22.117692  123305 factory.go:647] Unable to schedule preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-22: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0210 16:46:22.117763  123305 factory.go:742] Updating pod condition for preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-22 to (PodScheduled==False, Reason=Unschedulable)
I0210 16:46:22.119610  123305 wrap.go:47] DELETE /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-2: (4.147987ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34046]
I0210 16:46:22.119760  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-22: (1.700751ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34042]
I0210 16:46:22.121362  123305 wrap.go:47] PUT /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-22/status: (2.892754ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34048]
I0210 16:46:22.127701  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-22: (2.011418ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34048]
I0210 16:46:22.127974  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events: (8.093071ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34046]
I0210 16:46:22.128104  123305 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0210 16:46:22.128260  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-21
I0210 16:46:22.128271  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-21
I0210 16:46:22.128366  123305 factory.go:647] Unable to schedule preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-21: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0210 16:46:22.128416  123305 factory.go:742] Updating pod condition for preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-21 to (PodScheduled==False, Reason=Unschedulable)
I0210 16:46:22.132620  123305 wrap.go:47] DELETE /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-3: (12.520568ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34042]
I0210 16:46:22.134559  123305 wrap.go:47] PUT /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-21/status: (4.472591ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34048]
I0210 16:46:22.138406  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-21: (8.447674ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34050]
I0210 16:46:22.146530  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-21: (6.245296ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34048]
I0210 16:46:22.146897  123305 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0210 16:46:22.147110  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-23
I0210 16:46:22.147123  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-23
I0210 16:46:22.147257  123305 factory.go:647] Unable to schedule preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-23: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0210 16:46:22.147314  123305 factory.go:742] Updating pod condition for preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-23 to (PodScheduled==False, Reason=Unschedulable)
I0210 16:46:22.148317  123305 wrap.go:47] PATCH /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events/ppod-21.15820e83f608b8f7: (17.413323ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34052]
I0210 16:46:22.149773  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-23: (1.114294ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34050]
I0210 16:46:22.150470  123305 wrap.go:47] PUT /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-23/status: (2.839319ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34048]
I0210 16:46:22.152206  123305 wrap.go:47] DELETE /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-4: (19.321999ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34042]
I0210 16:46:22.153924  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events: (4.797796ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34052]
I0210 16:46:22.156019  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-23: (5.198573ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34048]
I0210 16:46:22.156378  123305 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0210 16:46:22.156554  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-12
I0210 16:46:22.156570  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-12
I0210 16:46:22.156663  123305 factory.go:647] Unable to schedule preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-12: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0210 16:46:22.156712  123305 factory.go:742] Updating pod condition for preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-12 to (PodScheduled==False, Reason=Unschedulable)
I0210 16:46:22.159644  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-12: (1.984133ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34050]
I0210 16:46:22.163314  123305 wrap.go:47] PUT /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-12/status: (6.351711ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34052]
I0210 16:46:22.163742  123305 wrap.go:47] DELETE /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-5: (9.714854ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34042]
I0210 16:46:22.163785  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events: (5.301594ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34100]
I0210 16:46:22.167028  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-12: (1.880066ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34050]
I0210 16:46:22.167478  123305 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0210 16:46:22.167824  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-8
I0210 16:46:22.167837  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-8
I0210 16:46:22.167927  123305 factory.go:647] Unable to schedule preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-8: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0210 16:46:22.167986  123305 factory.go:742] Updating pod condition for preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-8 to (PodScheduled==False, Reason=Unschedulable)
I0210 16:46:22.171027  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-8: (1.884687ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34112]
I0210 16:46:22.171243  123305 wrap.go:47] PATCH /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events/ppod-8.15820e83f3b6d177: (2.19296ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34116]
I0210 16:46:22.172014  123305 wrap.go:47] DELETE /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-6: (7.144893ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34100]
I0210 16:46:22.178153  123305 wrap.go:47] DELETE /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-7: (5.267639ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34116]
I0210 16:46:22.182305  123305 wrap.go:47] PUT /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-8/status: (14.062057ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34050]
I0210 16:46:22.184010  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-8: (1.262111ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34050]
I0210 16:46:22.184255  123305 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0210 16:46:22.184375  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-11
I0210 16:46:22.184383  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-11
I0210 16:46:22.184468  123305 factory.go:647] Unable to schedule preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-11: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0210 16:46:22.184537  123305 factory.go:742] Updating pod condition for preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-11 to (PodScheduled==False, Reason=Unschedulable)
I0210 16:46:22.188394  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-11: (2.777228ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34126]
I0210 16:46:22.189839  123305 wrap.go:47] PUT /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-11/status: (4.869003ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34050]
I0210 16:46:22.191236  123305 wrap.go:47] PATCH /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events/ppod-11.15820e83f40bcfa9: (6.109715ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34112]
I0210 16:46:22.193222  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-11: (2.375876ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34050]
I0210 16:46:22.194471  123305 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0210 16:46:22.195304  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-20
I0210 16:46:22.195323  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-20
I0210 16:46:22.195424  123305 factory.go:647] Unable to schedule preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-20: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0210 16:46:22.195476  123305 factory.go:742] Updating pod condition for preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-20 to (PodScheduled==False, Reason=Unschedulable)
I0210 16:46:22.198338  123305 wrap.go:47] PUT /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-20/status: (2.514482ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34050]
I0210 16:46:22.199894  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-20: (3.076671ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34112]
I0210 16:46:22.201177  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-20: (1.515512ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34050]
I0210 16:46:22.201244  123305 wrap.go:47] PATCH /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events/ppod-20.15820e83f597953a: (3.465673ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34138]
I0210 16:46:22.201463  123305 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0210 16:46:22.201644  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-14
I0210 16:46:22.201699  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-14
I0210 16:46:22.201839  123305 factory.go:647] Unable to schedule preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-14: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0210 16:46:22.201905  123305 factory.go:742] Updating pod condition for preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-14 to (PodScheduled==False, Reason=Unschedulable)
I0210 16:46:22.202084  123305 wrap.go:47] DELETE /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-8: (7.259086ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34116]
I0210 16:46:22.203707  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-14: (1.555132ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34050]
I0210 16:46:22.204103  123305 wrap.go:47] PUT /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-14/status: (1.891557ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34112]
I0210 16:46:22.204252  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events: (1.692876ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34116]
I0210 16:46:22.206202  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-14: (1.035585ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34050]
I0210 16:46:22.206537  123305 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0210 16:46:22.206694  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-16
I0210 16:46:22.206709  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-16
I0210 16:46:22.206768  123305 factory.go:647] Unable to schedule preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-16: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0210 16:46:22.206805  123305 factory.go:742] Updating pod condition for preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-16 to (PodScheduled==False, Reason=Unschedulable)
I0210 16:46:22.208385  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-16: (1.261338ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34144]
I0210 16:46:22.210711  123305 wrap.go:47] PUT /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-16/status: (3.679309ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34116]
I0210 16:46:22.211334  123305 wrap.go:47] DELETE /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-9: (8.729456ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34142]
I0210 16:46:22.212876  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-16: (1.405283ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34116]
I0210 16:46:22.213830  123305 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0210 16:46:22.215248  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-16
I0210 16:46:22.215269  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-16
I0210 16:46:22.215394  123305 factory.go:647] Unable to schedule preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-16: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0210 16:46:22.215445  123305 factory.go:742] Updating pod condition for preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-16 to (PodScheduled==False, Reason=Unschedulable)
I0210 16:46:22.215448  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events: (7.940697ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34146]
I0210 16:46:22.217856  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-16: (2.128863ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34144]
I0210 16:46:22.218023  123305 wrap.go:47] PUT /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-16/status: (1.904375ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34116]
I0210 16:46:22.218912  123305 wrap.go:47] DELETE /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-10: (7.186064ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34142]
I0210 16:46:22.219411  123305 wrap.go:47] PATCH /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events/ppod-16.15820e84143b9dd7: (3.194873ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34146]
I0210 16:46:22.221223  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-16: (2.116503ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34116]
I0210 16:46:22.221904  123305 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0210 16:46:22.222047  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-28
I0210 16:46:22.222066  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-28
I0210 16:46:22.222154  123305 factory.go:647] Unable to schedule preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-28: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0210 16:46:22.222215  123305 factory.go:742] Updating pod condition for preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-28 to (PodScheduled==False, Reason=Unschedulable)
I0210 16:46:22.225731  123305 wrap.go:47] PUT /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-28/status: (3.040083ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34146]
I0210 16:46:22.228534  123305 wrap.go:47] PATCH /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events/ppod-28.15820e84088b3e38: (4.989799ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34152]
I0210 16:46:22.229285  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-28: (1.728166ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34146]
I0210 16:46:22.229619  123305 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0210 16:46:22.229786  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-18
I0210 16:46:22.229804  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-18
I0210 16:46:22.229895  123305 factory.go:647] Unable to schedule preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-18: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0210 16:46:22.229957  123305 factory.go:742] Updating pod condition for preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-18 to (PodScheduled==False, Reason=Unschedulable)
I0210 16:46:22.232573  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-18: (1.609837ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34162]
I0210 16:46:22.236012  123305 wrap.go:47] PATCH /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events/ppod-18.15820e840be6e8e1: (5.41748ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34152]
I0210 16:46:22.238417  123305 wrap.go:47] PUT /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-18/status: (8.183079ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34146]
I0210 16:46:22.238551  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-28: (15.887057ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34144]
I0210 16:46:22.238670  123305 wrap.go:47] DELETE /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-11: (19.115132ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34142]
I0210 16:46:22.245085  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-18: (5.982318ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34152]
I0210 16:46:22.249110  123305 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0210 16:46:22.249426  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-22
I0210 16:46:22.249447  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-22
I0210 16:46:22.249577  123305 factory.go:647] Unable to schedule preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-22: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0210 16:46:22.249640  123305 factory.go:742] Updating pod condition for preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-22 to (PodScheduled==False, Reason=Unschedulable)
I0210 16:46:22.251685  123305 wrap.go:47] DELETE /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-12: (12.590306ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34142]
I0210 16:46:22.252299  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-22: (2.169451ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34162]
I0210 16:46:22.256334  123305 wrap.go:47] PATCH /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events/ppod-22.15820e840eecd2a9: (5.535967ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34182]
I0210 16:46:22.259798  123305 wrap.go:47] DELETE /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-13: (6.137337ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34142]
I0210 16:46:22.267520  123305 wrap.go:47] PUT /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-22/status: (5.520458ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34152]
I0210 16:46:22.270964  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-22: (2.739914ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34152]
I0210 16:46:22.271316  123305 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0210 16:46:22.271638  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-25
I0210 16:46:22.271660  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-25
I0210 16:46:22.271809  123305 factory.go:647] Unable to schedule preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-25: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0210 16:46:22.271876  123305 factory.go:742] Updating pod condition for preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-25 to (PodScheduled==False, Reason=Unschedulable)
I0210 16:46:22.275255  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-25: (2.988946ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34162]
I0210 16:46:22.277672  123305 wrap.go:47] DELETE /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-14: (16.712494ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34182]
I0210 16:46:22.278731  123305 wrap.go:47] PUT /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-25/status: (6.472995ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34152]
I0210 16:46:22.278859  123305 wrap.go:47] PATCH /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events/ppod-25.15820e840de50584: (4.537679ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34188]
I0210 16:46:22.280711  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-25: (1.240658ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34188]
I0210 16:46:22.281049  123305 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0210 16:46:22.281450  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-17
I0210 16:46:22.281597  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-17
I0210 16:46:22.281753  123305 factory.go:647] Unable to schedule preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-17: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0210 16:46:22.281938  123305 factory.go:742] Updating pod condition for preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-17 to (PodScheduled==False, Reason=Unschedulable)
I0210 16:46:22.283658  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-17: (1.385152ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34188]
I0210 16:46:22.286183  123305 wrap.go:47] PATCH /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events/ppod-17.15820e840b778068: (3.400772ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34190]
I0210 16:46:22.288399  123305 wrap.go:47] DELETE /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-15: (9.689269ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34182]
I0210 16:46:22.291870  123305 wrap.go:47] PUT /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-17/status: (9.570113ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34162]
I0210 16:46:22.298345  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-17: (3.261245ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34162]
I0210 16:46:22.299228  123305 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0210 16:46:22.299449  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-23
I0210 16:46:22.299512  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-23
I0210 16:46:22.299639  123305 factory.go:647] Unable to schedule preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-23: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0210 16:46:22.299713  123305 factory.go:742] Updating pod condition for preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-23 to (PodScheduled==False, Reason=Unschedulable)
I0210 16:46:22.300774  123305 wrap.go:47] DELETE /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-16: (11.960986ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34190]
I0210 16:46:22.301847  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-23: (1.692649ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34188]
I0210 16:46:22.302885  123305 wrap.go:47] PUT /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-23/status: (2.726151ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34162]
I0210 16:46:22.304423  123305 wrap.go:47] PATCH /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events/ppod-23.15820e8410afa8fa: (3.03613ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34192]
I0210 16:46:22.309213  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-23: (5.928661ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34162]
I0210 16:46:22.309668  123305 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0210 16:46:22.309921  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-21
I0210 16:46:22.309969  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-21
I0210 16:46:22.310152  123305 factory.go:647] Unable to schedule preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-21: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0210 16:46:22.310232  123305 factory.go:742] Updating pod condition for preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-21 to (PodScheduled==False, Reason=Unschedulable)
I0210 16:46:22.311819  123305 wrap.go:47] DELETE /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-17: (10.162199ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34190]
I0210 16:46:22.313041  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-21: (2.187972ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34192]
I0210 16:46:22.316762  123305 wrap.go:47] DELETE /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-18: (4.27756ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34190]
I0210 16:46:22.319001  123305 wrap.go:47] PUT /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-21/status: (8.068807ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34188]
I0210 16:46:22.323356  123305 wrap.go:47] PATCH /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events/ppod-21.15820e83f608b8f7: (12.062485ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34194]
I0210 16:46:22.325365  123305 wrap.go:47] DELETE /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-19: (7.787267ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34190]
I0210 16:46:22.325669  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-21: (5.140958ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34188]
I0210 16:46:22.325986  123305 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0210 16:46:22.326667  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-27
I0210 16:46:22.326741  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-27
I0210 16:46:22.326929  123305 factory.go:647] Unable to schedule preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-27: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0210 16:46:22.327004  123305 factory.go:742] Updating pod condition for preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-27 to (PodScheduled==False, Reason=Unschedulable)
I0210 16:46:22.328735  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-27: (1.422924ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34192]
I0210 16:46:22.331153  123305 wrap.go:47] PUT /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-27/status: (3.827208ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34188]
I0210 16:46:22.335966  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-27: (4.191618ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34188]
I0210 16:46:22.336072  123305 wrap.go:47] PATCH /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events/ppod-27.15820e840a4aef11: (7.664825ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34196]
I0210 16:46:22.336459  123305 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0210 16:46:22.336614  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-20
I0210 16:46:22.336626  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-20
I0210 16:46:22.336703  123305 factory.go:647] Unable to schedule preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-20: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0210 16:46:22.336753  123305 factory.go:742] Updating pod condition for preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-20 to (PodScheduled==False, Reason=Unschedulable)
I0210 16:46:22.337018  123305 wrap.go:47] DELETE /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-20: (11.107541ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34194]
I0210 16:46:22.338428  123305 wrap.go:47] PUT /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-20/status: (1.233451ms) 409 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34196]
I0210 16:46:22.338613  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-20: (1.401174ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34192]
W0210 16:46:22.339029  123305 factory.go:696] A pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-20 no longer exists
I0210 16:46:22.340379  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-20: (1.590944ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34196]
E0210 16:46:22.340751  123305 scheduler.go:294] Error getting the updated preemptor pod object: pods "ppod-20" not found
I0210 16:46:22.341090  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-21
I0210 16:46:22.341125  123305 scheduler.go:449] Skip schedule deleting pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-21
I0210 16:46:22.342522  123305 wrap.go:47] PATCH /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events/ppod-20.15820e83f597953a: (4.07735ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34198]
I0210 16:46:22.345312  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events: (1.754691ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34196]
I0210 16:46:22.345720  123305 wrap.go:47] DELETE /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-21: (8.335786ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34194]
I0210 16:46:22.350971  123305 wrap.go:47] DELETE /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-22: (4.798405ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34196]
I0210 16:46:22.357391  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-23
I0210 16:46:22.357439  123305 scheduler.go:449] Skip schedule deleting pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-23
I0210 16:46:22.359204  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events: (1.439645ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34192]
I0210 16:46:22.359834  123305 wrap.go:47] DELETE /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-23: (8.292217ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34196]
I0210 16:46:22.364395  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-24
I0210 16:46:22.364438  123305 scheduler.go:449] Skip schedule deleting pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-24
I0210 16:46:22.365771  123305 wrap.go:47] DELETE /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-24: (5.47482ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34196]
I0210 16:46:22.367274  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events: (1.997921ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34192]
I0210 16:46:22.369457  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-25
I0210 16:46:22.369517  123305 scheduler.go:449] Skip schedule deleting pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-25
I0210 16:46:22.371544  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events: (1.703629ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34192]
I0210 16:46:22.371972  123305 wrap.go:47] DELETE /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-25: (5.843943ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34196]
I0210 16:46:22.378473  123305 wrap.go:47] DELETE /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-26: (6.007023ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34196]
I0210 16:46:22.383102  123305 wrap.go:47] DELETE /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-27: (4.28724ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34196]
I0210 16:46:22.391629  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-26
I0210 16:46:22.391762  123305 scheduler.go:449] Skip schedule deleting pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-26
I0210 16:46:22.391861  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-27
I0210 16:46:22.391917  123305 scheduler.go:449] Skip schedule deleting pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-27
I0210 16:46:22.391973  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-28
I0210 16:46:22.392037  123305 scheduler.go:449] Skip schedule deleting pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-28
I0210 16:46:22.395451  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events: (3.050516ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34192]
I0210 16:46:22.397445  123305 wrap.go:47] DELETE /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-28: (10.755547ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34196]
I0210 16:46:22.397774  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events: (1.852134ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34192]
I0210 16:46:22.401404  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events: (2.400231ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34192]
I0210 16:46:22.402826  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-29
I0210 16:46:22.402914  123305 scheduler.go:449] Skip schedule deleting pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-29
I0210 16:46:22.404661  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events: (1.431966ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34192]
I0210 16:46:22.405312  123305 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0210 16:46:22.405446  123305 wrap.go:47] DELETE /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-29: (6.455383ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34196]
I0210 16:46:22.406437  123305 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0210 16:46:22.408700  123305 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0210 16:46:22.409841  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-30
I0210 16:46:22.409892  123305 scheduler.go:449] Skip schedule deleting pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-30
I0210 16:46:22.411024  123305 wrap.go:47] DELETE /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-30: (5.101464ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34196]
I0210 16:46:22.411587  123305 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0210 16:46:22.411911  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events: (1.707712ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34192]
I0210 16:46:22.412012  123305 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0210 16:46:22.414749  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-31
I0210 16:46:22.414786  123305 scheduler.go:449] Skip schedule deleting pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-31
I0210 16:46:22.416779  123305 wrap.go:47] DELETE /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-31: (5.041164ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34196]
I0210 16:46:22.417957  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events: (1.667821ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34192]
I0210 16:46:22.419721  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-32
I0210 16:46:22.419780  123305 scheduler.go:449] Skip schedule deleting pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-32
I0210 16:46:22.421297  123305 wrap.go:47] DELETE /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-32: (4.154885ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34196]
I0210 16:46:22.421793  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events: (1.6517ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34192]
I0210 16:46:22.424406  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-33
I0210 16:46:22.424446  123305 scheduler.go:449] Skip schedule deleting pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-33
I0210 16:46:22.426269  123305 wrap.go:47] DELETE /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-33: (4.640545ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34196]
I0210 16:46:22.426284  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events: (1.562568ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34192]
I0210 16:46:22.429324  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-34
I0210 16:46:22.429372  123305 scheduler.go:449] Skip schedule deleting pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-34
I0210 16:46:22.431398  123305 wrap.go:47] DELETE /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-34: (4.730492ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34192]
I0210 16:46:22.431800  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events: (1.446479ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34196]
I0210 16:46:22.436532  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-35
I0210 16:46:22.436571  123305 scheduler.go:449] Skip schedule deleting pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-35
I0210 16:46:22.438053  123305 wrap.go:47] DELETE /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-35: (5.822121ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34192]
I0210 16:46:22.438838  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events: (1.96532ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34196]
I0210 16:46:22.441933  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-36
I0210 16:46:22.442027  123305 scheduler.go:449] Skip schedule deleting pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-36
I0210 16:46:22.443645  123305 wrap.go:47] DELETE /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-36: (5.091817ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34192]
I0210 16:46:22.445333  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events: (3.048547ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34196]
I0210 16:46:22.447224  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-37
I0210 16:46:22.447331  123305 scheduler.go:449] Skip schedule deleting pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-37
I0210 16:46:22.448827  123305 wrap.go:47] DELETE /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-37: (4.845243ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34192]
I0210 16:46:22.449901  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events: (2.167691ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34196]
I0210 16:46:22.453254  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-38
I0210 16:46:22.453302  123305 scheduler.go:449] Skip schedule deleting pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-38
I0210 16:46:22.455100  123305 wrap.go:47] DELETE /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-38: (5.444128ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34192]
I0210 16:46:22.457430  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events: (3.659524ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34196]
I0210 16:46:22.458555  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-39
I0210 16:46:22.458598  123305 scheduler.go:449] Skip schedule deleting pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-39
I0210 16:46:22.459883  123305 wrap.go:47] DELETE /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-39: (4.235532ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34192]
I0210 16:46:22.462695  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events: (3.8537ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34196]
I0210 16:46:22.465944  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-40
I0210 16:46:22.466006  123305 scheduler.go:449] Skip schedule deleting pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-40
I0210 16:46:22.466578  123305 wrap.go:47] DELETE /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-40: (6.130164ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34192]
I0210 16:46:22.468043  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events: (1.715073ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34196]
I0210 16:46:22.470235  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-41
I0210 16:46:22.470426  123305 scheduler.go:449] Skip schedule deleting pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-41
I0210 16:46:22.471833  123305 wrap.go:47] DELETE /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-41: (4.913422ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34192]
I0210 16:46:22.472799  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events: (1.46316ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34196]
I0210 16:46:22.476455  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-42
I0210 16:46:22.476549  123305 scheduler.go:449] Skip schedule deleting pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-42
I0210 16:46:22.477691  123305 wrap.go:47] DELETE /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-42: (5.514638ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34192]
I0210 16:46:22.478959  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events: (1.900839ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34196]
I0210 16:46:22.481096  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-43
I0210 16:46:22.481129  123305 scheduler.go:449] Skip schedule deleting pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-43
I0210 16:46:22.482442  123305 wrap.go:47] DELETE /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-43: (4.040437ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34192]
I0210 16:46:22.482693  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events: (1.344847ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34196]
I0210 16:46:22.486012  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-44
I0210 16:46:22.486055  123305 scheduler.go:449] Skip schedule deleting pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-44
I0210 16:46:22.487385  123305 wrap.go:47] DELETE /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-44: (4.594848ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34192]
I0210 16:46:22.487911  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events: (1.437063ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34196]
I0210 16:46:22.490310  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-45
I0210 16:46:22.490349  123305 scheduler.go:449] Skip schedule deleting pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-45
I0210 16:46:22.491908  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events: (1.269801ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34196]
I0210 16:46:22.492381  123305 wrap.go:47] DELETE /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-45: (4.608518ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34192]
I0210 16:46:22.498750  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-46
I0210 16:46:22.498788  123305 scheduler.go:449] Skip schedule deleting pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-46
I0210 16:46:22.500330  123305 wrap.go:47] DELETE /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-46: (7.540698ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34192]
I0210 16:46:22.500999  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events: (1.894673ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34196]
I0210 16:46:22.504264  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-47
I0210 16:46:22.504354  123305 scheduler.go:449] Skip schedule deleting pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-47
I0210 16:46:22.505326  123305 wrap.go:47] DELETE /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-47: (4.397971ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34192]
I0210 16:46:22.507529  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events: (2.827826ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34196]
I0210 16:46:22.508982  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-48
I0210 16:46:22.509027  123305 scheduler.go:449] Skip schedule deleting pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-48
I0210 16:46:22.510677  123305 wrap.go:47] DELETE /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-48: (5.016711ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34192]
I0210 16:46:22.510833  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events: (1.543583ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34196]
I0210 16:46:22.514834  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-49
I0210 16:46:22.514889  123305 scheduler.go:449] Skip schedule deleting pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-49
I0210 16:46:22.516787  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events: (1.530982ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34196]
I0210 16:46:22.517091  123305 wrap.go:47] DELETE /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-49: (6.136721ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34192]
I0210 16:46:22.521738  123305 wrap.go:47] DELETE /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/rpod-0: (4.203287ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34192]
I0210 16:46:22.523360  123305 wrap.go:47] DELETE /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/rpod-1: (1.200273ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34192]
I0210 16:46:22.528324  123305 wrap.go:47] DELETE /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/preemptor-pod: (4.553181ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34192]
I0210 16:46:22.531157  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-0: (1.175743ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34192]
I0210 16:46:22.533768  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-1: (946.167µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34192]
I0210 16:46:22.536358  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-2: (1.004083ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34192]
I0210 16:46:22.539218  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-3: (1.258311ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34192]
I0210 16:46:22.542009  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-4: (1.187515ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34192]
I0210 16:46:22.545304  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-5: (1.602567ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34192]
I0210 16:46:22.562431  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-6: (15.372408ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34192]
I0210 16:46:22.579701  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-7: (1.524703ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34192]
I0210 16:46:22.583929  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-8: (1.979634ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34192]
I0210 16:46:22.624043  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-9: (38.609857ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34192]
I0210 16:46:22.628262  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-10: (1.897837ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34192]
I0210 16:46:22.631293  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-11: (1.315191ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34192]
I0210 16:46:22.637822  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-12: (1.499485ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34192]
I0210 16:46:22.640688  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-13: (1.228231ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34192]
I0210 16:46:22.643755  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-14: (1.3321ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34192]
I0210 16:46:22.647004  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-15: (1.276607ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34192]
I0210 16:46:22.651349  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-16: (2.523167ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34192]
I0210 16:46:22.655786  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-17: (1.15372ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34192]
I0210 16:46:22.659799  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-18: (2.358024ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34192]
I0210 16:46:22.663403  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-19: (984.417µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34192]
I0210 16:46:22.666349  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-20: (1.268487ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34192]
I0210 16:46:22.669181  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-21: (1.066755ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34192]
I0210 16:46:22.671900  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-22: (1.213011ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34192]
I0210 16:46:22.675114  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-23: (1.263232ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34192]
I0210 16:46:22.677784  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-24: (1.018882ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34192]
I0210 16:46:22.681928  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-25: (1.308168ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34192]
I0210 16:46:22.684799  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-26: (1.172988ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34192]
I0210 16:46:22.687539  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-27: (995.78µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34192]
I0210 16:46:22.690153  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-28: (1.076926ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34192]
I0210 16:46:22.692732  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-29: (1.02648ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34192]
I0210 16:46:22.695587  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-30: (996.522µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34192]
I0210 16:46:22.698141  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-31: (971.056µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34192]
I0210 16:46:22.700634  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-32: (881.755µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34192]
I0210 16:46:22.703191  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-33: (915.714µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34192]
I0210 16:46:22.705873  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-34: (1.011334ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34192]
I0210 16:46:22.708555  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-35: (911.946µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34192]
I0210 16:46:22.711104  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-36: (1.015934ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34192]
I0210 16:46:22.713988  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-37: (1.284293ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34192]
I0210 16:46:22.716444  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-38: (882.503µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34192]
I0210 16:46:22.719036  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-39: (932.309µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34192]
I0210 16:46:22.721550  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-40: (900.069µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34192]
I0210 16:46:22.724002  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-41: (869.018µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34192]
I0210 16:46:22.726833  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-42: (1.164109ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34192]
I0210 16:46:22.729335  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-43: (956.188µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34192]
I0210 16:46:22.732922  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-44: (1.031273ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34192]
I0210 16:46:22.735599  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-45: (937.826µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34192]
I0210 16:46:22.738008  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-46: (772.171µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34192]
I0210 16:46:22.740314  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-47: (842.862µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34192]
I0210 16:46:22.742917  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-48: (964.467µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34192]
I0210 16:46:22.745416  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-49: (963.163µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34192]
I0210 16:46:22.747976  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/rpod-0: (914.94µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34192]
I0210 16:46:22.750541  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/rpod-1: (934.291µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34192]
I0210 16:46:22.753122  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/preemptor-pod: (947.662µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34192]
I0210 16:46:22.757543  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods: (3.684698ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34192]
I0210 16:46:22.757806  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/rpod-0
I0210 16:46:22.757846  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/rpod-0
I0210 16:46:22.757995  123305 scheduler_binder.go:269] AssumePodVolumes for pod "preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/rpod-0", node "node1"
I0210 16:46:22.758014  123305 scheduler_binder.go:279] AssumePodVolumes for pod "preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/rpod-0", node "node1": all PVCs bound and nothing to do
I0210 16:46:22.758061  123305 factory.go:733] Attempting to bind rpod-0 to node1
I0210 16:46:22.760135  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/rpod-0/binding: (1.749131ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34196]
I0210 16:46:22.760350  123305 scheduler.go:571] pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/rpod-0 is bound successfully on node node1, 1 nodes evaluated, 1 nodes were found feasible
I0210 16:46:22.760638  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods: (2.548977ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34192]
I0210 16:46:22.760928  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/rpod-1
I0210 16:46:22.760949  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/rpod-1
I0210 16:46:22.761044  123305 scheduler_binder.go:269] AssumePodVolumes for pod "preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/rpod-1", node "node1"
I0210 16:46:22.761062  123305 scheduler_binder.go:279] AssumePodVolumes for pod "preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/rpod-1", node "node1": all PVCs bound and nothing to do
I0210 16:46:22.761099  123305 factory.go:733] Attempting to bind rpod-1 to node1
I0210 16:46:22.763341  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events: (2.599054ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34196]
I0210 16:46:22.764255  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/rpod-1/binding: (2.920813ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34192]
I0210 16:46:22.764485  123305 scheduler.go:571] pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/rpod-1 is bound successfully on node node1, 1 nodes evaluated, 1 nodes were found feasible
I0210 16:46:22.766182  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events: (1.440638ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34192]
I0210 16:46:22.863397  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/rpod-0: (1.854704ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34192]
I0210 16:46:22.966058  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/rpod-1: (1.765836ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34192]
I0210 16:46:22.966406  123305 preemption_test.go:561] Creating the preemptor pod...
I0210 16:46:22.968613  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods: (1.909002ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34192]
I0210 16:46:22.968757  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/preemptor-pod
I0210 16:46:22.968777  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/preemptor-pod
I0210 16:46:22.968849  123305 preemption_test.go:567] Creating additional pods...
I0210 16:46:22.968914  123305 factory.go:647] Unable to schedule preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0210 16:46:22.968962  123305 factory.go:742] Updating pod condition for preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0210 16:46:22.971207  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods: (2.007175ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34192]
I0210 16:46:22.971352  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events: (1.67576ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34204]
I0210 16:46:22.971750  123305 wrap.go:47] PUT /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/preemptor-pod/status: (2.205908ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34196]
I0210 16:46:22.974070  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/preemptor-pod: (1.361716ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34204]
I0210 16:46:22.974372  123305 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0210 16:46:22.974711  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/preemptor-pod: (1.449356ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34192]
I0210 16:46:22.974859  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods: (2.164707ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34196]
I0210 16:46:22.976877  123305 wrap.go:47] PUT /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/preemptor-pod/status: (2.005451ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34204]
I0210 16:46:22.976945  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods: (1.6517ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34196]
I0210 16:46:22.978817  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods: (1.359131ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34192]
I0210 16:46:22.981324  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods: (1.451077ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34192]
I0210 16:46:22.981534  123305 wrap.go:47] DELETE /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/rpod-1: (4.134218ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34202]
I0210 16:46:22.981751  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/preemptor-pod
I0210 16:46:22.981771  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/preemptor-pod
I0210 16:46:22.981940  123305 factory.go:647] Unable to schedule preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0210 16:46:22.981992  123305 factory.go:742] Updating pod condition for preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0210 16:46:22.983255  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events: (1.294685ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34202]
I0210 16:46:22.984080  123305 wrap.go:47] PUT /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/preemptor-pod/status: (1.595126ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34192]
I0210 16:46:22.986110  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/preemptor-pod: (1.751314ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34206]
I0210 16:46:22.986182  123305 wrap.go:47] PATCH /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events/preemptor-pod.15820e8441a930fc: (2.32148ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34202]
I0210 16:46:22.986466  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods: (2.043277ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34192]
I0210 16:46:22.987043  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/preemptor-pod: (1.82188ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34208]
I0210 16:46:22.987440  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/preemptor-pod
I0210 16:46:22.987460  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/preemptor-pod
I0210 16:46:22.987628  123305 scheduler_binder.go:269] AssumePodVolumes for pod "preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/preemptor-pod", node "node1"
I0210 16:46:22.987678  123305 scheduler_binder.go:279] AssumePodVolumes for pod "preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/preemptor-pod", node "node1": all PVCs bound and nothing to do
I0210 16:46:22.987756  123305 factory.go:733] Attempting to bind preemptor-pod to node1
I0210 16:46:22.987796  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-5
I0210 16:46:22.987817  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-5
I0210 16:46:22.988000  123305 factory.go:647] Unable to schedule preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-5: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0210 16:46:22.988059  123305 factory.go:742] Updating pod condition for preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-5 to (PodScheduled==False, Reason=Unschedulable)
I0210 16:46:22.988462  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods: (1.581467ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34202]
I0210 16:46:22.989518  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/preemptor-pod/binding: (1.418318ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34208]
I0210 16:46:22.989672  123305 scheduler.go:571] pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/preemptor-pod is bound successfully on node node1, 1 nodes evaluated, 1 nodes were found feasible
I0210 16:46:22.990727  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events: (1.988313ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34210]
I0210 16:46:22.991085  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-5: (1.666613ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34212]
I0210 16:46:22.992870  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events: (1.75007ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34210]
I0210 16:46:22.993223  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods: (4.298808ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34202]
I0210 16:46:22.993241  123305 wrap.go:47] PUT /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-5/status: (4.609071ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34206]
I0210 16:46:22.994906  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-5: (1.140613ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34208]
I0210 16:46:22.995179  123305 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0210 16:46:22.995367  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-1
I0210 16:46:22.995391  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-1
I0210 16:46:22.995413  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods: (1.708833ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34210]
I0210 16:46:22.995538  123305 factory.go:647] Unable to schedule preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-1: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0210 16:46:22.995590  123305 factory.go:742] Updating pod condition for preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-1 to (PodScheduled==False, Reason=Unschedulable)
I0210 16:46:22.997456  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-1: (1.473004ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34212]
I0210 16:46:22.998452  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events: (2.29615ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34216]
I0210 16:46:22.998748  123305 wrap.go:47] PUT /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-1/status: (2.76348ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34208]
I0210 16:46:22.999380  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods: (1.99416ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34214]
I0210 16:46:23.000177  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-1: (965.178µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34216]
I0210 16:46:23.000398  123305 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0210 16:46:23.000544  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-5
I0210 16:46:23.000562  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-5
I0210 16:46:23.000652  123305 factory.go:647] Unable to schedule preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-5: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0210 16:46:23.000700  123305 factory.go:742] Updating pod condition for preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-5 to (PodScheduled==False, Reason=Unschedulable)
I0210 16:46:23.001669  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods: (1.54668ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34214]
I0210 16:46:23.002743  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-5: (1.797877ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34212]
I0210 16:46:23.003811  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods: (1.620841ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34214]
I0210 16:46:23.004281  123305 wrap.go:47] PATCH /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events/ppod-5.15820e8442cc9020: (2.224534ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34218]
I0210 16:46:23.004902  123305 wrap.go:47] PUT /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-5/status: (3.936363ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34216]
I0210 16:46:23.005977  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods: (1.698494ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34214]
I0210 16:46:23.006938  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-5: (1.204731ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34218]
I0210 16:46:23.007335  123305 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0210 16:46:23.007527  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-9
I0210 16:46:23.007541  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-9
I0210 16:46:23.007621  123305 factory.go:647] Unable to schedule preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-9: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0210 16:46:23.007661  123305 factory.go:742] Updating pod condition for preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-9 to (PodScheduled==False, Reason=Unschedulable)
I0210 16:46:23.008357  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods: (1.563965ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34214]
I0210 16:46:23.009210  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-9: (1.349039ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34218]
I0210 16:46:23.010249  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events: (1.997265ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34220]
I0210 16:46:23.010620  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods: (1.426283ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34214]
I0210 16:46:23.010672  123305 wrap.go:47] PUT /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-9/status: (2.282552ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34212]
I0210 16:46:23.012342  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-9: (1.301167ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34220]
I0210 16:46:23.012735  123305 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0210 16:46:23.013276  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-0
I0210 16:46:23.013296  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-0
I0210 16:46:23.013413  123305 factory.go:647] Unable to schedule preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-0: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0210 16:46:23.013462  123305 factory.go:742] Updating pod condition for preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-0 to (PodScheduled==False, Reason=Unschedulable)
I0210 16:46:23.013690  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods: (2.376376ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34218]
I0210 16:46:23.017555  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods: (2.348934ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34224]
I0210 16:46:23.018009  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events: (2.848701ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34218]
I0210 16:46:23.018744  123305 wrap.go:47] PUT /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-0/status: (4.9971ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34220]
I0210 16:46:23.018801  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-0: (4.125199ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34222]
I0210 16:46:23.020219  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods: (2.053844ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34224]
I0210 16:46:23.020303  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-0: (1.085328ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34218]
I0210 16:46:23.020590  123305 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0210 16:46:23.020750  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-2
I0210 16:46:23.020769  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-2
I0210 16:46:23.020889  123305 factory.go:647] Unable to schedule preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-2: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0210 16:46:23.020939  123305 factory.go:742] Updating pod condition for preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-2 to (PodScheduled==False, Reason=Unschedulable)
I0210 16:46:23.022212  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods: (1.534576ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34224]
I0210 16:46:23.022973  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events: (1.505173ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34230]
I0210 16:46:23.023536  123305 wrap.go:47] PUT /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-2/status: (2.313043ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34226]
I0210 16:46:23.023628  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-2: (2.25152ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34228]
I0210 16:46:23.024243  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods: (1.612222ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34224]
I0210 16:46:23.025558  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-2: (1.026995ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34226]
I0210 16:46:23.025797  123305 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0210 16:46:23.026051  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-8
I0210 16:46:23.026065  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-8
I0210 16:46:23.026198  123305 factory.go:647] Unable to schedule preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-8: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0210 16:46:23.026253  123305 factory.go:742] Updating pod condition for preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-8 to (PodScheduled==False, Reason=Unschedulable)
I0210 16:46:23.026299  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods: (1.477704ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34224]
I0210 16:46:23.029071  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-8: (2.622359ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34226]
I0210 16:46:23.029200  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods: (2.192423ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34232]
I0210 16:46:23.029438  123305 wrap.go:47] PUT /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-8/status: (2.946809ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34230]
I0210 16:46:23.029719  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events: (3.085771ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34224]
I0210 16:46:23.031242  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods: (1.588571ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34232]
I0210 16:46:23.032317  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-8: (2.268376ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34230]
I0210 16:46:23.032653  123305 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0210 16:46:23.032787  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-1
I0210 16:46:23.032798  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-1
I0210 16:46:23.032882  123305 factory.go:647] Unable to schedule preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-1: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0210 16:46:23.032929  123305 factory.go:742] Updating pod condition for preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-1 to (PodScheduled==False, Reason=Unschedulable)
I0210 16:46:23.036617  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-1: (2.932244ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34226]
I0210 16:46:23.036845  123305 wrap.go:47] PUT /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-1/status: (3.675517ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34232]
I0210 16:46:23.038250  123305 wrap.go:47] PATCH /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events/ppod-1.15820e84433f7fad: (3.790803ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34234]
I0210 16:46:23.040006  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-1: (2.405238ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34232]
I0210 16:46:23.040340  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods: (2.757095ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34226]
I0210 16:46:23.040668  123305 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0210 16:46:23.040820  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-4
I0210 16:46:23.040842  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-4
I0210 16:46:23.040920  123305 factory.go:647] Unable to schedule preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-4: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0210 16:46:23.040971  123305 factory.go:742] Updating pod condition for preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-4 to (PodScheduled==False, Reason=Unschedulable)
I0210 16:46:23.043135  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events: (1.350051ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34238]
I0210 16:46:23.043386  123305 wrap.go:47] PUT /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-4/status: (2.192656ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34234]
I0210 16:46:23.043972  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-4: (2.049189ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34236]
I0210 16:46:23.045244  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-4: (1.121432ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34234]
I0210 16:46:23.045535  123305 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0210 16:46:23.045677  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-3
I0210 16:46:23.045709  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-3
I0210 16:46:23.045793  123305 factory.go:647] Unable to schedule preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-3: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0210 16:46:23.045833  123305 factory.go:742] Updating pod condition for preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-3 to (PodScheduled==False, Reason=Unschedulable)
I0210 16:46:23.048441  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods: (4.104167ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34226]
I0210 16:46:23.049836  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events: (3.409049ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34240]
I0210 16:46:23.049854  123305 wrap.go:47] PUT /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-3/status: (3.608804ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34236]
I0210 16:46:23.050567  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-3: (4.40674ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34238]
I0210 16:46:23.051705  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods: (1.552882ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34226]
I0210 16:46:23.051836  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-3: (1.213411ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34240]
I0210 16:46:23.052142  123305 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0210 16:46:23.052349  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-6
I0210 16:46:23.052369  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-6
I0210 16:46:23.052479  123305 factory.go:647] Unable to schedule preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-6: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0210 16:46:23.052547  123305 factory.go:742] Updating pod condition for preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-6 to (PodScheduled==False, Reason=Unschedulable)
I0210 16:46:23.054925  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods: (2.712501ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34238]
I0210 16:46:23.055158  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-6: (2.030899ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34242]
I0210 16:46:23.055211  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events: (1.836289ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34244]
I0210 16:46:23.056567  123305 wrap.go:47] PUT /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-6/status: (3.533731ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34236]
I0210 16:46:23.057873  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods: (1.747345ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34242]
I0210 16:46:23.058137  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-6: (1.167022ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34236]
I0210 16:46:23.058483  123305 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0210 16:46:23.058660  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-7
I0210 16:46:23.058683  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-7
I0210 16:46:23.058812  123305 factory.go:647] Unable to schedule preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-7: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0210 16:46:23.058871  123305 factory.go:742] Updating pod condition for preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-7 to (PodScheduled==False, Reason=Unschedulable)
I0210 16:46:23.059993  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods: (1.679257ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34244]
I0210 16:46:23.060816  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-7: (1.661675ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34238]
I0210 16:46:23.062240  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events: (2.808096ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34246]
I0210 16:46:23.063450  123305 wrap.go:47] PUT /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-7/status: (4.237581ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34236]
I0210 16:46:23.064054  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods: (3.108007ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34244]
I0210 16:46:23.066311  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-7: (2.278878ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34246]
I0210 16:46:23.066565  123305 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0210 16:46:23.066748  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-7
I0210 16:46:23.066768  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-7
I0210 16:46:23.066939  123305 factory.go:647] Unable to schedule preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-7: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0210 16:46:23.066998  123305 factory.go:742] Updating pod condition for preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-7 to (PodScheduled==False, Reason=Unschedulable)
I0210 16:46:23.067179  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods: (2.676412ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34244]
I0210 16:46:23.069176  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-7: (1.818499ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34238]
I0210 16:46:23.069807  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods: (1.756724ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34244]
I0210 16:46:23.069964  123305 wrap.go:47] PUT /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-7/status: (2.6446ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34246]
I0210 16:46:23.070914  123305 wrap.go:47] PATCH /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events/ppod-7.15820e844705131e: (2.633246ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34248]
I0210 16:46:23.071703  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods: (1.353686ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34244]
I0210 16:46:23.071714  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-7: (1.360629ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34238]
I0210 16:46:23.071937  123305 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0210 16:46:23.072069  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-29
I0210 16:46:23.072089  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-29
I0210 16:46:23.072156  123305 factory.go:647] Unable to schedule preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-29: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0210 16:46:23.072211  123305 factory.go:742] Updating pod condition for preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-29 to (PodScheduled==False, Reason=Unschedulable)
I0210 16:46:23.074157  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-29: (1.779778ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34248]
I0210 16:46:23.074778  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events: (1.948796ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34252]
I0210 16:46:23.075887  123305 wrap.go:47] PUT /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-29/status: (3.181392ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34250]
I0210 16:46:23.076314  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods: (3.818996ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34244]
I0210 16:46:23.078766  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-29: (1.197986ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34250]
I0210 16:46:23.079039  123305 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0210 16:46:23.079286  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-32
I0210 16:46:23.079306  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-32
I0210 16:46:23.079401  123305 factory.go:647] Unable to schedule preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-32: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0210 16:46:23.079445  123305 factory.go:742] Updating pod condition for preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-32 to (PodScheduled==False, Reason=Unschedulable)
I0210 16:46:23.081273  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-32: (1.578436ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34250]
I0210 16:46:23.081734  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events: (1.714309ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34254]
I0210 16:46:23.082229  123305 wrap.go:47] PUT /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-32/status: (2.492467ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34248]
I0210 16:46:23.082545  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods: (5.183039ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34252]
I0210 16:46:23.084062  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-32: (1.141704ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34254]
I0210 16:46:23.084397  123305 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0210 16:46:23.084562  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods: (1.614777ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34250]
I0210 16:46:23.084637  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-33
I0210 16:46:23.084652  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-33
I0210 16:46:23.085885  123305 factory.go:647] Unable to schedule preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-33: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0210 16:46:23.085976  123305 factory.go:742] Updating pod condition for preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-33 to (PodScheduled==False, Reason=Unschedulable)
I0210 16:46:23.088779  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods: (3.84197ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34250]
I0210 16:46:23.088781  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events: (2.126845ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34258]
I0210 16:46:23.088798  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-33: (2.46723ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34254]
I0210 16:46:23.089555  123305 wrap.go:47] PUT /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-33/status: (2.986965ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34256]
I0210 16:46:23.091302  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-33: (1.389012ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34256]
I0210 16:46:23.091307  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods: (1.589827ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34250]
I0210 16:46:23.091577  123305 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0210 16:46:23.091816  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-32
I0210 16:46:23.091831  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-32
I0210 16:46:23.091918  123305 factory.go:647] Unable to schedule preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-32: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0210 16:46:23.091974  123305 factory.go:742] Updating pod condition for preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-32 to (PodScheduled==False, Reason=Unschedulable)
I0210 16:46:23.093682  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods: (1.842285ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34256]
I0210 16:46:23.095224  123305 wrap.go:47] PUT /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-32/status: (2.961935ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34254]
I0210 16:46:23.096958  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-32: (2.995421ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34260]
I0210 16:46:23.097419  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods: (2.545378ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34256]
I0210 16:46:23.097757  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-32: (1.536696ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34254]
I0210 16:46:23.098118  123305 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0210 16:46:23.098291  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-37
I0210 16:46:23.098310  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-37
I0210 16:46:23.098434  123305 factory.go:647] Unable to schedule preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-37: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0210 16:46:23.098478  123305 factory.go:742] Updating pod condition for preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-37 to (PodScheduled==False, Reason=Unschedulable)
I0210 16:46:23.098694  123305 wrap.go:47] PATCH /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events/ppod-32.15820e84483f06c0: (4.780331ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34262]
I0210 16:46:23.099839  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-37: (1.159307ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34254]
I0210 16:46:23.100378  123305 wrap.go:47] PUT /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-37/status: (1.685114ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34260]
I0210 16:46:23.100710  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events: (1.580498ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34262]
I0210 16:46:23.101440  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods: (3.536855ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34256]
I0210 16:46:23.102004  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-37: (1.14508ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34260]
I0210 16:46:23.102258  123305 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0210 16:46:23.102860  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-39
I0210 16:46:23.102880  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-39
I0210 16:46:23.102949  123305 factory.go:647] Unable to schedule preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-39: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0210 16:46:23.103004  123305 factory.go:742] Updating pod condition for preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-39 to (PodScheduled==False, Reason=Unschedulable)
I0210 16:46:23.104306  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods: (2.306161ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34262]
I0210 16:46:23.104783  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events: (1.431501ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34254]
I0210 16:46:23.105140  123305 wrap.go:47] PUT /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-39/status: (1.92272ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34260]
I0210 16:46:23.107265  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods: (2.457634ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34262]
I0210 16:46:23.107324  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-39: (1.875592ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34260]
I0210 16:46:23.107324  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-39: (1.883136ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34254]
I0210 16:46:23.108019  123305 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0210 16:46:23.108154  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-40
I0210 16:46:23.108181  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-40
I0210 16:46:23.108307  123305 factory.go:647] Unable to schedule preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-40: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0210 16:46:23.108350  123305 factory.go:742] Updating pod condition for preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-40 to (PodScheduled==False, Reason=Unschedulable)
I0210 16:46:23.109867  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods: (1.707543ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34262]
I0210 16:46:23.109952  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-40: (1.180692ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34260]
I0210 16:46:23.110274  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events: (1.400383ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34264]
I0210 16:46:23.111464  123305 wrap.go:47] PUT /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-40/status: (2.5732ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34266]
I0210 16:46:23.111589  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods: (1.325748ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34260]
I0210 16:46:23.113053  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-40: (1.090875ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34264]
I0210 16:46:23.113436  123305 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0210 16:46:23.114179  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-42
I0210 16:46:23.114200  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-42
I0210 16:46:23.114401  123305 factory.go:647] Unable to schedule preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-42: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0210 16:46:23.114453  123305 factory.go:742] Updating pod condition for preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-42 to (PodScheduled==False, Reason=Unschedulable)
I0210 16:46:23.114836  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods: (2.630839ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34262]
I0210 16:46:23.116752  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-42: (1.426591ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34262]
I0210 16:46:23.116976  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events: (1.599587ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34264]
I0210 16:46:23.117236  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods: (1.50517ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34268]
I0210 16:46:23.118625  123305 wrap.go:47] PUT /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-42/status: (2.196438ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34270]
I0210 16:46:23.119883  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods: (2.018435ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34264]
I0210 16:46:23.121252  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-42: (1.320723ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34270]
I0210 16:46:23.121593  123305 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0210 16:46:23.121728  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-45
I0210 16:46:23.121749  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-45
I0210 16:46:23.121846  123305 factory.go:647] Unable to schedule preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-45: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0210 16:46:23.121891  123305 factory.go:742] Updating pod condition for preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-45 to (PodScheduled==False, Reason=Unschedulable)
I0210 16:46:23.123417  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods: (2.4502ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34264]
I0210 16:46:23.123518  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-45: (1.176985ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34262]
I0210 16:46:23.123964  123305 wrap.go:47] PUT /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-45/status: (1.87802ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34270]
I0210 16:46:23.125820  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-45: (1.106965ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34270]
I0210 16:46:23.126111  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events: (2.455555ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34272]
I0210 16:46:23.126181  123305 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0210 16:46:23.126352  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-47
I0210 16:46:23.126391  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-47
I0210 16:46:23.126594  123305 factory.go:647] Unable to schedule preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-47: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0210 16:46:23.126649  123305 factory.go:742] Updating pod condition for preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-47 to (PodScheduled==False, Reason=Unschedulable)
I0210 16:46:23.126729  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods: (2.76651ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34262]
I0210 16:46:23.127953  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-47: (911.404µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34262]
I0210 16:46:23.128386  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events: (1.454626ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34270]
I0210 16:46:23.128826  123305 wrap.go:47] PUT /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-47/status: (1.803068ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34264]
I0210 16:46:23.130363  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-47: (1.105392ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34270]
I0210 16:46:23.130598  123305 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0210 16:46:23.130763  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-45
I0210 16:46:23.130776  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-45
I0210 16:46:23.130865  123305 factory.go:647] Unable to schedule preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-45: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0210 16:46:23.130907  123305 factory.go:742] Updating pod condition for preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-45 to (PodScheduled==False, Reason=Unschedulable)
I0210 16:46:23.133051  123305 wrap.go:47] PUT /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-45/status: (1.913661ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34270]
I0210 16:46:23.133256  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-45: (1.194727ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34262]
I0210 16:46:23.134570  123305 wrap.go:47] PATCH /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events/ppod-45.15820e844ac6b03e: (2.768388ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34274]
I0210 16:46:23.135253  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-45: (1.192943ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34270]
I0210 16:46:23.136525  123305 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0210 16:46:23.136723  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-47
I0210 16:46:23.136740  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-47
I0210 16:46:23.136897  123305 factory.go:647] Unable to schedule preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-47: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0210 16:46:23.136951  123305 factory.go:742] Updating pod condition for preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-47 to (PodScheduled==False, Reason=Unschedulable)
I0210 16:46:23.138644  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-47: (1.414176ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34262]
I0210 16:46:23.139542  123305 wrap.go:47] PUT /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-47/status: (2.310747ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34274]
I0210 16:46:23.140230  123305 wrap.go:47] PATCH /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events/ppod-47.15820e844b0f4938: (2.421177ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34276]
I0210 16:46:23.141058  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-47: (1.081065ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34274]
I0210 16:46:23.141355  123305 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0210 16:46:23.141621  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-49
I0210 16:46:23.141641  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-49
I0210 16:46:23.141760  123305 factory.go:647] Unable to schedule preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-49: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0210 16:46:23.141810  123305 factory.go:742] Updating pod condition for preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-49 to (PodScheduled==False, Reason=Unschedulable)
I0210 16:46:23.144150  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-49: (2.025127ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34262]
I0210 16:46:23.144365  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events: (1.974256ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34278]
I0210 16:46:23.144762  123305 wrap.go:47] PUT /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-49/status: (2.680039ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34276]
I0210 16:46:23.146442  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-49: (1.105391ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34278]
I0210 16:46:23.146699  123305 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0210 16:46:23.146890  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-48
I0210 16:46:23.146906  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-48
I0210 16:46:23.146999  123305 factory.go:647] Unable to schedule preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-48: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0210 16:46:23.147059  123305 factory.go:742] Updating pod condition for preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-48 to (PodScheduled==False, Reason=Unschedulable)
I0210 16:46:23.148450  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-48: (1.156486ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34262]
I0210 16:46:23.149037  123305 wrap.go:47] PUT /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-48/status: (1.740467ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34278]
I0210 16:46:23.149148  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events: (1.386608ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34280]
I0210 16:46:23.150593  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-48: (1.10286ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34278]
I0210 16:46:23.150869  123305 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0210 16:46:23.151067  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-49
I0210 16:46:23.151082  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-49
I0210 16:46:23.151214  123305 factory.go:647] Unable to schedule preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-49: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0210 16:46:23.151267  123305 factory.go:742] Updating pod condition for preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-49 to (PodScheduled==False, Reason=Unschedulable)
I0210 16:46:23.153132  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-49: (1.474858ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34262]
I0210 16:46:23.154340  123305 wrap.go:47] PUT /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-49/status: (2.843116ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34280]
I0210 16:46:23.155787  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-49: (1.099514ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34280]
I0210 16:46:23.156049  123305 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0210 16:46:23.156185  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-48
I0210 16:46:23.156211  123305 wrap.go:47] PATCH /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events/ppod-49.15820e844bf6a33f: (3.775995ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34282]
I0210 16:46:23.156220  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-48
I0210 16:46:23.156429  123305 factory.go:647] Unable to schedule preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-48: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0210 16:46:23.156566  123305 factory.go:742] Updating pod condition for preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-48 to (PodScheduled==False, Reason=Unschedulable)
I0210 16:46:23.158608  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-48: (1.18117ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34262]
I0210 16:46:23.159885  123305 wrap.go:47] PUT /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-48/status: (3.086918ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34280]
I0210 16:46:23.161084  123305 wrap.go:47] PATCH /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events/ppod-48.15820e844c46b12a: (3.132261ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34284]
I0210 16:46:23.161673  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-48: (1.299168ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34280]
I0210 16:46:23.162033  123305 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0210 16:46:23.162222  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-46
I0210 16:46:23.162239  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-46
I0210 16:46:23.162359  123305 factory.go:647] Unable to schedule preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-46: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0210 16:46:23.162411  123305 factory.go:742] Updating pod condition for preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-46 to (PodScheduled==False, Reason=Unschedulable)
I0210 16:46:23.163770  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-46: (1.098308ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34262]
I0210 16:46:23.165233  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events: (2.223408ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34286]
I0210 16:46:23.165412  123305 wrap.go:47] PUT /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-46/status: (2.755881ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34284]
I0210 16:46:23.167003  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-46: (1.148054ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34286]
I0210 16:46:23.167271  123305 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0210 16:46:23.167445  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-42
I0210 16:46:23.167462  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-42
I0210 16:46:23.167583  123305 factory.go:647] Unable to schedule preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-42: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0210 16:46:23.167635  123305 factory.go:742] Updating pod condition for preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-42 to (PodScheduled==False, Reason=Unschedulable)
I0210 16:46:23.169384  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-42: (1.366754ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34262]
I0210 16:46:23.169773  123305 wrap.go:47] PUT /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-42/status: (1.909213ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34286]
I0210 16:46:23.171033  123305 wrap.go:47] PATCH /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events/ppod-42.15820e844a551849: (2.360045ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34288]
I0210 16:46:23.171343  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-42: (1.012009ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34286]
I0210 16:46:23.171594  123305 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0210 16:46:23.171780  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-46
I0210 16:46:23.171795  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-46
I0210 16:46:23.171905  123305 factory.go:647] Unable to schedule preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-46: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0210 16:46:23.171958  123305 factory.go:742] Updating pod condition for preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-46 to (PodScheduled==False, Reason=Unschedulable)
I0210 16:46:23.174044  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-46: (1.820944ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34262]
I0210 16:46:23.174979  123305 wrap.go:47] PUT /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-46/status: (2.781159ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34288]
I0210 16:46:23.175513  123305 wrap.go:47] PATCH /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events/ppod-46.15820e844d30f9ad: (2.838041ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34290]
I0210 16:46:23.176891  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-46: (1.104267ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34288]
I0210 16:46:23.177187  123305 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0210 16:46:23.177371  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-44
I0210 16:46:23.177392  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-44
I0210 16:46:23.177564  123305 factory.go:647] Unable to schedule preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-44: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0210 16:46:23.177628  123305 factory.go:742] Updating pod condition for preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-44 to (PodScheduled==False, Reason=Unschedulable)
I0210 16:46:23.179043  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-44: (1.168383ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34262]
I0210 16:46:23.179932  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events: (1.49393ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34292]
I0210 16:46:23.180053  123305 wrap.go:47] PUT /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-44/status: (2.197352ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34288]
I0210 16:46:23.182127  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-44: (1.563979ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34292]
I0210 16:46:23.182398  123305 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0210 16:46:23.182630  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-43
I0210 16:46:23.182649  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-43
I0210 16:46:23.182742  123305 factory.go:647] Unable to schedule preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-43: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0210 16:46:23.182800  123305 factory.go:742] Updating pod condition for preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-43 to (PodScheduled==False, Reason=Unschedulable)
I0210 16:46:23.184399  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-43: (1.34288ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34262]
I0210 16:46:23.185319  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events: (1.908661ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34294]
I0210 16:46:23.185364  123305 wrap.go:47] PUT /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-43/status: (2.319056ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34292]
I0210 16:46:23.187060  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-43: (1.212517ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34294]
I0210 16:46:23.187346  123305 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0210 16:46:23.187530  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-44
I0210 16:46:23.187550  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-44
I0210 16:46:23.187679  123305 factory.go:647] Unable to schedule preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-44: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0210 16:46:23.187728  123305 factory.go:742] Updating pod condition for preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-44 to (PodScheduled==False, Reason=Unschedulable)
I0210 16:46:23.189934  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-44: (1.937324ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34262]
I0210 16:46:23.190763  123305 wrap.go:47] PUT /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-44/status: (2.726326ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34294]
I0210 16:46:23.191966  123305 wrap.go:47] PATCH /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events/ppod-44.15820e844e19122e: (3.202111ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34296]
I0210 16:46:23.192264  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-44: (1.067323ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34294]
I0210 16:46:23.192586  123305 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0210 16:46:23.192742  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-43
I0210 16:46:23.192762  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-43
I0210 16:46:23.192841  123305 factory.go:647] Unable to schedule preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-43: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0210 16:46:23.192882  123305 factory.go:742] Updating pod condition for preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-43 to (PodScheduled==False, Reason=Unschedulable)
I0210 16:46:23.195822  123305 wrap.go:47] PUT /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-43/status: (2.620711ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34296]
I0210 16:46:23.197138  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-43: (3.842303ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34262]
I0210 16:46:23.198131  123305 wrap.go:47] PATCH /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events/ppod-43.15820e844e67f728: (3.88577ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34298]
I0210 16:46:23.198204  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-43: (1.246259ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34296]
I0210 16:46:23.198437  123305 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0210 16:46:23.198664  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-40
I0210 16:46:23.198686  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-40
I0210 16:46:23.198803  123305 factory.go:647] Unable to schedule preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-40: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0210 16:46:23.198849  123305 factory.go:742] Updating pod condition for preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-40 to (PodScheduled==False, Reason=Unschedulable)
I0210 16:46:23.200821  123305 wrap.go:47] PUT /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-40/status: (1.743218ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34296]
I0210 16:46:23.200900  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-40: (1.426211ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34262]
I0210 16:46:23.202085  123305 wrap.go:47] PATCH /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events/ppod-40.15820e8449f81a18: (2.358559ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34300]
I0210 16:46:23.202301  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-40: (1.058961ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34262]
I0210 16:46:23.202555  123305 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0210 16:46:23.202748  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-39
I0210 16:46:23.202771  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-39
I0210 16:46:23.202977  123305 factory.go:647] Unable to schedule preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-39: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0210 16:46:23.203038  123305 factory.go:742] Updating pod condition for preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-39 to (PodScheduled==False, Reason=Unschedulable)
I0210 16:46:23.205288  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-39: (1.976496ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34296]
I0210 16:46:23.206659  123305 wrap.go:47] PUT /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-39/status: (3.373913ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34300]
I0210 16:46:23.208113  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-39: (1.088337ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34300]
I0210 16:46:23.208260  123305 wrap.go:47] PATCH /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events/ppod-39.15820e8449a673b8: (3.172942ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34302]
I0210 16:46:23.208377  123305 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0210 16:46:23.208529  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-41
I0210 16:46:23.208549  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-41
I0210 16:46:23.208674  123305 factory.go:647] Unable to schedule preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-41: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0210 16:46:23.208723  123305 factory.go:742] Updating pod condition for preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-41 to (PodScheduled==False, Reason=Unschedulable)
I0210 16:46:23.210086  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-41: (1.092145ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34296]
I0210 16:46:23.210845  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events: (1.526061ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34304]
I0210 16:46:23.210974  123305 wrap.go:47] PUT /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-41/status: (2.013925ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34300]
I0210 16:46:23.212526  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-41: (1.098146ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34300]
I0210 16:46:23.212805  123305 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0210 16:46:23.212989  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-37
I0210 16:46:23.213008  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-37
I0210 16:46:23.213100  123305 factory.go:647] Unable to schedule preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-37: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0210 16:46:23.213148  123305 factory.go:742] Updating pod condition for preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-37 to (PodScheduled==False, Reason=Unschedulable)
I0210 16:46:23.214922  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-37: (1.254668ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34296]
I0210 16:46:23.216455  123305 wrap.go:47] PUT /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-37/status: (2.739929ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34300]
I0210 16:46:23.216806  123305 wrap.go:47] PATCH /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events/ppod-37.15820e844961738a: (2.704395ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34306]
I0210 16:46:23.218382  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-37: (1.268203ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34300]
I0210 16:46:23.218761  123305 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0210 16:46:23.218960  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-41
I0210 16:46:23.218986  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-41
I0210 16:46:23.219122  123305 factory.go:647] Unable to schedule preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-41: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0210 16:46:23.219237  123305 factory.go:742] Updating pod condition for preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-41 to (PodScheduled==False, Reason=Unschedulable)
I0210 16:46:23.220696  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-41: (1.267808ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34306]
I0210 16:46:23.222573  123305 wrap.go:47] PUT /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-41/status: (3.075771ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34296]
I0210 16:46:23.223541  123305 wrap.go:47] PATCH /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events/ppod-41.15820e844ff3ac69: (3.477702ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34308]
I0210 16:46:23.224128  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-41: (1.097792ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34296]
I0210 16:46:23.224414  123305 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0210 16:46:23.224617  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-38
I0210 16:46:23.224633  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-38
I0210 16:46:23.224742  123305 factory.go:647] Unable to schedule preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-38: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0210 16:46:23.224788  123305 factory.go:742] Updating pod condition for preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-38 to (PodScheduled==False, Reason=Unschedulable)
I0210 16:46:23.226447  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-38: (1.369989ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34306]
I0210 16:46:23.227544  123305 wrap.go:47] PUT /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-38/status: (2.52901ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34308]
I0210 16:46:23.228453  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events: (2.770585ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34310]
I0210 16:46:23.228942  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/preemptor-pod: (1.684153ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34306]
I0210 16:46:23.229791  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-38: (1.351273ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34308]
I0210 16:46:23.230085  123305 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0210 16:46:23.230239  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-33
I0210 16:46:23.230258  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-33
I0210 16:46:23.230412  123305 factory.go:647] Unable to schedule preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-33: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0210 16:46:23.230470  123305 factory.go:742] Updating pod condition for preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-33 to (PodScheduled==False, Reason=Unschedulable)
I0210 16:46:23.232529  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-33: (1.687101ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34306]
I0210 16:46:23.233286  123305 wrap.go:47] PUT /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-33/status: (2.499448ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34310]
I0210 16:46:23.234848  123305 wrap.go:47] PATCH /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events/ppod-33.15820e8448a2a749: (3.719954ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34312]
I0210 16:46:23.235332  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-33: (1.46444ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34310]
I0210 16:46:23.235657  123305 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0210 16:46:23.235795  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-38
I0210 16:46:23.235808  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-38
I0210 16:46:23.235886  123305 factory.go:647] Unable to schedule preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-38: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0210 16:46:23.235931  123305 factory.go:742] Updating pod condition for preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-38 to (PodScheduled==False, Reason=Unschedulable)
I0210 16:46:23.238425  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-38: (1.764355ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34306]
I0210 16:46:23.238610  123305 wrap.go:47] PUT /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-38/status: (1.940043ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34312]
I0210 16:46:23.239274  123305 wrap.go:47] PATCH /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events/ppod-38.15820e8450e8d4fc: (2.160529ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34314]
I0210 16:46:23.240194  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-38: (1.168209ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34312]
I0210 16:46:23.240458  123305 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0210 16:46:23.240661  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-36
I0210 16:46:23.240734  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-36
I0210 16:46:23.240866  123305 factory.go:647] Unable to schedule preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-36: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0210 16:46:23.240922  123305 factory.go:742] Updating pod condition for preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-36 to (PodScheduled==False, Reason=Unschedulable)
I0210 16:46:23.242351  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-36: (1.158836ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34306]
I0210 16:46:23.243075  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events: (1.661658ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34316]
I0210 16:46:23.243986  123305 wrap.go:47] PUT /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-36/status: (2.783031ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34314]
I0210 16:46:23.245759  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-36: (1.19024ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34316]
I0210 16:46:23.245999  123305 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0210 16:46:23.246144  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-35
I0210 16:46:23.246160  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-35
I0210 16:46:23.246274  123305 factory.go:647] Unable to schedule preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-35: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0210 16:46:23.246316  123305 factory.go:742] Updating pod condition for preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-35 to (PodScheduled==False, Reason=Unschedulable)
I0210 16:46:23.248657  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events: (1.743074ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34318]
I0210 16:46:23.248764  123305 wrap.go:47] PUT /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-35/status: (2.199886ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34316]
I0210 16:46:23.248837  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-35: (2.180944ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34306]
I0210 16:46:23.250399  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-35: (1.271762ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34316]
I0210 16:46:23.250735  123305 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0210 16:46:23.250885  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-36
I0210 16:46:23.250900  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-36
I0210 16:46:23.251015  123305 factory.go:647] Unable to schedule preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-36: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0210 16:46:23.251066  123305 factory.go:742] Updating pod condition for preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-36 to (PodScheduled==False, Reason=Unschedulable)
I0210 16:46:23.252477  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-36: (1.147892ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34318]
I0210 16:46:23.253089  123305 wrap.go:47] PUT /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-36/status: (1.793074ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34316]
I0210 16:46:23.255285  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-36: (1.376812ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34316]
I0210 16:46:23.255632  123305 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0210 16:46:23.255748  123305 wrap.go:47] PATCH /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events/ppod-36.15820e8451dee45d: (3.181439ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34320]
I0210 16:46:23.255929  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-35
I0210 16:46:23.255945  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-35
I0210 16:46:23.256041  123305 factory.go:647] Unable to schedule preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-35: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0210 16:46:23.256118  123305 factory.go:742] Updating pod condition for preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-35 to (PodScheduled==False, Reason=Unschedulable)
I0210 16:46:23.258921  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-35: (2.379959ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34318]
I0210 16:46:23.259904  123305 wrap.go:47] PATCH /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events/ppod-35.15820e845231487e: (2.979154ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34322]
I0210 16:46:23.260125  123305 wrap.go:47] PUT /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-35/status: (3.774117ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34316]
I0210 16:46:23.262077  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-35: (1.435561ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34316]
I0210 16:46:23.276663  123305 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0210 16:46:23.276923  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-34
I0210 16:46:23.276944  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-34
I0210 16:46:23.277043  123305 factory.go:647] Unable to schedule preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-34: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0210 16:46:23.277096  123305 factory.go:742] Updating pod condition for preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-34 to (PodScheduled==False, Reason=Unschedulable)
I0210 16:46:23.280894  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events: (2.474726ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34318]
I0210 16:46:23.281691  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-34: (1.879599ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34324]
I0210 16:46:23.282934  123305 wrap.go:47] PUT /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-34/status: (5.336532ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34316]
I0210 16:46:23.284731  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-34: (1.314851ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34324]
I0210 16:46:23.285037  123305 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0210 16:46:23.285224  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-29
I0210 16:46:23.285244  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-29
I0210 16:46:23.285340  123305 factory.go:647] Unable to schedule preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-29: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0210 16:46:23.285418  123305 factory.go:742] Updating pod condition for preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-29 to (PodScheduled==False, Reason=Unschedulable)
I0210 16:46:23.287568  123305 wrap.go:47] PUT /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-29/status: (1.887823ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34324]
I0210 16:46:23.287576  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-29: (1.32646ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34318]
I0210 16:46:23.289986  123305 wrap.go:47] PATCH /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events/ppod-29.15820e8447d0a837: (3.690915ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34326]
I0210 16:46:23.290097  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-29: (1.388399ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34324]
I0210 16:46:23.290402  123305 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0210 16:46:23.290583  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-34
I0210 16:46:23.290598  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-34
I0210 16:46:23.290708  123305 factory.go:647] Unable to schedule preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-34: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0210 16:46:23.290760  123305 factory.go:742] Updating pod condition for preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-34 to (PodScheduled==False, Reason=Unschedulable)
I0210 16:46:23.292252  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-34: (1.196396ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34318]
I0210 16:46:23.293259  123305 wrap.go:47] PUT /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-34/status: (2.196316ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34326]
I0210 16:46:23.294059  123305 wrap.go:47] PATCH /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events/ppod-34.15820e845406eb1e: (2.653745ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34328]
I0210 16:46:23.295514  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-34: (1.693374ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34326]
I0210 16:46:23.295773  123305 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0210 16:46:23.295919  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-31
I0210 16:46:23.295937  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-31
I0210 16:46:23.296044  123305 factory.go:647] Unable to schedule preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-31: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0210 16:46:23.296088  123305 factory.go:742] Updating pod condition for preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-31 to (PodScheduled==False, Reason=Unschedulable)
I0210 16:46:23.297708  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-31: (1.347906ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34318]
I0210 16:46:23.298127  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events: (1.446927ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34330]
I0210 16:46:23.298185  123305 wrap.go:47] PUT /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-31/status: (1.830815ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34328]
I0210 16:46:23.300266  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-31: (1.637068ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34330]
I0210 16:46:23.300598  123305 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0210 16:46:23.300772  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-30
I0210 16:46:23.300793  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-30
I0210 16:46:23.300906  123305 factory.go:647] Unable to schedule preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-30: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0210 16:46:23.301020  123305 factory.go:742] Updating pod condition for preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-30 to (PodScheduled==False, Reason=Unschedulable)
I0210 16:46:23.303091  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events: (1.789908ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34318]
I0210 16:46:23.303749  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-30: (2.498913ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34330]
I0210 16:46:23.304392  123305 wrap.go:47] PUT /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-30/status: (2.743351ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34332]
I0210 16:46:23.306074  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-30: (1.142065ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34330]
I0210 16:46:23.306313  123305 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0210 16:46:23.306475  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-31
I0210 16:46:23.306546  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-31
I0210 16:46:23.306662  123305 factory.go:647] Unable to schedule preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-31: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0210 16:46:23.306709  123305 factory.go:742] Updating pod condition for preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-31 to (PodScheduled==False, Reason=Unschedulable)
I0210 16:46:23.308805  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-31: (1.182069ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34330]
I0210 16:46:23.309365  123305 wrap.go:47] PUT /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-31/status: (2.394982ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34318]
I0210 16:46:23.311266  123305 wrap.go:47] PATCH /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events/ppod-31.15820e845528bb91: (2.905576ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34334]
I0210 16:46:23.311317  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-31: (1.496242ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34318]
I0210 16:46:23.311698  123305 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0210 16:46:23.311924  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-30
I0210 16:46:23.311958  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-30
I0210 16:46:23.312088  123305 factory.go:647] Unable to schedule preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-30: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0210 16:46:23.312155  123305 factory.go:742] Updating pod condition for preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-30 to (PodScheduled==False, Reason=Unschedulable)
I0210 16:46:23.314325  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-30: (1.928242ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34330]
I0210 16:46:23.314986  123305 wrap.go:47] PUT /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-30/status: (2.587938ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34334]
I0210 16:46:23.316894  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-30: (1.181493ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34330]
I0210 16:46:23.317692  123305 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0210 16:46:23.317831  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-13
I0210 16:46:23.317846  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-13
I0210 16:46:23.317901  123305 wrap.go:47] PATCH /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events/ppod-30.15820e845573ff0c: (2.423995ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34334]
I0210 16:46:23.317929  123305 factory.go:647] Unable to schedule preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-13: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0210 16:46:23.317970  123305 factory.go:742] Updating pod condition for preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-13 to (PodScheduled==False, Reason=Unschedulable)
I0210 16:46:23.319732  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-13: (1.033708ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34334]
I0210 16:46:23.320272  123305 wrap.go:47] PUT /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-13/status: (2.032898ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34330]
I0210 16:46:23.321200  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events: (2.180834ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34336]
I0210 16:46:23.321947  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-13: (1.156371ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34330]
I0210 16:46:23.322210  123305 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0210 16:46:23.322382  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-28
I0210 16:46:23.322414  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-28
I0210 16:46:23.322545  123305 factory.go:647] Unable to schedule preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-28: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0210 16:46:23.322593  123305 factory.go:742] Updating pod condition for preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-28 to (PodScheduled==False, Reason=Unschedulable)
I0210 16:46:23.324920  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-28: (2.048378ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34334]
I0210 16:46:23.325098  123305 wrap.go:47] PUT /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-28/status: (2.278297ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34336]
I0210 16:46:23.325651  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events: (2.445913ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34338]
I0210 16:46:23.330523  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-28: (3.327311ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34336]
I0210 16:46:23.330841  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/preemptor-pod: (1.373625ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34338]
I0210 16:46:23.331275  123305 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0210 16:46:23.331314  123305 preemption_test.go:583] Check unschedulable pods still exists and were never scheduled...
I0210 16:46:23.331536  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-13
I0210 16:46:23.331554  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-13
I0210 16:46:23.331668  123305 factory.go:647] Unable to schedule preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-13: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0210 16:46:23.331726  123305 factory.go:742] Updating pod condition for preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-13 to (PodScheduled==False, Reason=Unschedulable)
I0210 16:46:23.334068  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-13: (1.646729ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34340]
I0210 16:46:23.334621  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-0: (2.908786ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34336]
I0210 16:46:23.338604  123305 wrap.go:47] PUT /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-13/status: (6.474061ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34334]
I0210 16:46:23.339537  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-1: (1.80918ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34340]
I0210 16:46:23.341353  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-13: (1.401656ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34334]
I0210 16:46:23.342304  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-2: (2.190406ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34336]
I0210 16:46:23.343188  123305 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0210 16:46:23.343376  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-28
I0210 16:46:23.343401  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-28
I0210 16:46:23.344284  123305 factory.go:647] Unable to schedule preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-28: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0210 16:46:23.344344  123305 factory.go:742] Updating pod condition for preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-28 to (PodScheduled==False, Reason=Unschedulable)
I0210 16:46:23.345177  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-3: (1.958458ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34334]
I0210 16:46:23.346307  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-28: (1.396822ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34342]
I0210 16:46:23.347204  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-4: (1.573371ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34334]
I0210 16:46:23.347956  123305 wrap.go:47] PUT /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-28/status: (3.266133ms) 409 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34336]
I0210 16:46:23.349093  123305 wrap.go:47] PATCH /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events/ppod-13.15820e8456768e43: (3.746685ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34344]
I0210 16:46:23.350897  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-28: (1.274586ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34334]
I0210 16:46:23.351588  123305 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0210 16:46:23.351846  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-11
I0210 16:46:23.351887  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-11
I0210 16:46:23.351848  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-5: (1.96052ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34344]
I0210 16:46:23.353856  123305 wrap.go:47] PATCH /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events/ppod-28.15820e8456bd296a: (3.609966ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34342]
I0210 16:46:23.355263  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-6: (2.281429ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34344]
I0210 16:46:23.355931  123305 factory.go:647] Unable to schedule preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-11: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0210 16:46:23.356049  123305 factory.go:742] Updating pod condition for preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-11 to (PodScheduled==False, Reason=Unschedulable)
I0210 16:46:23.357074  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-7: (1.192992ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34342]
I0210 16:46:23.357834  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-11: (1.196097ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34334]
I0210 16:46:23.359310  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events: (2.297502ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34348]
I0210 16:46:23.359539  123305 wrap.go:47] PUT /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-11/status: (2.641339ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34346]
I0210 16:46:23.360269  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-8: (2.678329ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34342]
I0210 16:46:23.361301  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-11: (1.289331ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34346]
I0210 16:46:23.361708  123305 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0210 16:46:23.362024  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-9: (1.264322ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34342]
I0210 16:46:23.362303  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-26
I0210 16:46:23.362343  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-26
I0210 16:46:23.362464  123305 factory.go:647] Unable to schedule preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-26: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0210 16:46:23.362584  123305 factory.go:742] Updating pod condition for preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-26 to (PodScheduled==False, Reason=Unschedulable)
I0210 16:46:23.366046  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events: (1.762186ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34352]
I0210 16:46:23.366139  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-10: (3.454697ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34346]
I0210 16:46:23.368240  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-26: (4.076048ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34334]
I0210 16:46:23.368291  123305 wrap.go:47] PUT /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-26/status: (3.687761ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34350]
I0210 16:46:23.368456  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-11: (1.900971ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34346]
I0210 16:46:23.369790  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-26: (1.043693ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34350]
I0210 16:46:23.370096  123305 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0210 16:46:23.370213  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-12: (1.19953ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34346]
I0210 16:46:23.370274  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-11
I0210 16:46:23.370297  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-11
I0210 16:46:23.370417  123305 factory.go:647] Unable to schedule preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-11: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0210 16:46:23.370532  123305 factory.go:742] Updating pod condition for preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-11 to (PodScheduled==False, Reason=Unschedulable)
I0210 16:46:23.372393  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-11: (1.663548ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34350]
I0210 16:46:23.373031  123305 wrap.go:47] PUT /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-11/status: (2.190793ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34352]
I0210 16:46:23.374273  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-13: (2.185997ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34356]
I0210 16:46:23.374407  123305 wrap.go:47] PATCH /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events/ppod-11.15820e8458baab80: (2.866542ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34354]
I0210 16:46:23.375411  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-11: (1.843695ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34352]
I0210 16:46:23.375685  123305 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0210 16:46:23.375835  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-26
I0210 16:46:23.375856  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-26
I0210 16:46:23.375945  123305 factory.go:647] Unable to schedule preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-26: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0210 16:46:23.376001  123305 factory.go:742] Updating pod condition for preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-26 to (PodScheduled==False, Reason=Unschedulable)
I0210 16:46:23.376115  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-14: (1.475487ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34356]
I0210 16:46:23.378101  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-15: (1.561435ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34356]
I0210 16:46:23.378362  123305 wrap.go:47] PUT /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-26/status: (2.121758ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34354]
I0210 16:46:23.379447  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-26: (1.093357ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34356]
I0210 16:46:23.380326  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-26: (1.200877ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34354]
I0210 16:46:23.381696  123305 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0210 16:46:23.381855  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-16: (1.68508ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34358]
I0210 16:46:23.381910  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-25
I0210 16:46:23.381930  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-25
I0210 16:46:23.381883  123305 wrap.go:47] PATCH /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events/ppod-26.15820e84591f2d05: (5.014211ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34350]
I0210 16:46:23.382030  123305 factory.go:647] Unable to schedule preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-25: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0210 16:46:23.382073  123305 factory.go:742] Updating pod condition for preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-25 to (PodScheduled==False, Reason=Unschedulable)
I0210 16:46:23.383752  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-25: (1.432244ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34354]
I0210 16:46:23.383890  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-17: (1.087558ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34356]
I0210 16:46:23.385219  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events: (1.993057ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34360]
I0210 16:46:23.385792  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-18: (1.584612ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34356]
I0210 16:46:23.386695  123305 wrap.go:47] PUT /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-25/status: (2.640634ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34354]
I0210 16:46:23.387303  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-19: (1.154481ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34356]
I0210 16:46:23.388606  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-25: (1.490141ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34354]
I0210 16:46:23.388811  123305 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0210 16:46:23.388947  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-8
I0210 16:46:23.388973  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-8
I0210 16:46:23.389070  123305 factory.go:647] Unable to schedule preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-8: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0210 16:46:23.389085  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-20: (1.417005ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34356]
I0210 16:46:23.389105  123305 factory.go:742] Updating pod condition for preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-8 to (PodScheduled==False, Reason=Unschedulable)
I0210 16:46:23.390540  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-8: (1.191976ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34360]
I0210 16:46:23.391303  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-21: (1.458903ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34362]
I0210 16:46:23.391355  123305 wrap.go:47] PUT /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-8/status: (1.992729ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34354]
I0210 16:46:23.391833  123305 wrap.go:47] PATCH /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events/ppod-8.15820e8445135869: (2.206203ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34356]
I0210 16:46:23.392904  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-8: (1.140536ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34354]
I0210 16:46:23.393138  123305 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0210 16:46:23.393251  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-22: (1.210405ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34360]
I0210 16:46:23.393300  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-25
I0210 16:46:23.393310  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-25
I0210 16:46:23.393439  123305 factory.go:647] Unable to schedule preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-25: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0210 16:46:23.393525  123305 factory.go:742] Updating pod condition for preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-25 to (PodScheduled==False, Reason=Unschedulable)
I0210 16:46:23.394886  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-23: (1.182036ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34356]
I0210 16:46:23.394915  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-25: (1.224467ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34354]
I0210 16:46:23.396396  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-24: (1.123895ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34354]
I0210 16:46:23.396608  123305 wrap.go:47] PUT /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-25/status: (2.266509ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34364]
I0210 16:46:23.397402  123305 wrap.go:47] PATCH /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events/ppod-25.15820e845a48c8c5: (2.744858ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34366]
I0210 16:46:23.398338  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-25: (1.503814ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34354]
I0210 16:46:23.398360  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-25: (1.173593ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34356]
I0210 16:46:23.398639  123305 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0210 16:46:23.398790  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-20
I0210 16:46:23.398851  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-20
I0210 16:46:23.398995  123305 factory.go:647] Unable to schedule preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-20: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0210 16:46:23.399051  123305 factory.go:742] Updating pod condition for preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-20 to (PodScheduled==False, Reason=Unschedulable)
I0210 16:46:23.400912  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-26: (2.176978ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34354]
I0210 16:46:23.401150  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-20: (995.734µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34368]
I0210 16:46:23.401596  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events: (1.549567ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34366]
I0210 16:46:23.402437  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-27: (1.139029ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34354]
I0210 16:46:23.402571  123305 wrap.go:47] PUT /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-20/status: (1.936905ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34370]
I0210 16:46:23.404524  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-28: (1.041662ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34366]
I0210 16:46:23.404748  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-20: (1.015213ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34370]
I0210 16:46:23.405002  123305 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0210 16:46:23.405150  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-14
I0210 16:46:23.405180  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-14
I0210 16:46:23.405252  123305 factory.go:647] Unable to schedule preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-14: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0210 16:46:23.405322  123305 factory.go:742] Updating pod condition for preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-14 to (PodScheduled==False, Reason=Unschedulable)
I0210 16:46:23.405534  123305 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0210 16:46:23.406687  123305 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0210 16:46:23.407981  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-29: (3.08415ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34366]
I0210 16:46:23.408451  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events: (2.51703ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34372]
I0210 16:46:23.408672  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-14: (2.276866ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34370]
I0210 16:46:23.408898  123305 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0210 16:46:23.409380  123305 wrap.go:47] PUT /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-14/status: (3.759599ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34368]
I0210 16:46:23.409949  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-30: (1.159187ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34366]
I0210 16:46:23.411064  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-14: (1.164864ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34370]
I0210 16:46:23.411362  123305 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0210 16:46:23.411424  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-31: (1.053085ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34366]
I0210 16:46:23.411862  123305 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0210 16:46:23.411961  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-20
I0210 16:46:23.411982  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-20
I0210 16:46:23.412099  123305 factory.go:647] Unable to schedule preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-20: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0210 16:46:23.412143  123305 factory.go:742] Updating pod condition for preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-20 to (PodScheduled==False, Reason=Unschedulable)
I0210 16:46:23.412180  123305 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0210 16:46:23.413035  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-32: (1.222183ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34370]
I0210 16:46:23.414221  123305 wrap.go:47] PUT /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-20/status: (1.865563ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34372]
I0210 16:46:23.414924  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-20: (2.051575ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34374]
I0210 16:46:23.414939  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-33: (1.409883ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34370]
I0210 16:46:23.416396  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-34: (1.135821ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34370]
I0210 16:46:23.417939  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-35: (1.117985ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34370]
I0210 16:46:23.418519  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-20: (1.187514ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34372]
I0210 16:46:23.418764  123305 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0210 16:46:23.418994  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-14
I0210 16:46:23.419017  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-14
I0210 16:46:23.419135  123305 factory.go:647] Unable to schedule preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-14: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0210 16:46:23.419272  123305 factory.go:742] Updating pod condition for preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-14 to (PodScheduled==False, Reason=Unschedulable)
I0210 16:46:23.419426  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-36: (1.091036ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34370]
I0210 16:46:23.421265  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-14: (1.151335ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34370]
I0210 16:46:23.421422  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-37: (1.030081ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34378]
I0210 16:46:23.421860  123305 wrap.go:47] PUT /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-14/status: (2.33011ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34372]
I0210 16:46:23.423477  123305 wrap.go:47] PATCH /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events/ppod-20.15820e845b4bd7f7: (10.535893ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34376]
I0210 16:46:23.423584  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-38: (1.720434ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34378]
I0210 16:46:23.424220  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-14: (1.944782ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34372]
I0210 16:46:23.424484  123305 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0210 16:46:23.424652  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-18
I0210 16:46:23.424665  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-18
I0210 16:46:23.424756  123305 factory.go:647] Unable to schedule preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-18: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0210 16:46:23.424807  123305 factory.go:742] Updating pod condition for preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-18 to (PodScheduled==False, Reason=Unschedulable)
I0210 16:46:23.425298  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-39: (1.225658ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34378]
I0210 16:46:23.426582  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-18: (1.527045ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34370]
I0210 16:46:23.426856  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-40: (1.104777ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34378]
I0210 16:46:23.427574  123305 wrap.go:47] PUT /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-18/status: (2.347538ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34380]
I0210 16:46:23.427575  123305 wrap.go:47] PATCH /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events/ppod-14.15820e845bab704a: (3.169255ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34376]
I0210 16:46:23.428567  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-41: (1.317024ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34378]
I0210 16:46:23.429443  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-18: (1.197815ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34370]
I0210 16:46:23.429763  123305 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0210 16:46:23.429913  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-0
I0210 16:46:23.429934  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-0
I0210 16:46:23.429961  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events: (1.757772ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34380]
I0210 16:46:23.430073  123305 factory.go:647] Unable to schedule preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-0: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0210 16:46:23.430114  123305 factory.go:742] Updating pod condition for preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-0 to (PodScheduled==False, Reason=Unschedulable)
I0210 16:46:23.431797  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-42: (2.200063ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34378]
I0210 16:46:23.433451  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-0: (1.787046ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34380]
I0210 16:46:23.433924  123305 wrap.go:47] PUT /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-0/status: (3.529344ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34370]
I0210 16:46:23.434764  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-43: (1.663934ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34378]
I0210 16:46:23.436368  123305 wrap.go:47] PATCH /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events/ppod-0.15820e8444503588: (4.331845ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34382]
I0210 16:46:23.436392  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-44: (1.221752ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34370]
I0210 16:46:23.437525  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-0: (1.867269ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34380]
I0210 16:46:23.437773  123305 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0210 16:46:23.437980  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-18
I0210 16:46:23.437999  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-18
I0210 16:46:23.438254  123305 factory.go:647] Unable to schedule preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-18: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0210 16:46:23.438324  123305 factory.go:742] Updating pod condition for preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-18 to (PodScheduled==False, Reason=Unschedulable)
I0210 16:46:23.438414  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-45: (1.19738ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34382]
I0210 16:46:23.440139  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-46: (1.269723ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34382]
I0210 16:46:23.440846  123305 wrap.go:47] PUT /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-18/status: (2.277447ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34380]
I0210 16:46:23.440851  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-18: (2.243682ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34370]
I0210 16:46:23.441909  123305 wrap.go:47] PATCH /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events/ppod-18.15820e845cd4ce2f: (2.76522ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34384]
I0210 16:46:23.442620  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-47: (1.908764ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34382]
I0210 16:46:23.442782  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-18: (1.496303ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34370]
I0210 16:46:23.443020  123305 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0210 16:46:23.443197  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-16
I0210 16:46:23.443215  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-16
I0210 16:46:23.443317  123305 factory.go:647] Unable to schedule preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-16: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0210 16:46:23.443361  123305 factory.go:742] Updating pod condition for preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-16 to (PodScheduled==False, Reason=Unschedulable)
I0210 16:46:23.444273  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-48: (1.110029ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34384]
I0210 16:46:23.445764  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events: (1.81127ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34386]
I0210 16:46:23.445888  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-49: (1.160168ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34384]
I0210 16:46:23.447219  123305 preemption_test.go:598] Cleaning up all pods...
I0210 16:46:23.447358  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-16: (1.783899ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34388]
I0210 16:46:23.447426  123305 wrap.go:47] PUT /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-16/status: (3.497211ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34380]
I0210 16:46:23.449728  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-16: (1.190278ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34380]
I0210 16:46:23.450277  123305 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0210 16:46:23.450447  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-15
I0210 16:46:23.450464  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-15
I0210 16:46:23.450709  123305 factory.go:647] Unable to schedule preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-15: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0210 16:46:23.450764  123305 factory.go:742] Updating pod condition for preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-15 to (PodScheduled==False, Reason=Unschedulable)
I0210 16:46:23.452136  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-15: (1.068992ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34388]
I0210 16:46:23.453208  123305 wrap.go:47] DELETE /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-0: (5.737398ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34384]
I0210 16:46:23.453480  123305 wrap.go:47] PUT /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-15/status: (2.293233ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34386]
I0210 16:46:23.453846  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events: (2.472634ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34392]
I0210 16:46:23.455892  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-15: (1.577237ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34386]
I0210 16:46:23.457191  123305 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0210 16:46:23.457472  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-16
I0210 16:46:23.457522  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-16
I0210 16:46:23.457657  123305 factory.go:647] Unable to schedule preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-16: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0210 16:46:23.457704  123305 factory.go:742] Updating pod condition for preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-16 to (PodScheduled==False, Reason=Unschedulable)
I0210 16:46:23.459126  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-16: (1.059882ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34388]
I0210 16:46:23.459874  123305 wrap.go:47] DELETE /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-1: (6.381587ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34384]
I0210 16:46:23.461555  123305 wrap.go:47] PATCH /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events/ppod-16.15820e845defed11: (2.941332ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34394]
I0210 16:46:23.461929  123305 wrap.go:47] PUT /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-16/status: (3.958419ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34392]
I0210 16:46:23.464195  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-16: (1.077243ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34394]
I0210 16:46:23.464649  123305 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0210 16:46:23.464807  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-15
I0210 16:46:23.464825  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-15
I0210 16:46:23.464913  123305 factory.go:647] Unable to schedule preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-15: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0210 16:46:23.464970  123305 factory.go:742] Updating pod condition for preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-15 to (PodScheduled==False, Reason=Unschedulable)
I0210 16:46:23.465056  123305 wrap.go:47] DELETE /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-2: (4.849088ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34384]
I0210 16:46:23.466762  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-15: (1.533754ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34394]
I0210 16:46:23.467940  123305 wrap.go:47] PUT /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-15/status: (2.750673ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34388]
I0210 16:46:23.468597  123305 wrap.go:47] PATCH /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events/ppod-15.15820e845e60e437: (2.826052ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34396]
I0210 16:46:23.469757  123305 wrap.go:47] DELETE /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-3: (4.321527ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34384]
I0210 16:46:23.470350  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-15: (1.601957ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34388]
I0210 16:46:23.470609  123305 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0210 16:46:23.470788  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-9
I0210 16:46:23.470805  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-9
I0210 16:46:23.470898  123305 factory.go:647] Unable to schedule preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-9: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0210 16:46:23.470936  123305 factory.go:742] Updating pod condition for preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-9 to (PodScheduled==False, Reason=Unschedulable)
I0210 16:46:23.472588  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-9: (1.451954ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34388]
I0210 16:46:23.473239  123305 wrap.go:47] PUT /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-9/status: (2.023357ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34394]
I0210 16:46:23.474962  123305 wrap.go:47] DELETE /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-4: (4.574601ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34396]
I0210 16:46:23.476136  123305 wrap.go:47] PATCH /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events/ppod-9.15820e8443f7b38c: (3.891464ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34398]
I0210 16:46:23.476361  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-9: (1.743291ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34394]
I0210 16:46:23.476643  123305 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0210 16:46:23.476827  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-19
I0210 16:46:23.476841  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-19
I0210 16:46:23.476920  123305 factory.go:647] Unable to schedule preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-19: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0210 16:46:23.476978  123305 factory.go:742] Updating pod condition for preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-19 to (PodScheduled==False, Reason=Unschedulable)
I0210 16:46:23.479672  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-19: (2.422883ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34388]
I0210 16:46:23.479839  123305 wrap.go:47] PUT /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-19/status: (2.586177ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34398]
I0210 16:46:23.482017  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-19: (1.674169ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34398]
I0210 16:46:23.482124  123305 wrap.go:47] DELETE /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-5: (6.047614ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34396]
I0210 16:46:23.482125  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events: (3.166477ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34400]
I0210 16:46:23.482366  123305 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0210 16:46:23.482590  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-10
I0210 16:46:23.482627  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-10
I0210 16:46:23.482804  123305 factory.go:647] Unable to schedule preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-10: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0210 16:46:23.482851  123305 factory.go:742] Updating pod condition for preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-10 to (PodScheduled==False, Reason=Unschedulable)
I0210 16:46:23.485613  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-10: (2.542916ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34388]
I0210 16:46:23.487382  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events: (3.758908ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34402]
I0210 16:46:23.488512  123305 wrap.go:47] PUT /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-10/status: (3.187315ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34404]
I0210 16:46:23.489526  123305 wrap.go:47] DELETE /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-6: (6.912748ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34398]
I0210 16:46:23.490934  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-10: (1.872698ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34402]
I0210 16:46:23.491220  123305 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0210 16:46:23.491429  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-17
I0210 16:46:23.491446  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-17
I0210 16:46:23.491554  123305 factory.go:647] Unable to schedule preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-17: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0210 16:46:23.491603  123305 factory.go:742] Updating pod condition for preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-17 to (PodScheduled==False, Reason=Unschedulable)
I0210 16:46:23.493828  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events: (1.658352ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34406]
I0210 16:46:23.493884  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-17: (2.013821ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34388]
I0210 16:46:23.494069  123305 wrap.go:47] PUT /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-17/status: (2.182322ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34402]
I0210 16:46:23.496953  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-17: (2.339524ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34388]
I0210 16:46:23.497280  123305 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0210 16:46:23.497462  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-12
I0210 16:46:23.497501  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-12
I0210 16:46:23.497613  123305 factory.go:647] Unable to schedule preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-12: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0210 16:46:23.497674  123305 factory.go:742] Updating pod condition for preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-12 to (PodScheduled==False, Reason=Unschedulable)
I0210 16:46:23.497698  123305 wrap.go:47] DELETE /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-7: (7.838294ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34398]
I0210 16:46:23.498912  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-12: (1.083379ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34388]
I0210 16:46:23.500335  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events: (1.989152ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34408]
I0210 16:46:23.501608  123305 wrap.go:47] PUT /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-12/status: (3.50199ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34406]
I0210 16:46:23.504525  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-12: (2.516013ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34408]
I0210 16:46:23.504805  123305 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0210 16:46:23.504861  123305 wrap.go:47] DELETE /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-8: (6.918042ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34398]
I0210 16:46:23.504984  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-17
I0210 16:46:23.505003  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-17
I0210 16:46:23.505103  123305 factory.go:647] Unable to schedule preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-17: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0210 16:46:23.505156  123305 factory.go:742] Updating pod condition for preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-17 to (PodScheduled==False, Reason=Unschedulable)
I0210 16:46:23.508090  123305 wrap.go:47] PUT /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-17/status: (2.419502ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34410]
I0210 16:46:23.508239  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-17: (2.834118ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34388]
I0210 16:46:23.510653  123305 wrap.go:47] PATCH /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events/ppod-17.15820e8460d00bac: (4.781796ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34412]
I0210 16:46:23.510808  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-17: (2.070266ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34388]
I0210 16:46:23.511839  123305 wrap.go:47] DELETE /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-9: (6.503787ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34408]
I0210 16:46:23.511897  123305 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0210 16:46:23.512328  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-12
I0210 16:46:23.512349  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-12
I0210 16:46:23.512424  123305 factory.go:647] Unable to schedule preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-12: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0210 16:46:23.512535  123305 factory.go:742] Updating pod condition for preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-12 to (PodScheduled==False, Reason=Unschedulable)
I0210 16:46:23.514681  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-12: (1.522904ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34410]
I0210 16:46:23.517058  123305 wrap.go:47] PATCH /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events/ppod-12.15820e84612c71cf: (3.683025ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34416]
I0210 16:46:23.518400  123305 wrap.go:47] PUT /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-12/status: (3.093673ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34414]
I0210 16:46:23.521019  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-12: (2.054636ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34416]
I0210 16:46:23.521358  123305 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0210 16:46:23.521459  123305 wrap.go:47] DELETE /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-10: (9.14051ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34412]
I0210 16:46:23.522206  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-23
I0210 16:46:23.522229  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-23
I0210 16:46:23.522344  123305 factory.go:647] Unable to schedule preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-23: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0210 16:46:23.522394  123305 factory.go:742] Updating pod condition for preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-23 to (PodScheduled==False, Reason=Unschedulable)
I0210 16:46:23.524159  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-23: (1.456074ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34410]
I0210 16:46:23.525198  123305 wrap.go:47] PUT /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-23/status: (1.924578ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34418]
I0210 16:46:23.526033  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events: (2.578312ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34420]
I0210 16:46:23.526920  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-23: (1.195332ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34418]
I0210 16:46:23.527465  123305 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0210 16:46:23.527663  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-22
I0210 16:46:23.527681  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-22
I0210 16:46:23.527805  123305 factory.go:647] Unable to schedule preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-22: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0210 16:46:23.527866  123305 factory.go:742] Updating pod condition for preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-22 to (PodScheduled==False, Reason=Unschedulable)
I0210 16:46:23.529779  123305 wrap.go:47] DELETE /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-11: (7.375021ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34416]
I0210 16:46:23.531321  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events: (2.859003ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34422]
I0210 16:46:23.531321  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-22: (3.193056ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34420]
I0210 16:46:23.532819  123305 wrap.go:47] PUT /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-22/status: (4.610962ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34410]
I0210 16:46:23.535880  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-22: (1.186391ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34420]
I0210 16:46:23.535946  123305 wrap.go:47] DELETE /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-12: (5.704329ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34416]
I0210 16:46:23.536529  123305 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0210 16:46:23.536683  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-23
I0210 16:46:23.536697  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-23
I0210 16:46:23.536832  123305 factory.go:647] Unable to schedule preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-23: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0210 16:46:23.536876  123305 factory.go:742] Updating pod condition for preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-23 to (PodScheduled==False, Reason=Unschedulable)
I0210 16:46:23.539541  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-23: (2.134634ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34424]
I0210 16:46:23.540586  123305 wrap.go:47] PUT /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-23/status: (3.439867ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34422]
I0210 16:46:23.540898  123305 wrap.go:47] PATCH /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events/ppod-23.15820e8462a5e01b: (3.164128ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34426]
I0210 16:46:23.542307  123305 wrap.go:47] DELETE /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-13: (5.911726ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34420]
I0210 16:46:23.542961  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-23: (1.642425ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34422]
I0210 16:46:23.543349  123305 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0210 16:46:23.543508  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-22
I0210 16:46:23.543521  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-22
I0210 16:46:23.543890  123305 factory.go:647] Unable to schedule preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-22: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0210 16:46:23.543941  123305 factory.go:742] Updating pod condition for preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-22 to (PodScheduled==False, Reason=Unschedulable)
I0210 16:46:23.546247  123305 wrap.go:47] PUT /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-22/status: (2.057249ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34424]
I0210 16:46:23.546349  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-22: (2.105765ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34422]
I0210 16:46:23.548305  123305 wrap.go:47] DELETE /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-14: (5.186728ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34420]
I0210 16:46:23.548843  123305 wrap.go:47] PATCH /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events/ppod-22.15820e8462f93918: (3.791199ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34428]
I0210 16:46:23.549380  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-22: (2.456635ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34424]
I0210 16:46:23.549645  123305 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0210 16:46:23.549820  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-21
I0210 16:46:23.549864  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-21
I0210 16:46:23.549994  123305 factory.go:647] Unable to schedule preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-21: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0210 16:46:23.550039  123305 factory.go:742] Updating pod condition for preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-21 to (PodScheduled==False, Reason=Unschedulable)
I0210 16:46:23.551336  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-21: (993.105µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34428]
I0210 16:46:23.553041  123305 wrap.go:47] PUT /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-21/status: (2.546627ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34422]
I0210 16:46:23.553116  123305 wrap.go:47] DELETE /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-15: (4.412676ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34420]
I0210 16:46:23.553608  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events: (1.955953ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34428]
I0210 16:46:23.555925  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-21: (2.26016ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34430]
I0210 16:46:23.556364  123305 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0210 16:46:23.556608  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-27
I0210 16:46:23.556629  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-27
I0210 16:46:23.556750  123305 factory.go:647] Unable to schedule preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-27: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0210 16:46:23.556822  123305 factory.go:742] Updating pod condition for preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-27 to (PodScheduled==False, Reason=Unschedulable)
I0210 16:46:23.558253  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-27: (1.184331ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34430]
I0210 16:46:23.559418  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events: (1.416325ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34432]
I0210 16:46:23.559424  123305 wrap.go:47] DELETE /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-16: (5.85442ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34422]
I0210 16:46:23.559675  123305 wrap.go:47] PUT /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-27/status: (2.548857ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34428]
I0210 16:46:23.562289  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-27: (2.170284ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34428]
I0210 16:46:23.562886  123305 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0210 16:46:23.563020  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-21
I0210 16:46:23.563043  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-21
I0210 16:46:23.563226  123305 factory.go:647] Unable to schedule preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-21: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0210 16:46:23.563292  123305 factory.go:742] Updating pod condition for preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-21 to (PodScheduled==False, Reason=Unschedulable)
I0210 16:46:23.565240  123305 wrap.go:47] DELETE /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-17: (5.524119ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34422]
I0210 16:46:23.565669  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-21: (1.982195ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34428]
I0210 16:46:23.566298  123305 wrap.go:47] PUT /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-21/status: (2.608266ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34430]
I0210 16:46:23.568344  123305 wrap.go:47] PATCH /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events/ppod-21.15820e84644bb7fc: (4.254023ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34434]
I0210 16:46:23.568636  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-21: (1.339878ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34430]
I0210 16:46:23.568931  123305 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0210 16:46:23.569079  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-27
I0210 16:46:23.569094  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-27
I0210 16:46:23.569178  123305 factory.go:647] Unable to schedule preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-27: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0210 16:46:23.569218  123305 factory.go:742] Updating pod condition for preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-27 to (PodScheduled==False, Reason=Unschedulable)
I0210 16:46:23.570197  123305 wrap.go:47] DELETE /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-18: (4.28573ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34422]
I0210 16:46:23.570711  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-27: (1.112595ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34428]
I0210 16:46:23.571406  123305 wrap.go:47] PUT /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-27/status: (1.812522ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34434]
I0210 16:46:23.573481  123305 wrap.go:47] PATCH /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events/ppod-27.15820e8464b32207: (3.127004ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34436]
I0210 16:46:23.573698  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-27: (1.656459ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34434]
I0210 16:46:23.573979  123305 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0210 16:46:23.574147  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-24
I0210 16:46:23.574234  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-24
I0210 16:46:23.574421  123305 factory.go:647] Unable to schedule preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-24: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0210 16:46:23.574504  123305 factory.go:742] Updating pod condition for preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-24 to (PodScheduled==False, Reason=Unschedulable)
I0210 16:46:23.575765  123305 wrap.go:47] DELETE /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-19: (5.109946ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34422]
I0210 16:46:23.576947  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-24: (1.845017ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34428]
I0210 16:46:23.577032  123305 wrap.go:47] PUT /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-24/status: (2.233322ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34436]
I0210 16:46:23.577956  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events: (1.941607ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34422]
I0210 16:46:23.579619  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-24: (1.299313ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34428]
I0210 16:46:23.579931  123305 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0210 16:46:23.580067  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-20
I0210 16:46:23.580109  123305 scheduler.go:449] Skip schedule deleting pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-20
I0210 16:46:23.580940  123305 wrap.go:47] DELETE /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-20: (4.747775ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34438]
I0210 16:46:23.581783  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events: (1.394929ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34422]
I0210 16:46:23.583785  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-21
I0210 16:46:23.583863  123305 scheduler.go:449] Skip schedule deleting pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-21
I0210 16:46:23.585671  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events: (1.507109ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34422]
I0210 16:46:23.586175  123305 wrap.go:47] DELETE /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-21: (4.704416ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34438]
I0210 16:46:23.589072  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-22
I0210 16:46:23.589146  123305 scheduler.go:449] Skip schedule deleting pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-22
I0210 16:46:23.590390  123305 wrap.go:47] DELETE /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-22: (3.782519ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34422]
I0210 16:46:23.590875  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events: (1.425141ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34440]
I0210 16:46:23.594035  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-23
I0210 16:46:23.594071  123305 scheduler.go:449] Skip schedule deleting pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-23
I0210 16:46:23.596470  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events: (2.050525ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34440]
I0210 16:46:23.596714  123305 wrap.go:47] DELETE /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-23: (5.977631ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34422]
I0210 16:46:23.599784  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-24
I0210 16:46:23.599843  123305 scheduler.go:449] Skip schedule deleting pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-24
I0210 16:46:23.601127  123305 wrap.go:47] DELETE /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-24: (4.058026ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34422]
I0210 16:46:23.601555  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events: (1.456411ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34440]
I0210 16:46:23.603876  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-25
I0210 16:46:23.603916  123305 scheduler.go:449] Skip schedule deleting pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-25
I0210 16:46:23.605112  123305 wrap.go:47] DELETE /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-25: (3.574665ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34422]
I0210 16:46:23.606330  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events: (2.130548ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34440]
I0210 16:46:23.608864  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-26
I0210 16:46:23.608906  123305 scheduler.go:449] Skip schedule deleting pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-26
I0210 16:46:23.610533  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events: (1.373464ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34440]
I0210 16:46:23.610582  123305 wrap.go:47] DELETE /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-26: (4.23396ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34422]
I0210 16:46:23.613421  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-27
I0210 16:46:23.613457  123305 scheduler.go:449] Skip schedule deleting pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-27
I0210 16:46:23.614823  123305 wrap.go:47] DELETE /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-27: (3.907085ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34422]
I0210 16:46:23.615999  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events: (2.23993ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34440]
I0210 16:46:23.618570  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-28
I0210 16:46:23.618626  123305 scheduler.go:449] Skip schedule deleting pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-28
I0210 16:46:23.620394  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events: (1.511408ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34440]
I0210 16:46:23.621228  123305 wrap.go:47] DELETE /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-28: (5.476914ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34422]
I0210 16:46:23.627335  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-29
I0210 16:46:23.627433  123305 scheduler.go:449] Skip schedule deleting pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-29
I0210 16:46:23.628911  123305 wrap.go:47] DELETE /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-29: (6.555474ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34422]
I0210 16:46:23.629263  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events: (1.485031ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34440]
I0210 16:46:23.632540  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-30
I0210 16:46:23.632575  123305 scheduler.go:449] Skip schedule deleting pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-30
I0210 16:46:23.634115  123305 wrap.go:47] DELETE /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-30: (4.762728ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34422]
I0210 16:46:23.634549  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events: (1.687146ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34440]
I0210 16:46:23.637790  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-31
I0210 16:46:23.637876  123305 scheduler.go:449] Skip schedule deleting pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-31
I0210 16:46:23.640261  123305 wrap.go:47] DELETE /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-31: (5.731308ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34422]
I0210 16:46:23.640519  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events: (2.327347ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34440]
I0210 16:46:23.643052  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-32
I0210 16:46:23.643089  123305 scheduler.go:449] Skip schedule deleting pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-32
I0210 16:46:23.644676  123305 wrap.go:47] DELETE /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-32: (4.022043ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34422]
I0210 16:46:23.644916  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events: (1.576129ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34440]
I0210 16:46:23.648862  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-33
I0210 16:46:23.648898  123305 scheduler.go:449] Skip schedule deleting pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-33
I0210 16:46:23.650851  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events: (1.588361ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34440]
I0210 16:46:23.651360  123305 wrap.go:47] DELETE /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-33: (6.240877ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34422]
I0210 16:46:23.654459  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-34
I0210 16:46:23.654523  123305 scheduler.go:449] Skip schedule deleting pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-34
I0210 16:46:23.655901  123305 wrap.go:47] DELETE /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-34: (4.099626ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34422]
I0210 16:46:23.656327  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events: (1.575213ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34440]
I0210 16:46:23.659241  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-35
I0210 16:46:23.659308  123305 scheduler.go:449] Skip schedule deleting pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-35
I0210 16:46:23.659843  123305 wrap.go:47] DELETE /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-35: (3.600752ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34422]
I0210 16:46:23.662053  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events: (2.422459ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34440]
I0210 16:46:23.664701  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-36
I0210 16:46:23.664745  123305 scheduler.go:449] Skip schedule deleting pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-36
I0210 16:46:23.666063  123305 wrap.go:47] DELETE /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-36: (4.108743ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34422]
I0210 16:46:23.667410  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events: (2.399437ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34440]
I0210 16:46:23.669647  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-37
I0210 16:46:23.669684  123305 scheduler.go:449] Skip schedule deleting pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-37
I0210 16:46:23.670968  123305 wrap.go:47] DELETE /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-37: (4.226853ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34422]
I0210 16:46:23.671460  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events: (1.52881ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34440]
I0210 16:46:23.673960  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-38
I0210 16:46:23.674004  123305 scheduler.go:449] Skip schedule deleting pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-38
I0210 16:46:23.675409  123305 wrap.go:47] DELETE /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-38: (4.047454ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34422]
I0210 16:46:23.675987  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events: (1.691976ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34440]
I0210 16:46:23.678446  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-39
I0210 16:46:23.678517  123305 scheduler.go:449] Skip schedule deleting pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-39
I0210 16:46:23.679944  123305 wrap.go:47] DELETE /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-39: (4.082036ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34422]
I0210 16:46:23.680299  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events: (1.53297ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34440]
I0210 16:46:23.682965  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-40
I0210 16:46:23.683006  123305 scheduler.go:449] Skip schedule deleting pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-40
I0210 16:46:23.684399  123305 wrap.go:47] DELETE /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-40: (4.04673ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34422]
I0210 16:46:23.684680  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events: (1.422941ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34440]
I0210 16:46:23.688105  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-41
I0210 16:46:23.688215  123305 scheduler.go:449] Skip schedule deleting pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-41
I0210 16:46:23.690294  123305 wrap.go:47] DELETE /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-41: (5.489868ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34422]
I0210 16:46:23.691775  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events: (2.892173ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34440]
I0210 16:46:23.693152  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-42
I0210 16:46:23.693540  123305 scheduler.go:449] Skip schedule deleting pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-42
I0210 16:46:23.694428  123305 wrap.go:47] DELETE /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-42: (3.784057ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34422]
I0210 16:46:23.695236  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events: (1.412228ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34440]
I0210 16:46:23.697507  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-43
I0210 16:46:23.697541  123305 scheduler.go:449] Skip schedule deleting pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-43
I0210 16:46:23.699140  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events: (1.345167ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34440]
I0210 16:46:23.699193  123305 wrap.go:47] DELETE /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-43: (4.093337ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34422]
I0210 16:46:23.701921  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-44
I0210 16:46:23.701954  123305 scheduler.go:449] Skip schedule deleting pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-44
I0210 16:46:23.703362  123305 wrap.go:47] DELETE /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-44: (3.822197ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34422]
I0210 16:46:23.703676  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events: (1.396509ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34440]
I0210 16:46:23.706870  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-45
I0210 16:46:23.707568  123305 scheduler.go:449] Skip schedule deleting pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-45
I0210 16:46:23.708950  123305 wrap.go:47] DELETE /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-45: (5.286153ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34422]
I0210 16:46:23.709265  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events: (1.430273ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34440]
I0210 16:46:23.712131  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-46
I0210 16:46:23.712180  123305 scheduler.go:449] Skip schedule deleting pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-46
I0210 16:46:23.713384  123305 wrap.go:47] DELETE /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-46: (4.108175ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34422]
I0210 16:46:23.713909  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events: (1.416236ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34440]
I0210 16:46:23.716044  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-47
I0210 16:46:23.716081  123305 scheduler.go:449] Skip schedule deleting pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-47
I0210 16:46:23.717261  123305 wrap.go:47] DELETE /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-47: (3.54068ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34422]
I0210 16:46:23.717761  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events: (1.401123ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34440]
I0210 16:46:23.720484  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-48
I0210 16:46:23.720558  123305 scheduler.go:449] Skip schedule deleting pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-48
I0210 16:46:23.721722  123305 wrap.go:47] DELETE /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-48: (4.065273ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34422]
I0210 16:46:23.722390  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events: (1.539216ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34440]
I0210 16:46:23.724876  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-49
I0210 16:46:23.724917  123305 scheduler.go:449] Skip schedule deleting pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-49
I0210 16:46:23.728133  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events: (2.211372ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34440]
I0210 16:46:23.728549  123305 wrap.go:47] DELETE /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-49: (6.424386ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34422]
I0210 16:46:23.755011  123305 wrap.go:47] DELETE /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/rpod-0: (26.101842ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34422]
I0210 16:46:23.762647  123305 wrap.go:47] DELETE /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/rpod-1: (2.217492ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34422]
I0210 16:46:23.767424  123305 wrap.go:47] DELETE /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/preemptor-pod: (4.372161ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34422]
I0210 16:46:23.770205  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-0: (1.063312ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34422]
I0210 16:46:23.772755  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-1: (984.974µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34422]
I0210 16:46:23.775243  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-2: (883.362µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34422]
I0210 16:46:23.777924  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-3: (1.024678ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34422]
I0210 16:46:23.780636  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-4: (1.043524ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34422]
I0210 16:46:23.783337  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-5: (1.104602ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34422]
I0210 16:46:23.786772  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-6: (1.833252ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34422]
I0210 16:46:23.789481  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-7: (1.100786ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34422]
I0210 16:46:23.792107  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-8: (972.309µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34422]
I0210 16:46:23.795032  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-9: (1.254746ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34422]
I0210 16:46:23.797758  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-10: (1.022197ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34422]
I0210 16:46:23.800609  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-11: (1.006562ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34422]
I0210 16:46:23.803478  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-12: (1.215056ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34422]
I0210 16:46:23.806106  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-13: (1.067015ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34422]
I0210 16:46:23.809110  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-14: (1.376782ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34422]
I0210 16:46:23.811742  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-15: (973.45µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34422]
I0210 16:46:23.814486  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-16: (1.102784ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34422]
I0210 16:46:23.817060  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-17: (964.084µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34422]
I0210 16:46:23.819753  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-18: (1.118584ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34422]
I0210 16:46:23.822740  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-19: (1.307225ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34422]
I0210 16:46:23.825318  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-20: (1.035363ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34422]
I0210 16:46:23.828136  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-21: (1.194818ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34422]
I0210 16:46:23.830878  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-22: (1.118887ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34422]
I0210 16:46:23.833802  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-23: (1.307542ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34422]
I0210 16:46:23.837401  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-24: (2.028097ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34422]
I0210 16:46:23.840089  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-25: (1.079551ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34422]
I0210 16:46:23.842673  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-26: (934.408µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34422]
I0210 16:46:23.845410  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-27: (1.05607ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34422]
I0210 16:46:23.848654  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-28: (1.695592ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34422]
I0210 16:46:23.851375  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-29: (1.098361ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34422]
I0210 16:46:23.854008  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-30: (1.028761ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34422]
I0210 16:46:23.856562  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-31: (956.98µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34422]
I0210 16:46:23.859085  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-32: (1.00234ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34422]
I0210 16:46:23.863375  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-33: (1.147954ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34422]
I0210 16:46:23.865801  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-34: (906.466µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34422]
I0210 16:46:23.868383  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-35: (1.046072ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34422]
I0210 16:46:23.871840  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-36: (1.777072ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34422]
I0210 16:46:23.874638  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-37: (1.187099ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34422]
I0210 16:46:23.877639  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-38: (1.430499ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34422]
I0210 16:46:23.880136  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-39: (912.999µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34422]
I0210 16:46:23.882791  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-40: (1.023319ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34422]
I0210 16:46:23.885394  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-41: (963.163µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34422]
I0210 16:46:23.888042  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-42: (1.022033ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34422]
I0210 16:46:23.890571  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-43: (959.957µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34422]
I0210 16:46:23.893045  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-44: (936.366µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34422]
I0210 16:46:23.895644  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-45: (952.603µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34422]
I0210 16:46:23.898309  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-46: (1.003009ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34422]
I0210 16:46:23.900898  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-47: (955.445µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34422]
I0210 16:46:23.903369  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-48: (886.519µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34422]
I0210 16:46:23.905951  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-49: (982.385µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34422]
I0210 16:46:23.908624  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/rpod-0: (1.026034ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34422]
I0210 16:46:23.911128  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/rpod-1: (964.486µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34422]
I0210 16:46:23.913775  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/preemptor-pod: (1.088985ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34422]
I0210 16:46:23.916189  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods: (1.927154ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34422]
I0210 16:46:23.916389  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/rpod-0
I0210 16:46:23.916413  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/rpod-0
I0210 16:46:23.916593  123305 scheduler_binder.go:269] AssumePodVolumes for pod "preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/rpod-0", node "node1"
I0210 16:46:23.916615  123305 scheduler_binder.go:279] AssumePodVolumes for pod "preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/rpod-0", node "node1": all PVCs bound and nothing to do
I0210 16:46:23.916656  123305 factory.go:733] Attempting to bind rpod-0 to node1
I0210 16:46:23.918465  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods: (1.732277ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34422]
I0210 16:46:23.918706  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/rpod-1
I0210 16:46:23.918748  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/rpod-1
I0210 16:46:23.919440  123305 scheduler_binder.go:269] AssumePodVolumes for pod "preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/rpod-1", node "node1"
I0210 16:46:23.919513  123305 scheduler_binder.go:279] AssumePodVolumes for pod "preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/rpod-1", node "node1": all PVCs bound and nothing to do
I0210 16:46:23.919540  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/rpod-0/binding: (2.658297ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34440]
I0210 16:46:23.919564  123305 factory.go:733] Attempting to bind rpod-1 to node1
I0210 16:46:23.919829  123305 scheduler.go:571] pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/rpod-0 is bound successfully on node node1, 1 nodes evaluated, 1 nodes were found feasible
I0210 16:46:23.922096  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/rpod-1/binding: (1.970614ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34440]
I0210 16:46:23.922213  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events: (2.102375ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34422]
I0210 16:46:23.922345  123305 scheduler.go:571] pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/rpod-1 is bound successfully on node node1, 1 nodes evaluated, 1 nodes were found feasible
I0210 16:46:23.924383  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events: (1.550588ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34422]
I0210 16:46:24.022018  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/rpod-0: (2.766473ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34422]
I0210 16:46:24.124843  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/rpod-1: (1.900033ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34422]
I0210 16:46:24.125262  123305 preemption_test.go:561] Creating the preemptor pod...
I0210 16:46:24.127767  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods: (2.212704ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34422]
I0210 16:46:24.127957  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/preemptor-pod
I0210 16:46:24.128024  123305 preemption_test.go:567] Creating additional pods...
I0210 16:46:24.128026  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/preemptor-pod
I0210 16:46:24.128177  123305 factory.go:647] Unable to schedule preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0210 16:46:24.128228  123305 factory.go:742] Updating pod condition for preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0210 16:46:24.130573  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/preemptor-pod: (2.158645ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34440]
I0210 16:46:24.130922  123305 wrap.go:47] PUT /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/preemptor-pod/status: (2.240627ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34456]
I0210 16:46:24.131576  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods: (1.957526ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34422]
I0210 16:46:24.132071  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events: (2.679298ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34458]
I0210 16:46:24.133318  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/preemptor-pod: (1.82797ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34456]
I0210 16:46:24.133593  123305 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0210 16:46:24.133853  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods: (1.520514ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34422]
I0210 16:46:24.135825  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods: (1.532523ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34422]
I0210 16:46:24.136150  123305 wrap.go:47] PUT /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/preemptor-pod/status: (2.198578ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34456]
I0210 16:46:24.137995  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods: (1.610093ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34422]
I0210 16:46:24.140051  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods: (1.654671ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34422]
I0210 16:46:24.140934  123305 wrap.go:47] DELETE /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/rpod-1: (4.274736ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34440]
I0210 16:46:24.141232  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/preemptor-pod
I0210 16:46:24.141252  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/preemptor-pod
I0210 16:46:24.141380  123305 scheduler_binder.go:269] AssumePodVolumes for pod "preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/preemptor-pod", node "node1"
I0210 16:46:24.141399  123305 scheduler_binder.go:279] AssumePodVolumes for pod "preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/preemptor-pod", node "node1": all PVCs bound and nothing to do
I0210 16:46:24.141438  123305 factory.go:733] Attempting to bind preemptor-pod to node1
I0210 16:46:24.141467  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-4
I0210 16:46:24.141537  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-4
I0210 16:46:24.141704  123305 factory.go:647] Unable to schedule preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-4: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0210 16:46:24.141802  123305 factory.go:742] Updating pod condition for preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-4 to (PodScheduled==False, Reason=Unschedulable)
I0210 16:46:24.143336  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/preemptor-pod/binding: (1.41124ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34460]
I0210 16:46:24.143409  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events: (1.910952ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34422]
I0210 16:46:24.143418  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods: (2.064068ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34440]
I0210 16:46:24.143597  123305 scheduler.go:571] pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/preemptor-pod is bound successfully on node node1, 1 nodes evaluated, 1 nodes were found feasible
I0210 16:46:24.144650  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-4: (2.426553ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34462]
I0210 16:46:24.145823  123305 wrap.go:47] PUT /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-4/status: (3.49527ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34464]
I0210 16:46:24.146958  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events: (1.663093ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34422]
I0210 16:46:24.147050  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods: (1.823364ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34460]
I0210 16:46:24.148562  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-4: (2.189249ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34464]
I0210 16:46:24.148855  123305 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0210 16:46:24.149032  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-3
I0210 16:46:24.149049  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-3
I0210 16:46:24.149127  123305 factory.go:647] Unable to schedule preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-3: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0210 16:46:24.149178  123305 factory.go:742] Updating pod condition for preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-3 to (PodScheduled==False, Reason=Unschedulable)
I0210 16:46:24.149728  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events: (1.344304ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34422]
I0210 16:46:24.149830  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods: (2.156739ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34460]
I0210 16:46:24.151734  123305 wrap.go:47] PUT /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-3/status: (2.368883ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34464]
I0210 16:46:24.151797  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-3: (2.129179ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34462]
I0210 16:46:24.152439  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events: (1.752698ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34422]
I0210 16:46:24.153062  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods: (2.040131ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34460]
I0210 16:46:24.166697  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods: (3.581707ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34422]
I0210 16:46:24.169459  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods: (2.29453ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34422]
I0210 16:46:24.172187  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods: (2.239378ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34422]
I0210 16:46:24.176760  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods: (3.587981ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34422]
I0210 16:46:24.179364  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods: (2.112007ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34422]
I0210 16:46:24.183279  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods: (3.428142ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34422]
I0210 16:46:24.186590  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-3: (34.397416ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34462]
I0210 16:46:24.187030  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods: (3.228101ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34422]
I0210 16:46:24.187275  123305 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0210 16:46:24.188513  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-6
I0210 16:46:24.188526  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-6
I0210 16:46:24.188637  123305 factory.go:647] Unable to schedule preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-6: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0210 16:46:24.188693  123305 factory.go:742] Updating pod condition for preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-6 to (PodScheduled==False, Reason=Unschedulable)
I0210 16:46:24.190278  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods: (2.555895ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34462]
I0210 16:46:24.193036  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events: (3.440348ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34468]
I0210 16:46:24.193552  123305 wrap.go:47] PUT /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-6/status: (3.179002ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34464]
I0210 16:46:24.194081  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods: (3.267699ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34462]
I0210 16:46:24.194147  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-6: (3.988097ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34466]
I0210 16:46:24.196075  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods: (1.543044ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34462]
I0210 16:46:24.196352  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-6: (2.319485ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34464]
I0210 16:46:24.197765  123305 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0210 16:46:24.198014  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-14
I0210 16:46:24.198055  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-14
I0210 16:46:24.198175  123305 factory.go:647] Unable to schedule preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-14: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0210 16:46:24.198237  123305 factory.go:742] Updating pod condition for preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-14 to (PodScheduled==False, Reason=Unschedulable)
I0210 16:46:24.201755  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events: (2.548ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34470]
I0210 16:46:24.202817  123305 wrap.go:47] PUT /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-14/status: (3.657756ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34464]
I0210 16:46:24.203263  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods: (5.705245ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34462]
I0210 16:46:24.205691  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-14: (2.347686ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34464]
I0210 16:46:24.205972  123305 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0210 16:46:24.206012  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods: (2.164933ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34462]
I0210 16:46:24.206090  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-13
I0210 16:46:24.206099  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-13
I0210 16:46:24.206189  123305 factory.go:647] Unable to schedule preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-13: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0210 16:46:24.206224  123305 factory.go:742] Updating pod condition for preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-13 to (PodScheduled==False, Reason=Unschedulable)
I0210 16:46:24.208517  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events: (1.427499ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34474]
I0210 16:46:24.209669  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-13: (2.575037ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34472]
I0210 16:46:24.210326  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods: (3.87844ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34470]
I0210 16:46:24.211593  123305 wrap.go:47] PUT /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-13/status: (5.115369ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34464]
I0210 16:46:24.215618  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods: (3.223501ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34472]
I0210 16:46:24.215765  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-13: (2.839587ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34464]
I0210 16:46:24.216117  123305 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0210 16:46:24.216259  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-20
I0210 16:46:24.216270  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-20
I0210 16:46:24.216353  123305 factory.go:647] Unable to schedule preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-20: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0210 16:46:24.216394  123305 factory.go:742] Updating pod condition for preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-20 to (PodScheduled==False, Reason=Unschedulable)
I0210 16:46:24.216788  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-14: (17.67566ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34468]
I0210 16:46:24.218259  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods: (1.709007ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34472]
I0210 16:46:24.218574  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-20: (1.108595ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34468]
I0210 16:46:24.219361  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events: (1.98ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34478]
I0210 16:46:24.220786  123305 wrap.go:47] PUT /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-20/status: (3.642477ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34474]
I0210 16:46:24.222537  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-20: (1.235228ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34472]
I0210 16:46:24.223301  123305 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0210 16:46:24.223465  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-22
I0210 16:46:24.223480  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-22
I0210 16:46:24.223606  123305 factory.go:647] Unable to schedule preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-22: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0210 16:46:24.223645  123305 factory.go:742] Updating pod condition for preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-22 to (PodScheduled==False, Reason=Unschedulable)
I0210 16:46:24.223841  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods: (2.075461ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34476]
I0210 16:46:24.225945  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events: (1.823952ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34482]
I0210 16:46:24.225972  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods: (1.649903ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34480]
I0210 16:46:24.226538  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-22: (2.392165ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34476]
I0210 16:46:24.226558  123305 wrap.go:47] PUT /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-22/status: (2.55969ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34472]
I0210 16:46:24.229285  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-22: (2.265036ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34472]
I0210 16:46:24.230112  123305 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0210 16:46:24.230249  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-20
I0210 16:46:24.230260  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-20
I0210 16:46:24.230333  123305 factory.go:647] Unable to schedule preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-20: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0210 16:46:24.230378  123305 factory.go:742] Updating pod condition for preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-20 to (PodScheduled==False, Reason=Unschedulable)
I0210 16:46:24.230839  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods: (4.436094ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34482]
I0210 16:46:24.233477  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods: (2.139448ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34482]
I0210 16:46:24.233477  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-20: (2.374565ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34480]
I0210 16:46:24.233946  123305 wrap.go:47] PUT /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-20/status: (2.824915ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34472]
I0210 16:46:24.234880  123305 wrap.go:47] PATCH /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events/ppod-20.15820e848c037e25: (3.682476ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34484]
I0210 16:46:24.237758  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods: (3.036761ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34482]
I0210 16:46:24.238944  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-20: (2.654675ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34484]
I0210 16:46:24.239220  123305 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0210 16:46:24.239403  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-26
I0210 16:46:24.239424  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-26
I0210 16:46:24.239628  123305 factory.go:647] Unable to schedule preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-26: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0210 16:46:24.239719  123305 factory.go:742] Updating pod condition for preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-26 to (PodScheduled==False, Reason=Unschedulable)
I0210 16:46:24.240992  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods: (2.457186ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34482]
I0210 16:46:24.242448  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-26: (2.472558ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34484]
I0210 16:46:24.242457  123305 wrap.go:47] PUT /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-26/status: (1.825268ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34480]
I0210 16:46:24.243898  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events: (1.930778ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34482]
I0210 16:46:24.244061  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods: (2.096493ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34486]
I0210 16:46:24.244659  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-26: (1.217297ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34488]
I0210 16:46:24.244891  123305 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0210 16:46:24.245064  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-28
I0210 16:46:24.245257  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-28
I0210 16:46:24.245384  123305 factory.go:647] Unable to schedule preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-28: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0210 16:46:24.245448  123305 factory.go:742] Updating pod condition for preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-28 to (PodScheduled==False, Reason=Unschedulable)
I0210 16:46:24.246065  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods: (1.501467ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34482]
I0210 16:46:24.248234  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods: (1.712479ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34482]
I0210 16:46:24.248544  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-28: (1.583912ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34484]
I0210 16:46:24.248999  123305 wrap.go:47] PUT /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-28/status: (3.174381ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34488]
I0210 16:46:24.250858  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events: (4.505642ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34490]
I0210 16:46:24.251134  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods: (2.45966ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34482]
I0210 16:46:24.251681  123305 cacher.go:633] cacher (*core.Pod): 1 objects queued in incoming channel.
I0210 16:46:24.253732  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods: (2.082799ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34488]
I0210 16:46:24.256406  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-28: (3.501609ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34484]
I0210 16:46:24.256841  123305 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0210 16:46:24.257319  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-26
I0210 16:46:24.257388  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-26
I0210 16:46:24.257644  123305 factory.go:647] Unable to schedule preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-26: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0210 16:46:24.257720  123305 factory.go:742] Updating pod condition for preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-26 to (PodScheduled==False, Reason=Unschedulable)
I0210 16:46:24.258692  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods: (1.951541ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34488]
I0210 16:46:24.262184  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-26: (4.016693ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34484]
I0210 16:46:24.262528  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods: (2.47334ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34494]
I0210 16:46:24.263070  123305 wrap.go:47] PATCH /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events/ppod-26.15820e848d675b81: (3.229532ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34488]
I0210 16:46:24.263575  123305 wrap.go:47] PUT /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-26/status: (3.572451ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34492]
I0210 16:46:24.265658  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods: (2.515326ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34494]
I0210 16:46:24.265994  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-26: (1.592856ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34488]
I0210 16:46:24.266329  123305 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0210 16:46:24.266461  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-34
I0210 16:46:24.266476  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-34
I0210 16:46:24.266574  123305 factory.go:647] Unable to schedule preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-34: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0210 16:46:24.266630  123305 factory.go:742] Updating pod condition for preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-34 to (PodScheduled==False, Reason=Unschedulable)
I0210 16:46:24.268254  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods: (2.005276ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34494]
I0210 16:46:24.269002  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-34: (1.073375ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34496]
I0210 16:46:24.270812  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods: (1.992073ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34494]
I0210 16:46:24.270964  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events: (3.227234ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34498]
I0210 16:46:24.273647  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods: (1.937307ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34494]
I0210 16:46:24.273960  123305 wrap.go:47] PUT /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-34/status: (5.368272ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34484]
I0210 16:46:24.276333  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods: (2.23055ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34494]
I0210 16:46:24.276676  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-34: (2.265305ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34484]
I0210 16:46:24.277332  123305 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0210 16:46:24.277542  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-37
I0210 16:46:24.277597  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-37
I0210 16:46:24.277763  123305 factory.go:647] Unable to schedule preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-37: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0210 16:46:24.277845  123305 factory.go:742] Updating pod condition for preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-37 to (PodScheduled==False, Reason=Unschedulable)
I0210 16:46:24.279338  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods: (1.768906ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34494]
I0210 16:46:24.281042  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-37: (2.392944ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34500]
I0210 16:46:24.281433  123305 wrap.go:47] PUT /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-37/status: (3.307574ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34496]
I0210 16:46:24.283351  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events: (3.772608ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34502]
I0210 16:46:24.283876  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods: (4.022983ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34494]
I0210 16:46:24.285241  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-37: (1.696821ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34496]
I0210 16:46:24.285519  123305 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0210 16:46:24.285916  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods: (1.56239ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34502]
I0210 16:46:24.286118  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-40
I0210 16:46:24.286132  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-40
I0210 16:46:24.286272  123305 factory.go:647] Unable to schedule preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-40: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0210 16:46:24.286312  123305 factory.go:742] Updating pod condition for preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-40 to (PodScheduled==False, Reason=Unschedulable)
I0210 16:46:24.288827  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods: (2.269035ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34496]
I0210 16:46:24.289307  123305 wrap.go:47] PUT /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-40/status: (2.534323ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34504]
I0210 16:46:24.289633  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-40: (2.802772ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34500]
I0210 16:46:24.290664  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events: (3.79502ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34506]
I0210 16:46:24.291337  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-40: (1.255264ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34500]
I0210 16:46:24.291392  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods: (1.591736ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34496]
I0210 16:46:24.292088  123305 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0210 16:46:24.292328  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-37
I0210 16:46:24.292365  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-37
I0210 16:46:24.292507  123305 factory.go:647] Unable to schedule preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-37: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0210 16:46:24.294019  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-37: (1.248797ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34504]
I0210 16:46:24.294525  123305 factory.go:742] Updating pod condition for preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-37 to (PodScheduled==False, Reason=Unschedulable)
I0210 16:46:24.294747  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods: (2.69146ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34506]
I0210 16:46:24.298187  123305 wrap.go:47] PUT /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-37/status: (2.927992ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34510]
I0210 16:46:24.298740  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods: (3.428168ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34506]
I0210 16:46:24.298977  123305 wrap.go:47] PATCH /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events/ppod-37.15820e848fad01b0: (5.478825ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34508]
I0210 16:46:24.299740  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-37: (1.071389ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34510]
I0210 16:46:24.299986  123305 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0210 16:46:24.300229  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-46
I0210 16:46:24.300287  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-46
I0210 16:46:24.300482  123305 factory.go:647] Unable to schedule preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-46: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0210 16:46:24.300549  123305 factory.go:742] Updating pod condition for preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-46 to (PodScheduled==False, Reason=Unschedulable)
I0210 16:46:24.301088  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods: (1.767083ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34506]
I0210 16:46:24.303097  123305 wrap.go:47] PUT /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-46/status: (1.837088ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34510]
I0210 16:46:24.303370  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-46: (2.225364ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34504]
I0210 16:46:24.304144  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events: (2.585531ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34506]
I0210 16:46:24.304675  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-46: (1.230258ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34510]
I0210 16:46:24.304987  123305 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0210 16:46:24.305198  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-48
I0210 16:46:24.305213  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-48
I0210 16:46:24.305290  123305 factory.go:647] Unable to schedule preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-48: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0210 16:46:24.305334  123305 factory.go:742] Updating pod condition for preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-48 to (PodScheduled==False, Reason=Unschedulable)
I0210 16:46:24.307251  123305 wrap.go:47] PUT /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-48/status: (1.692849ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34504]
I0210 16:46:24.307325  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-48: (1.358995ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34512]
I0210 16:46:24.308031  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events: (1.824647ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34514]
I0210 16:46:24.308773  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-48: (1.01279ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34512]
I0210 16:46:24.309066  123305 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0210 16:46:24.309230  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-46
I0210 16:46:24.309245  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-46
I0210 16:46:24.309342  123305 factory.go:647] Unable to schedule preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-46: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0210 16:46:24.309959  123305 factory.go:742] Updating pod condition for preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-46 to (PodScheduled==False, Reason=Unschedulable)
I0210 16:46:24.311427  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-46: (1.68622ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34514]
I0210 16:46:24.311897  123305 wrap.go:47] PUT /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-46/status: (1.633469ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34504]
I0210 16:46:24.312744  123305 wrap.go:47] PATCH /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events/ppod-46.15820e849107930b: (2.078507ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34516]
I0210 16:46:24.313578  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-46: (1.222729ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34504]
I0210 16:46:24.313845  123305 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0210 16:46:24.313987  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-48
I0210 16:46:24.314003  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-48
I0210 16:46:24.314069  123305 factory.go:647] Unable to schedule preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-48: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0210 16:46:24.314114  123305 factory.go:742] Updating pod condition for preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-48 to (PodScheduled==False, Reason=Unschedulable)
I0210 16:46:24.317042  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-48: (1.819942ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34514]
I0210 16:46:24.317681  123305 wrap.go:47] PUT /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-48/status: (3.310901ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34516]
I0210 16:46:24.319188  123305 wrap.go:47] PATCH /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events/ppod-48.15820e8491509f0b: (2.2559ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34518]
I0210 16:46:24.319796  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-48: (1.497119ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34516]
I0210 16:46:24.320075  123305 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0210 16:46:24.320261  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-49
I0210 16:46:24.320280  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-49
I0210 16:46:24.320358  123305 factory.go:647] Unable to schedule preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-49: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0210 16:46:24.320406  123305 factory.go:742] Updating pod condition for preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-49 to (PodScheduled==False, Reason=Unschedulable)
I0210 16:46:24.322418  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events: (1.762307ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34514]
I0210 16:46:24.322992  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-49: (1.875344ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34518]
I0210 16:46:24.323317  123305 wrap.go:47] PUT /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-49/status: (1.820858ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34520]
I0210 16:46:24.338948  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-49: (15.192428ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34518]
I0210 16:46:24.339434  123305 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0210 16:46:24.339805  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-47
I0210 16:46:24.339829  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-47
I0210 16:46:24.339976  123305 factory.go:647] Unable to schedule preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-47: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0210 16:46:24.340032  123305 factory.go:742] Updating pod condition for preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-47 to (PodScheduled==False, Reason=Unschedulable)
I0210 16:46:24.342255  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-47: (1.589728ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34514]
I0210 16:46:24.342485  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events: (1.619981ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34522]
I0210 16:46:24.343954  123305 wrap.go:47] PUT /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-47/status: (3.253409ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34518]
I0210 16:46:24.346259  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-47: (1.188835ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34522]
I0210 16:46:24.346609  123305 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0210 16:46:24.346793  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-49
I0210 16:46:24.346811  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-49
I0210 16:46:24.346917  123305 factory.go:647] Unable to schedule preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-49: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0210 16:46:24.346980  123305 factory.go:742] Updating pod condition for preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-49 to (PodScheduled==False, Reason=Unschedulable)
I0210 16:46:24.348407  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-49: (1.175028ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34514]
I0210 16:46:24.349109  123305 wrap.go:47] PUT /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-49/status: (1.887473ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34522]
I0210 16:46:24.351271  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-49: (1.733231ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34522]
I0210 16:46:24.351418  123305 wrap.go:47] PATCH /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events/ppod-49.15820e8492369a4d: (3.304316ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34524]
I0210 16:46:24.351547  123305 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0210 16:46:24.351733  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-47
I0210 16:46:24.351754  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-47
I0210 16:46:24.351871  123305 factory.go:647] Unable to schedule preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-47: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0210 16:46:24.351925  123305 factory.go:742] Updating pod condition for preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-47 to (PodScheduled==False, Reason=Unschedulable)
I0210 16:46:24.353643  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-47: (1.519661ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34522]
I0210 16:46:24.354291  123305 wrap.go:47] PUT /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-47/status: (2.1339ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34514]
I0210 16:46:24.355346  123305 wrap.go:47] PATCH /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events/ppod-47.15820e84936209e1: (2.447113ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34526]
I0210 16:46:24.355850  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-47: (1.119827ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34514]
I0210 16:46:24.356204  123305 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0210 16:46:24.356421  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-45
I0210 16:46:24.356440  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-45
I0210 16:46:24.356607  123305 factory.go:647] Unable to schedule preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-45: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0210 16:46:24.356659  123305 factory.go:742] Updating pod condition for preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-45 to (PodScheduled==False, Reason=Unschedulable)
I0210 16:46:24.359317  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-45: (1.913491ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34522]
I0210 16:46:24.359481  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events: (2.024761ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34528]
I0210 16:46:24.359740  123305 wrap.go:47] PUT /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-45/status: (2.811444ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34526]
I0210 16:46:24.362002  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-45: (1.538904ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34528]
I0210 16:46:24.362345  123305 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0210 16:46:24.362605  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-44
I0210 16:46:24.362629  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-44
I0210 16:46:24.362738  123305 factory.go:647] Unable to schedule preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-44: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0210 16:46:24.362801  123305 factory.go:742] Updating pod condition for preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-44 to (PodScheduled==False, Reason=Unschedulable)
I0210 16:46:24.364650  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-44: (1.498139ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34522]
I0210 16:46:24.365954  123305 wrap.go:47] PUT /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-44/status: (2.888953ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34528]
I0210 16:46:24.367576  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events: (2.612366ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34530]
I0210 16:46:24.367714  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-44: (1.425303ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34528]
I0210 16:46:24.368525  123305 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0210 16:46:24.368672  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-45
I0210 16:46:24.368716  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-45
I0210 16:46:24.368858  123305 factory.go:647] Unable to schedule preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-45: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0210 16:46:24.368916  123305 factory.go:742] Updating pod condition for preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-45 to (PodScheduled==False, Reason=Unschedulable)
I0210 16:46:24.371338  123305 wrap.go:47] PUT /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-45/status: (2.150396ms) 409 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34530]
I0210 16:46:24.371999  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-45: (1.515324ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34522]
I0210 16:46:24.373676  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-45: (1.755947ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34530]
I0210 16:46:24.373849  123305 wrap.go:47] PATCH /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events/ppod-45.15820e84945fc2c7: (3.226955ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34532]
I0210 16:46:24.374064  123305 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0210 16:46:24.374265  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-44
I0210 16:46:24.374311  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-44
I0210 16:46:24.374450  123305 factory.go:647] Unable to schedule preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-44: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0210 16:46:24.375201  123305 factory.go:742] Updating pod condition for preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-44 to (PodScheduled==False, Reason=Unschedulable)
I0210 16:46:24.377583  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-44: (2.016228ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34530]
I0210 16:46:24.378532  123305 wrap.go:47] PUT /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-44/status: (2.86549ms) 409 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34522]
I0210 16:46:24.380208  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-44: (1.307059ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34522]
I0210 16:46:24.381730  123305 wrap.go:47] PATCH /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events/ppod-44.15820e8494bd6200: (5.436056ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34534]
I0210 16:46:24.382008  123305 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0210 16:46:24.382371  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-43
I0210 16:46:24.382398  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-43
I0210 16:46:24.382543  123305 factory.go:647] Unable to schedule preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-43: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0210 16:46:24.382608  123305 factory.go:742] Updating pod condition for preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-43 to (PodScheduled==False, Reason=Unschedulable)
I0210 16:46:24.384327  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-43: (1.430373ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34522]
I0210 16:46:24.386314  123305 wrap.go:47] PUT /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-43/status: (3.433269ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34530]
I0210 16:46:24.389001  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-43: (2.095436ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34530]
I0210 16:46:24.389224  123305 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0210 16:46:24.389382  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-40
I0210 16:46:24.389399  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-40
I0210 16:46:24.389414  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events: (1.980997ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34522]
I0210 16:46:24.389569  123305 factory.go:647] Unable to schedule preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-40: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0210 16:46:24.389643  123305 factory.go:742] Updating pod condition for preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-40 to (PodScheduled==False, Reason=Unschedulable)
I0210 16:46:24.391720  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-40: (1.265677ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34522]
I0210 16:46:24.392129  123305 wrap.go:47] PUT /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-40/status: (2.246365ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34530]
I0210 16:46:24.394058  123305 wrap.go:47] PATCH /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events/ppod-40.15820e84902e5d9a: (3.594946ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34536]
I0210 16:46:24.394696  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-40: (1.312844ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34530]
I0210 16:46:24.394919  123305 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0210 16:46:24.395064  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-43
I0210 16:46:24.395102  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-43
I0210 16:46:24.395237  123305 factory.go:647] Unable to schedule preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-43: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0210 16:46:24.395281  123305 factory.go:742] Updating pod condition for preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-43 to (PodScheduled==False, Reason=Unschedulable)
I0210 16:46:24.397258  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-43: (1.748185ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34536]
I0210 16:46:24.397955  123305 wrap.go:47] PUT /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-43/status: (2.434648ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34522]
I0210 16:46:24.399099  123305 wrap.go:47] PATCH /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events/ppod-43.15820e8495eb7ec2: (3.07262ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34538]
I0210 16:46:24.399447  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-43: (1.068948ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34522]
I0210 16:46:24.399715  123305 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0210 16:46:24.399864  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-42
I0210 16:46:24.399880  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-42
I0210 16:46:24.399960  123305 factory.go:647] Unable to schedule preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-42: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0210 16:46:24.400020  123305 factory.go:742] Updating pod condition for preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-42 to (PodScheduled==False, Reason=Unschedulable)
I0210 16:46:24.401540  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-42: (985.141µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34536]
I0210 16:46:24.402372  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events: (1.704559ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34540]
I0210 16:46:24.402613  123305 wrap.go:47] PUT /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-42/status: (2.036417ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34538]
I0210 16:46:24.403373  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/preemptor-pod: (1.11342ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34536]
I0210 16:46:24.404078  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-42: (1.091295ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34538]
I0210 16:46:24.404339  123305 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0210 16:46:24.404480  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-41
I0210 16:46:24.404515  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-41
I0210 16:46:24.404611  123305 factory.go:647] Unable to schedule preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-41: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0210 16:46:24.404659  123305 factory.go:742] Updating pod condition for preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-41 to (PodScheduled==False, Reason=Unschedulable)
I0210 16:46:24.405688  123305 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0210 16:46:24.405989  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-41: (1.076286ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34540]
I0210 16:46:24.406631  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events: (1.353374ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34542]
I0210 16:46:24.406815  123305 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0210 16:46:24.407485  123305 wrap.go:47] PUT /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-41/status: (2.539268ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34536]
I0210 16:46:24.409048  123305 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0210 16:46:24.409068  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-41: (1.186654ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34542]
I0210 16:46:24.409356  123305 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0210 16:46:24.409521  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-42
I0210 16:46:24.409536  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-42
I0210 16:46:24.409646  123305 factory.go:647] Unable to schedule preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-42: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0210 16:46:24.409699  123305 factory.go:742] Updating pod condition for preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-42 to (PodScheduled==False, Reason=Unschedulable)
I0210 16:46:24.411567  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-42: (1.630709ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34540]
I0210 16:46:24.412090  123305 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0210 16:46:24.412327  123305 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0210 16:46:24.413025  123305 wrap.go:47] PUT /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-42/status: (3.06317ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34542]
I0210 16:46:24.413523  123305 wrap.go:47] PATCH /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events/ppod-42.15820e8496f53bdf: (2.466336ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34544]
I0210 16:46:24.414564  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-42: (1.130281ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34542]
I0210 16:46:24.414808  123305 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0210 16:46:24.414973  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-41
I0210 16:46:24.414994  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-41
I0210 16:46:24.415090  123305 factory.go:647] Unable to schedule preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-41: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0210 16:46:24.415145  123305 factory.go:742] Updating pod condition for preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-41 to (PodScheduled==False, Reason=Unschedulable)
I0210 16:46:24.416655  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-41: (1.262883ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34540]
I0210 16:46:24.417087  123305 wrap.go:47] PUT /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-41/status: (1.641498ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34544]
I0210 16:46:24.418679  123305 wrap.go:47] PATCH /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events/ppod-41.15820e84973c3092: (2.617435ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34546]
I0210 16:46:24.418731  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-41: (1.258905ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34544]
I0210 16:46:24.419013  123305 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0210 16:46:24.420236  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-39
I0210 16:46:24.420303  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-39
I0210 16:46:24.420408  123305 factory.go:647] Unable to schedule preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-39: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0210 16:46:24.420462  123305 factory.go:742] Updating pod condition for preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-39 to (PodScheduled==False, Reason=Unschedulable)
I0210 16:46:24.422037  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-39: (1.235707ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34540]
I0210 16:46:24.422635  123305 wrap.go:47] PUT /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-39/status: (1.853617ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34546]
I0210 16:46:24.422929  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events: (1.853349ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34548]
I0210 16:46:24.424052  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-39: (1.115616ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34546]
I0210 16:46:24.424371  123305 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0210 16:46:24.424568  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-38
I0210 16:46:24.424607  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-38
I0210 16:46:24.424759  123305 factory.go:647] Unable to schedule preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-38: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0210 16:46:24.424820  123305 factory.go:742] Updating pod condition for preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-38 to (PodScheduled==False, Reason=Unschedulable)
I0210 16:46:24.427562  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-38: (1.118687ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34550]
I0210 16:46:24.429073  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events: (2.645317ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34540]
I0210 16:46:24.432288  123305 wrap.go:47] PUT /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-38/status: (7.182855ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34548]
I0210 16:46:24.434777  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-38: (1.988153ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34540]
I0210 16:46:24.436685  123305 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0210 16:46:24.436938  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-39
I0210 16:46:24.436956  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-39
I0210 16:46:24.437032  123305 factory.go:647] Unable to schedule preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-39: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0210 16:46:24.437227  123305 factory.go:742] Updating pod condition for preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-39 to (PodScheduled==False, Reason=Unschedulable)
I0210 16:46:24.439699  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-39: (1.368273ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34540]
I0210 16:46:24.443300  123305 wrap.go:47] PATCH /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events/ppod-39.15820e84982d4fab: (4.643909ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34554]
I0210 16:46:24.444425  123305 wrap.go:47] PUT /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-39/status: (6.552845ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34550]
I0210 16:46:24.446737  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-39: (1.386074ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34554]
I0210 16:46:24.447722  123305 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0210 16:46:24.447868  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-38
I0210 16:46:24.447887  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-38
I0210 16:46:24.448005  123305 factory.go:647] Unable to schedule preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-38: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0210 16:46:24.448086  123305 factory.go:742] Updating pod condition for preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-38 to (PodScheduled==False, Reason=Unschedulable)
I0210 16:46:24.453296  123305 wrap.go:47] PATCH /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events/ppod-38.15820e84986fd1b1: (3.560327ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34558]
I0210 16:46:24.453336  123305 wrap.go:47] PUT /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-38/status: (4.399344ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34554]
I0210 16:46:24.453612  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-38: (4.259356ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34540]
I0210 16:46:24.455114  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-38: (1.273672ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34554]
I0210 16:46:24.455402  123305 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0210 16:46:24.455599  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-34
I0210 16:46:24.455617  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-34
I0210 16:46:24.455739  123305 factory.go:647] Unable to schedule preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-34: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0210 16:46:24.455801  123305 factory.go:742] Updating pod condition for preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-34 to (PodScheduled==False, Reason=Unschedulable)
I0210 16:46:24.457404  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-34: (1.292894ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34558]
I0210 16:46:24.457904  123305 wrap.go:47] PUT /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-34/status: (1.867737ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34540]
I0210 16:46:24.459530  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-34: (1.289954ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34540]
I0210 16:46:24.459562  123305 wrap.go:47] PATCH /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events/ppod-34.15820e848f01d745: (2.585145ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34560]
I0210 16:46:24.459794  123305 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0210 16:46:24.459987  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-36
I0210 16:46:24.460006  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-36
I0210 16:46:24.460109  123305 factory.go:647] Unable to schedule preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-36: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0210 16:46:24.460159  123305 factory.go:742] Updating pod condition for preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-36 to (PodScheduled==False, Reason=Unschedulable)
I0210 16:46:24.462457  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-36: (2.030061ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34558]
I0210 16:46:24.463529  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events: (2.786334ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34562]
I0210 16:46:24.463781  123305 wrap.go:47] PUT /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-36/status: (3.307663ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34540]
I0210 16:46:24.465721  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-36: (1.231919ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34562]
I0210 16:46:24.465991  123305 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0210 16:46:24.466178  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-35
I0210 16:46:24.466194  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-35
I0210 16:46:24.466306  123305 factory.go:647] Unable to schedule preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-35: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0210 16:46:24.466361  123305 factory.go:742] Updating pod condition for preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-35 to (PodScheduled==False, Reason=Unschedulable)
I0210 16:46:24.467857  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-35: (1.235092ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34562]
I0210 16:46:24.468659  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events: (1.709357ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34564]
I0210 16:46:24.469004  123305 wrap.go:47] PUT /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-35/status: (2.335166ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34558]
I0210 16:46:24.471236  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-35: (1.184785ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34564]
I0210 16:46:24.471525  123305 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0210 16:46:24.471684  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-36
I0210 16:46:24.471701  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-36
I0210 16:46:24.471818  123305 factory.go:647] Unable to schedule preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-36: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0210 16:46:24.471862  123305 factory.go:742] Updating pod condition for preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-36 to (PodScheduled==False, Reason=Unschedulable)
I0210 16:46:24.474066  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-36: (1.484627ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34562]
I0210 16:46:24.474298  123305 wrap.go:47] PUT /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-36/status: (2.1867ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34564]
I0210 16:46:24.474936  123305 wrap.go:47] PATCH /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events/ppod-36.15820e849a8b059a: (2.178996ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34566]
I0210 16:46:24.475822  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-36: (1.13193ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34564]
I0210 16:46:24.476118  123305 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0210 16:46:24.476331  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-35
I0210 16:46:24.476353  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-35
I0210 16:46:24.476475  123305 factory.go:647] Unable to schedule preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-35: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0210 16:46:24.476552  123305 factory.go:742] Updating pod condition for preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-35 to (PodScheduled==False, Reason=Unschedulable)
I0210 16:46:24.478291  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-35: (1.395192ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34562]
I0210 16:46:24.478669  123305 wrap.go:47] PUT /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-35/status: (1.846489ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34566]
I0210 16:46:24.479465  123305 wrap.go:47] PATCH /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events/ppod-35.15820e849ae99773: (2.174335ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34568]
I0210 16:46:24.480365  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-35: (1.260326ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34566]
I0210 16:46:24.480721  123305 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0210 16:46:24.480881  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-33
I0210 16:46:24.480898  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-33
I0210 16:46:24.480988  123305 factory.go:647] Unable to schedule preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-33: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0210 16:46:24.481037  123305 factory.go:742] Updating pod condition for preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-33 to (PodScheduled==False, Reason=Unschedulable)
I0210 16:46:24.482472  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-33: (952.086µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34568]
I0210 16:46:24.483437  123305 wrap.go:47] PUT /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-33/status: (1.969964ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34562]
I0210 16:46:24.484276  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events: (1.52056ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34568]
I0210 16:46:24.485219  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-33: (1.011741ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34562]
I0210 16:46:24.485548  123305 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0210 16:46:24.485717  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-32
I0210 16:46:24.485730  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-32
I0210 16:46:24.485820  123305 factory.go:647] Unable to schedule preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-32: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0210 16:46:24.485867  123305 factory.go:742] Updating pod condition for preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-32 to (PodScheduled==False, Reason=Unschedulable)
I0210 16:46:24.487564  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-32: (1.379212ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34570]
I0210 16:46:24.489022  123305 wrap.go:47] PUT /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-32/status: (2.914309ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34568]
I0210 16:46:24.489372  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events: (2.973739ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34572]
I0210 16:46:24.490645  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-32: (1.109199ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34568]
I0210 16:46:24.490947  123305 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0210 16:46:24.491123  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-33
I0210 16:46:24.491142  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-33
I0210 16:46:24.491269  123305 factory.go:647] Unable to schedule preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-33: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0210 16:46:24.491350  123305 factory.go:742] Updating pod condition for preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-33 to (PodScheduled==False, Reason=Unschedulable)
I0210 16:46:24.493416  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-33: (1.745017ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34572]
I0210 16:46:24.496004  123305 wrap.go:47] PATCH /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events/ppod-33.15820e849bc98b86: (3.522182ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34574]
I0210 16:46:24.496010  123305 wrap.go:47] PUT /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-33/status: (4.30186ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34570]
I0210 16:46:24.498188  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-33: (1.72238ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34574]
I0210 16:46:24.498416  123305 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0210 16:46:24.498589  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-32
I0210 16:46:24.498607  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-32
I0210 16:46:24.498699  123305 factory.go:647] Unable to schedule preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-32: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0210 16:46:24.498745  123305 factory.go:742] Updating pod condition for preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-32 to (PodScheduled==False, Reason=Unschedulable)
I0210 16:46:24.500126  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-32: (1.147085ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34574]
I0210 16:46:24.501813  123305 wrap.go:47] PATCH /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events/ppod-32.15820e849c13554d: (2.466099ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34576]
I0210 16:46:24.501896  123305 wrap.go:47] PUT /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-32/status: (2.897458ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34572]
I0210 16:46:24.503434  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-32: (1.092801ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34576]
I0210 16:46:24.503721  123305 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0210 16:46:24.503892  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-31
I0210 16:46:24.503910  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-31
I0210 16:46:24.504568  123305 factory.go:647] Unable to schedule preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-31: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0210 16:46:24.504622  123305 factory.go:742] Updating pod condition for preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-31 to (PodScheduled==False, Reason=Unschedulable)
I0210 16:46:24.506192  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-31: (1.377624ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34574]
I0210 16:46:24.506563  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/preemptor-pod: (1.899231ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34576]
I0210 16:46:24.507379  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events: (2.185093ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34578]
I0210 16:46:24.507439  123305 wrap.go:47] PUT /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-31/status: (2.319436ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34580]
I0210 16:46:24.508007  123305 preemption_test.go:583] Check unschedulable pods still exists and were never scheduled...
I0210 16:46:24.508995  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-31: (1.191765ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34576]
I0210 16:46:24.509249  123305 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0210 16:46:24.509261  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-0: (1.059824ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34574]
I0210 16:46:24.509417  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-30
I0210 16:46:24.509483  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-30
I0210 16:46:24.509652  123305 factory.go:647] Unable to schedule preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-30: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0210 16:46:24.509695  123305 factory.go:742] Updating pod condition for preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-30 to (PodScheduled==False, Reason=Unschedulable)
I0210 16:46:24.510685  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-1: (1.06182ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34574]
I0210 16:46:24.511119  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-30: (1.239555ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34576]
I0210 16:46:24.511800  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events: (1.590937ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34584]
I0210 16:46:24.512214  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-2: (1.166829ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34574]
I0210 16:46:24.512590  123305 wrap.go:47] PUT /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-30/status: (2.485886ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34582]
I0210 16:46:24.513939  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-30: (1.0132ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34582]
I0210 16:46:24.513953  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-3: (1.246465ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34584]
I0210 16:46:24.514241  123305 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0210 16:46:24.514399  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-31
I0210 16:46:24.514414  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-31
I0210 16:46:24.514514  123305 factory.go:647] Unable to schedule preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-31: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0210 16:46:24.514579  123305 factory.go:742] Updating pod condition for preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-31 to (PodScheduled==False, Reason=Unschedulable)
I0210 16:46:24.515431  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-4: (1.07242ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34584]
I0210 16:46:24.516425  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-31: (1.624067ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34576]
I0210 16:46:24.517248  123305 wrap.go:47] PUT /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-31/status: (2.191946ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34586]
I0210 16:46:24.517837  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-5: (1.231757ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34584]
I0210 16:46:24.519399  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-31: (1.632776ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34586]
I0210 16:46:24.519681  123305 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0210 16:46:24.519921  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-6: (1.203083ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34584]
I0210 16:46:24.520024  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-30
I0210 16:46:24.520068  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-30
I0210 16:46:24.520226  123305 factory.go:647] Unable to schedule preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-30: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0210 16:46:24.520310  123305 factory.go:742] Updating pod condition for preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-30 to (PodScheduled==False, Reason=Unschedulable)
I0210 16:46:24.520581  123305 wrap.go:47] PATCH /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events/ppod-31.15820e849d317d47: (4.759834ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34588]
I0210 16:46:24.522229  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-7: (1.684472ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34586]
I0210 16:46:24.523203  123305 wrap.go:47] PUT /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-30/status: (2.510498ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34576]
I0210 16:46:24.523622  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-30: (1.210135ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34586]
I0210 16:46:24.524032  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-8: (1.052345ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34590]
I0210 16:46:24.524395  123305 wrap.go:47] PATCH /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events/ppod-30.15820e849d7eecdb: (3.142995ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34588]
I0210 16:46:24.525149  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-30: (1.015226ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34576]
I0210 16:46:24.525421  123305 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0210 16:46:24.525451  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-9: (1.021471ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34590]
I0210 16:46:24.525603  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-29
I0210 16:46:24.525618  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-29
I0210 16:46:24.525703  123305 factory.go:647] Unable to schedule preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-29: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0210 16:46:24.525757  123305 factory.go:742] Updating pod condition for preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-29 to (PodScheduled==False, Reason=Unschedulable)
I0210 16:46:24.527273  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-29: (1.336718ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34592]
I0210 16:46:24.527366  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-10: (1.423926ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34588]
I0210 16:46:24.528098  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events: (1.894093ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34596]
I0210 16:46:24.528384  123305 wrap.go:47] PUT /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-29/status: (2.27134ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34594]
I0210 16:46:24.529427  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-11: (1.044493ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34588]
I0210 16:46:24.530110  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-29: (1.256304ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34594]
I0210 16:46:24.530642  123305 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0210 16:46:24.530821  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-27
I0210 16:46:24.530838  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-27
I0210 16:46:24.530900  123305 factory.go:647] Unable to schedule preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-27: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0210 16:46:24.530946  123305 factory.go:742] Updating pod condition for preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-27 to (PodScheduled==False, Reason=Unschedulable)
I0210 16:46:24.530973  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-12: (1.137875ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34588]
I0210 16:46:24.532286  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-27: (1.049773ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34588]
I0210 16:46:24.532646  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-13: (1.214198ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34598]
I0210 16:46:24.533553  123305 wrap.go:47] PUT /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-27/status: (2.265319ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34594]
I0210 16:46:24.534158  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events: (2.852502ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34592]
I0210 16:46:24.535059  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-14: (1.419649ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34598]
I0210 16:46:24.535175  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-27: (845.778µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34594]
I0210 16:46:24.535663  123305 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0210 16:46:24.536261  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-29
I0210 16:46:24.536281  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-29
I0210 16:46:24.536393  123305 factory.go:647] Unable to schedule preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-29: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0210 16:46:24.536448  123305 factory.go:742] Updating pod condition for preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-29 to (PodScheduled==False, Reason=Unschedulable)
I0210 16:46:24.537032  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-15: (1.500748ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34592]
I0210 16:46:24.537793  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-29: (1.048632ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34594]
I0210 16:46:24.538389  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-16: (967.694µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34592]
I0210 16:46:24.538683  123305 wrap.go:47] PUT /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-29/status: (1.937769ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34588]
I0210 16:46:24.539935  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-17: (1.082811ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34592]
I0210 16:46:24.540382  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-29: (1.29372ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34588]
I0210 16:46:24.540431  123305 wrap.go:47] PATCH /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events/ppod-29.15820e849e73f50d: (3.23028ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34600]
I0210 16:46:24.540655  123305 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0210 16:46:24.540781  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-27
I0210 16:46:24.540793  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-27
I0210 16:46:24.540886  123305 factory.go:647] Unable to schedule preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-27: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0210 16:46:24.540950  123305 factory.go:742] Updating pod condition for preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-27 to (PodScheduled==False, Reason=Unschedulable)
I0210 16:46:24.541678  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-18: (1.210169ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34592]
I0210 16:46:24.542292  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-27: (1.181465ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34588]
I0210 16:46:24.544047  123305 wrap.go:47] PUT /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-27/status: (2.787267ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34594]
I0210 16:46:24.544414  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-19: (2.209216ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34592]
I0210 16:46:24.545668  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-27: (1.12954ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34594]
I0210 16:46:24.545917  123305 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0210 16:46:24.546108  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-22
I0210 16:46:24.546146  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-22
I0210 16:46:24.546291  123305 factory.go:647] Unable to schedule preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-22: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0210 16:46:24.546341  123305 factory.go:742] Updating pod condition for preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-22 to (PodScheduled==False, Reason=Unschedulable)
I0210 16:46:24.546770  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-20: (2.016256ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34592]
I0210 16:46:24.546913  123305 wrap.go:47] PATCH /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events/ppod-27.15820e849ec32409: (2.370258ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34588]
I0210 16:46:24.547934  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-22: (1.285908ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34594]
I0210 16:46:24.549143  123305 wrap.go:47] PUT /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-22/status: (2.119589ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34602]
I0210 16:46:24.549252  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-21: (1.691216ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34592]
I0210 16:46:24.550936  123305 wrap.go:47] PATCH /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events/ppod-22.15820e848c721ffe: (3.00685ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34588]
I0210 16:46:24.551021  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-22: (1.36029ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34602]
I0210 16:46:24.551179  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-22: (1.467627ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34594]
I0210 16:46:24.551328  123305 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0210 16:46:24.551477  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-25
I0210 16:46:24.551514  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-25
I0210 16:46:24.551607  123305 factory.go:647] Unable to schedule preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-25: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0210 16:46:24.551641  123305 factory.go:742] Updating pod condition for preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-25 to (PodScheduled==False, Reason=Unschedulable)
I0210 16:46:24.552699  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-23: (1.012172ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34602]
I0210 16:46:24.553667  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events: (1.510182ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34606]
I0210 16:46:24.554440  123305 wrap.go:47] PUT /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-25/status: (2.580307ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34588]
I0210 16:46:24.555138  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-24: (1.119486ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34602]
I0210 16:46:24.555232  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-25: (3.061614ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34604]
I0210 16:46:24.555941  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-25: (1.039268ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34588]
I0210 16:46:24.556192  123305 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0210 16:46:24.556357  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-24
I0210 16:46:24.556374  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-24
I0210 16:46:24.556516  123305 factory.go:647] Unable to schedule preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-24: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0210 16:46:24.556562  123305 factory.go:742] Updating pod condition for preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-24 to (PodScheduled==False, Reason=Unschedulable)
I0210 16:46:24.556753  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-25: (1.14402ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34602]
I0210 16:46:24.558009  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-24: (1.027601ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34606]
I0210 16:46:24.558680  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events: (1.304504ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34608]
I0210 16:46:24.558763  123305 wrap.go:47] PUT /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-24/status: (1.985889ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34588]
I0210 16:46:24.558697  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-26: (1.416239ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34602]
I0210 16:46:24.560714  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-24: (1.491718ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34602]
I0210 16:46:24.560799  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-27: (1.202945ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34606]
I0210 16:46:24.561029  123305 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0210 16:46:24.561222  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-25
I0210 16:46:24.561277  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-25
I0210 16:46:24.561446  123305 factory.go:647] Unable to schedule preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-25: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0210 16:46:24.561525  123305 factory.go:742] Updating pod condition for preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-25 to (PodScheduled==False, Reason=Unschedulable)
I0210 16:46:24.562569  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-28: (1.407827ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34602]
I0210 16:46:24.564273  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-25: (2.343376ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34608]
I0210 16:46:24.564895  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-29: (1.81351ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34602]
I0210 16:46:24.564932  123305 wrap.go:47] PUT /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-25/status: (2.362484ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34612]
I0210 16:46:24.565981  123305 wrap.go:47] PATCH /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events/ppod-25.15820e849ffef9f5: (3.672112ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34610]
I0210 16:46:24.566266  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-30: (974.153µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34602]
I0210 16:46:24.566521  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-25: (1.117917ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34608]
I0210 16:46:24.567877  123305 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0210 16:46:24.568039  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-24
I0210 16:46:24.568059  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-24
I0210 16:46:24.568138  123305 factory.go:647] Unable to schedule preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-24: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0210 16:46:24.568235  123305 factory.go:742] Updating pod condition for preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-24 to (PodScheduled==False, Reason=Unschedulable)
I0210 16:46:24.569449  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-31: (2.189475ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34602]
I0210 16:46:24.570642  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-24: (1.628363ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34610]
I0210 16:46:24.570854  123305 wrap.go:47] PUT /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-24/status: (1.798429ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34608]
I0210 16:46:24.571710  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-32: (1.720926ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34602]
I0210 16:46:24.571910  123305 wrap.go:47] PATCH /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events/ppod-24.15820e84a04a0ca5: (2.832816ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34614]
I0210 16:46:24.572687  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-24: (1.074213ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34608]
I0210 16:46:24.573049  123305 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0210 16:46:24.573202  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-23
I0210 16:46:24.573218  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-23
I0210 16:46:24.573304  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-33: (1.09922ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34602]
I0210 16:46:24.573302  123305 factory.go:647] Unable to schedule preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-23: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0210 16:46:24.573348  123305 factory.go:742] Updating pod condition for preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-23 to (PodScheduled==False, Reason=Unschedulable)
I0210 16:46:24.574681  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-23: (1.112153ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34614]
I0210 16:46:24.575182  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-34: (1.16161ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34616]
I0210 16:46:24.575905  123305 wrap.go:47] PUT /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-23/status: (2.28189ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34610]
I0210 16:46:24.576345  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events: (1.382379ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34618]
I0210 16:46:24.576730  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-35: (1.033162ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34616]
I0210 16:46:24.577640  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-23: (1.155591ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34610]
I0210 16:46:24.577842  123305 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0210 16:46:24.578007  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-21
I0210 16:46:24.578020  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-21
I0210 16:46:24.578143  123305 factory.go:647] Unable to schedule preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-21: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0210 16:46:24.578225  123305 factory.go:742] Updating pod condition for preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-21 to (PodScheduled==False, Reason=Unschedulable)
I0210 16:46:24.579896  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-21: (1.510812ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34610]
I0210 16:46:24.580842  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events: (1.623312ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34620]
I0210 16:46:24.581017  123305 wrap.go:47] PUT /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-21/status: (1.958339ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34614]
I0210 16:46:24.581361  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-36: (3.630007ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34618]
I0210 16:46:24.583064  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-37: (1.134136ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34620]
I0210 16:46:24.583113  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-21: (1.076909ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34610]
I0210 16:46:24.583341  123305 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0210 16:46:24.583523  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-23
I0210 16:46:24.583539  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-23
I0210 16:46:24.583602  123305 factory.go:647] Unable to schedule preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-23: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0210 16:46:24.583636  123305 factory.go:742] Updating pod condition for preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-23 to (PodScheduled==False, Reason=Unschedulable)
I0210 16:46:24.584652  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-38: (1.180339ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34610]
I0210 16:46:24.586218  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-23: (2.08668ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34622]
I0210 16:46:24.586304  123305 wrap.go:47] PUT /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-23/status: (2.427024ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34620]
I0210 16:46:24.587327  123305 wrap.go:47] PATCH /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events/ppod-23.15820e84a14a2d7c: (2.954211ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34624]
I0210 16:46:24.587547  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-39: (2.001436ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34610]
I0210 16:46:24.588696  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-23: (1.621524ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34620]
I0210 16:46:24.589021  123305 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0210 16:46:24.589182  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-21
I0210 16:46:24.589201  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-21
I0210 16:46:24.589330  123305 factory.go:647] Unable to schedule preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-21: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0210 16:46:24.589378  123305 factory.go:742] Updating pod condition for preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-21 to (PodScheduled==False, Reason=Unschedulable)
I0210 16:46:24.589437  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-40: (1.393616ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34624]
I0210 16:46:24.591402  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-41: (1.136821ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34622]
I0210 16:46:24.592239  123305 wrap.go:47] PUT /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-21/status: (2.667046ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34620]
I0210 16:46:24.592532  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-21: (2.239163ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34624]
I0210 16:46:24.593323  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-42: (1.516865ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34622]
I0210 16:46:24.593877  123305 wrap.go:47] PATCH /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events/ppod-21.15820e84a1944349: (3.108157ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34626]
I0210 16:46:24.595325  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-21: (1.828142ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34624]
I0210 16:46:24.595740  123305 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0210 16:46:24.595783  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-43: (1.082315ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34626]
I0210 16:46:24.595972  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-13
I0210 16:46:24.595991  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-13
I0210 16:46:24.596079  123305 factory.go:647] Unable to schedule preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-13: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0210 16:46:24.596158  123305 factory.go:742] Updating pod condition for preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-13 to (PodScheduled==False, Reason=Unschedulable)
I0210 16:46:24.597707  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-13: (1.259387ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34624]
I0210 16:46:24.598031  123305 backoff_utils.go:79] Backing off 2s
I0210 16:46:24.598320  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-44: (1.311299ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34628]
I0210 16:46:24.598908  123305 wrap.go:47] PUT /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-13/status: (2.460415ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34620]
I0210 16:46:24.599447  123305 wrap.go:47] PATCH /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events/ppod-13.15820e848b6853d1: (2.27754ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34630]
I0210 16:46:24.599804  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-45: (1.145748ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34628]
I0210 16:46:24.600698  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-13: (1.325999ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34620]
I0210 16:46:24.600951  123305 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0210 16:46:24.601536  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-46: (1.22686ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34630]
I0210 16:46:24.601683  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-19
I0210 16:46:24.601701  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-19
I0210 16:46:24.601807  123305 factory.go:647] Unable to schedule preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-19: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0210 16:46:24.601852  123305 factory.go:742] Updating pod condition for preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-19 to (PodScheduled==False, Reason=Unschedulable)
I0210 16:46:24.603686  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-19: (1.321841ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34632]
I0210 16:46:24.603801  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-47: (1.693156ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34624]
I0210 16:46:24.604206  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events: (1.563171ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34634]
I0210 16:46:24.605392  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-48: (1.243263ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34624]
I0210 16:46:24.605757  123305 wrap.go:47] PUT /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-19/status: (3.668348ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34620]
I0210 16:46:24.608123  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-19: (1.905074ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34620]
I0210 16:46:24.608138  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-49: (2.259981ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34634]
I0210 16:46:24.608992  123305 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0210 16:46:24.609063  123305 preemption_test.go:598] Cleaning up all pods...
I0210 16:46:24.609280  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-14
I0210 16:46:24.609301  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-14
I0210 16:46:24.609436  123305 factory.go:647] Unable to schedule preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-14: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0210 16:46:24.609519  123305 factory.go:742] Updating pod condition for preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-14 to (PodScheduled==False, Reason=Unschedulable)
I0210 16:46:24.612448  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-14: (2.119183ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34636]
I0210 16:46:24.614419  123305 wrap.go:47] PUT /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-14/status: (4.672312ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34632]
I0210 16:46:24.614864  123305 wrap.go:47] DELETE /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-0: (5.60106ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34620]
I0210 16:46:24.615120  123305 wrap.go:47] PATCH /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events/ppod-14.15820e848aee68e9: (4.750378ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34638]
I0210 16:46:24.616615  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-14: (1.068615ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34632]
I0210 16:46:24.616894  123305 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0210 16:46:24.617037  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-19
I0210 16:46:24.617054  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-19
I0210 16:46:24.617818  123305 factory.go:647] Unable to schedule preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-19: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0210 16:46:24.617908  123305 factory.go:742] Updating pod condition for preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-19 to (PodScheduled==False, Reason=Unschedulable)
I0210 16:46:24.619515  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-19: (1.290343ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34632]
I0210 16:46:24.620056  123305 wrap.go:47] DELETE /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-1: (4.820054ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34620]
I0210 16:46:24.620848  123305 wrap.go:47] PUT /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-19/status: (2.622254ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34636]
I0210 16:46:24.622339  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-19: (1.089836ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34636]
I0210 16:46:24.623318  123305 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0210 16:46:24.623566  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-17
I0210 16:46:24.623599  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-17
I0210 16:46:24.623707  123305 factory.go:647] Unable to schedule preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-17: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0210 16:46:24.623753  123305 factory.go:742] Updating pod condition for preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-17 to (PodScheduled==False, Reason=Unschedulable)
I0210 16:46:24.624519  123305 wrap.go:47] PATCH /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events/ppod-19.15820e84a2fd1ac5: (3.479603ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34640]
I0210 16:46:24.625209  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-17: (1.167526ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34632]
I0210 16:46:24.625922  123305 wrap.go:47] PUT /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-17/status: (1.833925ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34636]
I0210 16:46:24.626139  123305 wrap.go:47] DELETE /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-2: (5.681388ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34620]
I0210 16:46:24.626649  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events: (1.608673ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34640]
I0210 16:46:24.628852  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-17: (2.228612ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34636]
I0210 16:46:24.629191  123305 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0210 16:46:24.629360  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-15
I0210 16:46:24.629376  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-15
I0210 16:46:24.629480  123305 factory.go:647] Unable to schedule preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-15: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0210 16:46:24.629573  123305 factory.go:742] Updating pod condition for preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-15 to (PodScheduled==False, Reason=Unschedulable)
I0210 16:46:24.630858  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-15: (1.078915ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34636]
I0210 16:46:24.631445  123305 wrap.go:47] DELETE /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-3: (4.689599ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34632]
I0210 16:46:24.631663  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events: (1.489375ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34642]
I0210 16:46:24.632354  123305 wrap.go:47] PUT /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-15/status: (2.574425ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34640]
I0210 16:46:24.634015  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-15: (1.095262ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34642]
I0210 16:46:24.634235  123305 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0210 16:46:24.634379  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-12
I0210 16:46:24.634390  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-12
I0210 16:46:24.634515  123305 factory.go:647] Unable to schedule preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-12: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0210 16:46:24.634584  123305 factory.go:742] Updating pod condition for preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-12 to (PodScheduled==False, Reason=Unschedulable)
I0210 16:46:24.636925  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events: (1.948836ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34636]
I0210 16:46:24.637630  123305 wrap.go:47] DELETE /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-4: (5.201292ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34632]
I0210 16:46:24.637995  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-12: (2.604808ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34644]
I0210 16:46:24.638482  123305 wrap.go:47] PUT /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-12/status: (3.639982ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34642]
I0210 16:46:24.640290  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-12: (1.14155ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34642]
I0210 16:46:24.640547  123305 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0210 16:46:24.640705  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-11
I0210 16:46:24.640719  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-11
I0210 16:46:24.640813  123305 factory.go:647] Unable to schedule preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-11: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0210 16:46:24.640869  123305 factory.go:742] Updating pod condition for preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-11 to (PodScheduled==False, Reason=Unschedulable)
I0210 16:46:24.642916  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-11: (1.764831ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34642]
I0210 16:46:24.643620  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events: (2.140732ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34646]
I0210 16:46:24.643756  123305 wrap.go:47] DELETE /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-5: (4.602633ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34632]
I0210 16:46:24.644190  123305 wrap.go:47] PUT /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-11/status: (3.052718ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34636]
I0210 16:46:24.646764  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-11: (1.080456ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34636]
I0210 16:46:24.647120  123305 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0210 16:46:24.647324  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-12
I0210 16:46:24.647347  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-12
I0210 16:46:24.648088  123305 factory.go:647] Unable to schedule preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-12: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0210 16:46:24.648201  123305 factory.go:742] Updating pod condition for preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-12 to (PodScheduled==False, Reason=Unschedulable)
I0210 16:46:24.650263  123305 wrap.go:47] PUT /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-12/status: (1.693185ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34642]
I0210 16:46:24.650719  123305 wrap.go:47] DELETE /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-6: (6.576801ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34632]
I0210 16:46:24.651116  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-12: (2.657853ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34636]
I0210 16:46:24.652056  123305 wrap.go:47] PATCH /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events/ppod-12.15820e84a4f0413e: (2.768844ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34648]
I0210 16:46:24.652584  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-12: (1.843171ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34642]
I0210 16:46:24.653248  123305 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0210 16:46:24.653414  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-9
I0210 16:46:24.653661  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-9
I0210 16:46:24.653780  123305 factory.go:647] Unable to schedule preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-9: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0210 16:46:24.653831  123305 factory.go:742] Updating pod condition for preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-9 to (PodScheduled==False, Reason=Unschedulable)
I0210 16:46:24.655216  123305 wrap.go:47] DELETE /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-7: (4.127307ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34632]
I0210 16:46:24.655360  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-9: (1.333614ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34648]
I0210 16:46:24.656297  123305 wrap.go:47] PUT /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-9/status: (2.166423ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34636]
I0210 16:46:24.656302  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events: (1.63429ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34650]
I0210 16:46:24.658399  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-9: (1.642658ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34636]
I0210 16:46:24.658657  123305 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0210 16:46:24.659095  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-8
I0210 16:46:24.659194  123305 scheduler.go:449] Skip schedule deleting pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-8
I0210 16:46:24.659334  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-9
I0210 16:46:24.659349  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-9
I0210 16:46:24.659471  123305 factory.go:647] Unable to schedule preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-9: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0210 16:46:24.659573  123305 factory.go:742] Updating pod condition for preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-9 to (PodScheduled==False, Reason=Unschedulable)
I0210 16:46:24.660774  123305 wrap.go:47] DELETE /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-8: (5.146611ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34632]
I0210 16:46:24.661422  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-9: (1.662607ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34636]
I0210 16:46:24.662861  123305 wrap.go:47] PUT /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-9/status: (2.819665ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34648]
I0210 16:46:24.664878  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events: (3.847387ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34632]
I0210 16:46:24.665775  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-9: (1.650912ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34648]
I0210 16:46:24.666683  123305 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0210 16:46:24.666865  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-15
I0210 16:46:24.666905  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-15
I0210 16:46:24.667032  123305 factory.go:647] Unable to schedule preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-15: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0210 16:46:24.667177  123305 wrap.go:47] DELETE /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-9: (5.868405ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34654]
I0210 16:46:24.667249  123305 factory.go:742] Updating pod condition for preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-15 to (PodScheduled==False, Reason=Unschedulable)
I0210 16:46:24.673343  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-15: (3.937433ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34648]
I0210 16:46:24.673789  123305 wrap.go:47] PATCH /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events/ppod-9.15820e84a616459b: (6.918923ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34632]
I0210 16:46:24.673917  123305 wrap.go:47] PUT /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-15/status: (5.999744ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34656]
I0210 16:46:24.680184  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-15: (5.223608ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34656]
I0210 16:46:24.686936  123305 wrap.go:47] PATCH /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events/ppod-15.15820e84a4a3fed4: (7.586989ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34648]
I0210 16:46:24.694559  123305 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0210 16:46:24.695056  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-10
I0210 16:46:24.695102  123305 scheduler.go:449] Skip schedule deleting pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-10
I0210 16:46:24.695203  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-11
I0210 16:46:24.695210  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-11
I0210 16:46:24.695540  123305 factory.go:647] Unable to schedule preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-11: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0210 16:46:24.695595  123305 factory.go:742] Updating pod condition for preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-11 to (PodScheduled==False, Reason=Unschedulable)
I0210 16:46:24.698802  123305 wrap.go:47] DELETE /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-10: (31.244693ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34636]
I0210 16:46:24.702527  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events: (4.401326ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34658]
I0210 16:46:24.704776  123305 wrap.go:47] PUT /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-11/status: (8.786221ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34656]
I0210 16:46:24.704805  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-11: (5.640136ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34652]
I0210 16:46:24.706056  123305 backoff_utils.go:79] Backing off 2s
I0210 16:46:24.709083  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-11: (2.5796ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34652]
I0210 16:46:24.709617  123305 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0210 16:46:24.709794  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-17
I0210 16:46:24.709809  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-17
I0210 16:46:24.709951  123305 factory.go:647] Unable to schedule preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-17: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0210 16:46:24.710021  123305 factory.go:742] Updating pod condition for preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-17 to (PodScheduled==False, Reason=Unschedulable)
I0210 16:46:24.713517  123305 wrap.go:47] PATCH /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events/ppod-11.15820e84a55053e4: (9.090444ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34658]
I0210 16:46:24.716139  123305 wrap.go:47] PUT /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-17/status: (5.702218ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34652]
I0210 16:46:24.716753  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-17: (6.167863ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34656]
I0210 16:46:24.717710  123305 wrap.go:47] DELETE /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-11: (15.108467ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34636]
I0210 16:46:24.718123  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-17: (1.456063ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34652]
I0210 16:46:24.718482  123305 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0210 16:46:24.718587  123305 wrap.go:47] PATCH /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events/ppod-17.15820e84a44b4e28: (4.19883ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34658]
I0210 16:46:24.718767  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-16
I0210 16:46:24.718785  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-16
I0210 16:46:24.718923  123305 factory.go:647] Unable to schedule preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-16: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0210 16:46:24.718978  123305 factory.go:742] Updating pod condition for preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-16 to (PodScheduled==False, Reason=Unschedulable)
I0210 16:46:24.721940  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events: (2.094925ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34660]
I0210 16:46:24.725858  123305 wrap.go:47] DELETE /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-12: (7.731672ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34636]
I0210 16:46:24.728611  123305 wrap.go:47] PUT /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-16/status: (8.748133ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34656]
I0210 16:46:24.750752  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-16: (31.520259ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34652]
I0210 16:46:24.752804  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-16: (23.037356ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34656]
I0210 16:46:24.753300  123305 wrap.go:47] DELETE /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-13: (25.933037ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34636]
I0210 16:46:24.754930  123305 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0210 16:46:24.756308  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-18
I0210 16:46:24.756380  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-18
I0210 16:46:24.756660  123305 factory.go:647] Unable to schedule preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-18: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0210 16:46:24.756809  123305 factory.go:742] Updating pod condition for preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-18 to (PodScheduled==False, Reason=Unschedulable)
I0210 16:46:24.759668  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-18: (2.061938ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34660]
I0210 16:46:24.760668  123305 wrap.go:47] DELETE /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-14: (6.844604ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34652]
I0210 16:46:24.760788  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events: (2.46316ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34662]
I0210 16:46:24.760321  123305 wrap.go:47] PUT /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-18/status: (3.086046ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34656]
I0210 16:46:24.766446  123305 wrap.go:47] DELETE /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-15: (5.275431ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34662]
I0210 16:46:24.767421  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-18: (5.93906ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34656]
I0210 16:46:24.767744  123305 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0210 16:46:24.775439  123305 wrap.go:47] DELETE /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-16: (8.544549ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34660]
I0210 16:46:24.778811  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-18
I0210 16:46:24.778837  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-18
I0210 16:46:24.778965  123305 factory.go:647] Unable to schedule preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-18: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0210 16:46:24.779017  123305 factory.go:742] Updating pod condition for preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-18 to (PodScheduled==False, Reason=Unschedulable)
I0210 16:46:24.784856  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-18: (1.397752ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34666]
I0210 16:46:24.784973  123305 wrap.go:47] PUT /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-18/status: (4.010636ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34652]
I0210 16:46:24.786729  123305 wrap.go:47] PATCH /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events/ppod-18.15820e84ac39537e: (2.867141ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34664]
I0210 16:46:24.788293  123305 wrap.go:47] DELETE /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-17: (12.063209ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34656]
I0210 16:46:24.792259  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-18: (5.829354ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34666]
I0210 16:46:24.792590  123305 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0210 16:46:24.792987  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-18
I0210 16:46:24.793015  123305 scheduler.go:449] Skip schedule deleting pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-18
I0210 16:46:24.795522  123305 wrap.go:47] DELETE /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-18: (6.766522ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34664]
I0210 16:46:24.795624  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events: (2.172091ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34666]
I0210 16:46:24.804445  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-19
I0210 16:46:24.804515  123305 scheduler.go:449] Skip schedule deleting pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-19
I0210 16:46:24.815651  123305 wrap.go:47] DELETE /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-19: (19.789739ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34664]
I0210 16:46:24.817144  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events: (4.188498ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34666]
I0210 16:46:24.819416  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-20
I0210 16:46:24.819482  123305 scheduler.go:449] Skip schedule deleting pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-20
I0210 16:46:24.820714  123305 wrap.go:47] DELETE /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-20: (4.55378ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34664]
I0210 16:46:24.822351  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events: (1.595667ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34666]
I0210 16:46:24.824620  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-21
I0210 16:46:24.824657  123305 scheduler.go:449] Skip schedule deleting pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-21
I0210 16:46:24.826749  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events: (1.830209ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34666]
I0210 16:46:24.826835  123305 wrap.go:47] DELETE /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-21: (5.583678ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34664]
I0210 16:46:24.830018  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-22
I0210 16:46:24.830060  123305 scheduler.go:449] Skip schedule deleting pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-22
I0210 16:46:24.830958  123305 wrap.go:47] DELETE /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-22: (3.799988ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34666]
I0210 16:46:24.832033  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events: (1.66143ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34652]
I0210 16:46:24.833940  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-23
I0210 16:46:24.833983  123305 scheduler.go:449] Skip schedule deleting pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-23
I0210 16:46:24.835434  123305 wrap.go:47] DELETE /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-23: (4.060187ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34666]
I0210 16:46:24.835793  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events: (1.46454ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34652]
I0210 16:46:24.838704  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-24
I0210 16:46:24.838812  123305 scheduler.go:449] Skip schedule deleting pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-24
I0210 16:46:24.840016  123305 wrap.go:47] DELETE /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-24: (3.901999ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34652]
I0210 16:46:24.841210  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events: (2.015481ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34666]
I0210 16:46:24.845115  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-25
I0210 16:46:24.845156  123305 scheduler.go:449] Skip schedule deleting pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-25
I0210 16:46:24.846075  123305 wrap.go:47] DELETE /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-25: (5.693497ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34652]
I0210 16:46:24.848290  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events: (2.739581ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34666]
I0210 16:46:24.849413  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-26
I0210 16:46:24.849520  123305 scheduler.go:449] Skip schedule deleting pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-26
I0210 16:46:24.850923  123305 wrap.go:47] DELETE /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-26: (4.516484ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34652]
I0210 16:46:24.851358  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events: (1.363658ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34666]
I0210 16:46:24.853862  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-27
I0210 16:46:24.853901  123305 scheduler.go:449] Skip schedule deleting pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-27
I0210 16:46:24.855246  123305 wrap.go:47] DELETE /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-27: (4.032779ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34652]
I0210 16:46:24.855788  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events: (1.512223ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34666]
I0210 16:46:24.858071  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-28
I0210 16:46:24.858117  123305 scheduler.go:449] Skip schedule deleting pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-28
I0210 16:46:24.859423  123305 wrap.go:47] DELETE /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-28: (3.716614ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34652]
I0210 16:46:24.859929  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events: (1.503515ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34666]
I0210 16:46:24.862273  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-29
I0210 16:46:24.862318  123305 scheduler.go:449] Skip schedule deleting pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-29
I0210 16:46:24.863804  123305 wrap.go:47] DELETE /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-29: (4.037234ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34652]
I0210 16:46:24.863983  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events: (1.385753ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34666]
I0210 16:46:24.866511  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-30
I0210 16:46:24.866547  123305 scheduler.go:449] Skip schedule deleting pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-30
I0210 16:46:24.867736  123305 wrap.go:47] DELETE /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-30: (3.606276ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34652]
I0210 16:46:24.868283  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events: (1.492438ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34666]
I0210 16:46:24.870451  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-31
I0210 16:46:24.870526  123305 scheduler.go:449] Skip schedule deleting pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-31
I0210 16:46:24.871838  123305 wrap.go:47] DELETE /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-31: (3.703745ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34652]
I0210 16:46:24.872117  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events: (1.353303ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34666]
I0210 16:46:24.874419  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-32
I0210 16:46:24.874462  123305 scheduler.go:449] Skip schedule deleting pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-32
I0210 16:46:24.875752  123305 wrap.go:47] DELETE /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-32: (3.538317ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34652]
I0210 16:46:24.876019  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events: (1.315378ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34666]
I0210 16:46:24.878379  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-33
I0210 16:46:24.878422  123305 scheduler.go:449] Skip schedule deleting pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-33
I0210 16:46:24.879628  123305 wrap.go:47] DELETE /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-33: (3.588656ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34652]
I0210 16:46:24.880010  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events: (1.29642ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34666]
I0210 16:46:24.882279  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-34
I0210 16:46:24.882325  123305 scheduler.go:449] Skip schedule deleting pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-34
I0210 16:46:24.883563  123305 wrap.go:47] DELETE /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-34: (3.636201ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34652]
I0210 16:46:24.883941  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events: (1.368413ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34666]
I0210 16:46:24.886050  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-35
I0210 16:46:24.886093  123305 scheduler.go:449] Skip schedule deleting pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-35
I0210 16:46:24.887639  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events: (1.310706ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34666]
I0210 16:46:24.887725  123305 wrap.go:47] DELETE /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-35: (3.869621ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34652]
I0210 16:46:24.890390  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-36
I0210 16:46:24.890431  123305 scheduler.go:449] Skip schedule deleting pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-36
I0210 16:46:24.891668  123305 wrap.go:47] DELETE /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-36: (3.629906ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34652]
I0210 16:46:24.891908  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events: (1.229955ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34666]
I0210 16:46:24.894381  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-37
I0210 16:46:24.894420  123305 scheduler.go:449] Skip schedule deleting pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-37
I0210 16:46:24.895910  123305 wrap.go:47] DELETE /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-37: (3.948431ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34652]
I0210 16:46:24.896775  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events: (2.115165ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34666]
I0210 16:46:24.901949  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-38
I0210 16:46:24.902024  123305 scheduler.go:449] Skip schedule deleting pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-38
I0210 16:46:24.904203  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events: (1.776632ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34666]
I0210 16:46:24.904456  123305 wrap.go:47] DELETE /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-38: (7.697264ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34652]
I0210 16:46:24.907892  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-39
I0210 16:46:24.907974  123305 scheduler.go:449] Skip schedule deleting pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-39
I0210 16:46:24.913618  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events: (5.364263ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34666]
I0210 16:46:24.914617  123305 wrap.go:47] DELETE /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-39: (9.409045ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34652]
I0210 16:46:24.919366  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-40
I0210 16:46:24.919398  123305 scheduler.go:449] Skip schedule deleting pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-40
I0210 16:46:24.920788  123305 wrap.go:47] DELETE /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-40: (4.834589ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34652]
I0210 16:46:24.922201  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events: (2.465597ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34666]
I0210 16:46:24.923861  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-41
I0210 16:46:24.923911  123305 scheduler.go:449] Skip schedule deleting pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-41
I0210 16:46:24.925344  123305 wrap.go:47] DELETE /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-41: (3.949632ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34652]
I0210 16:46:24.925633  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events: (1.465729ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34666]
I0210 16:46:24.928159  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-42
I0210 16:46:24.928215  123305 scheduler.go:449] Skip schedule deleting pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-42
I0210 16:46:24.929333  123305 wrap.go:47] DELETE /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-42: (3.675376ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34652]
I0210 16:46:24.930125  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events: (1.663008ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34666]
I0210 16:46:24.931918  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-43
I0210 16:46:24.931950  123305 scheduler.go:449] Skip schedule deleting pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-43
I0210 16:46:24.933603  123305 wrap.go:47] DELETE /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-43: (3.896755ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34652]
I0210 16:46:24.934074  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events: (1.839653ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34666]
I0210 16:46:24.936405  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-44
I0210 16:46:24.936443  123305 scheduler.go:449] Skip schedule deleting pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-44
I0210 16:46:24.937580  123305 wrap.go:47] DELETE /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-44: (3.598652ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34652]
I0210 16:46:24.937937  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events: (1.272128ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34666]
I0210 16:46:24.940245  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-45
I0210 16:46:24.940294  123305 scheduler.go:449] Skip schedule deleting pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-45
I0210 16:46:24.941425  123305 wrap.go:47] DELETE /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-45: (3.52822ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34652]
I0210 16:46:24.942319  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events: (1.298617ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34666]
I0210 16:46:24.944603  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-46
I0210 16:46:24.944927  123305 scheduler.go:449] Skip schedule deleting pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-46
I0210 16:46:24.945921  123305 wrap.go:47] DELETE /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-46: (4.083404ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34652]
I0210 16:46:24.946680  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events: (1.674959ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34666]
I0210 16:46:24.948905  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-47
I0210 16:46:24.948963  123305 scheduler.go:449] Skip schedule deleting pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-47
I0210 16:46:24.950283  123305 wrap.go:47] DELETE /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-47: (3.634989ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34652]
I0210 16:46:24.950531  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events: (1.320003ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34666]
I0210 16:46:24.953016  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-48
I0210 16:46:24.953107  123305 scheduler.go:449] Skip schedule deleting pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-48
I0210 16:46:24.954407  123305 wrap.go:47] DELETE /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-48: (3.837693ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34652]
I0210 16:46:24.954847  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events: (1.454982ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34666]
I0210 16:46:24.956994  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-49
I0210 16:46:24.957045  123305 scheduler.go:449] Skip schedule deleting pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-49
I0210 16:46:24.958516  123305 wrap.go:47] DELETE /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-49: (3.781591ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34652]
I0210 16:46:24.958574  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events: (1.207784ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34666]
I0210 16:46:24.962419  123305 wrap.go:47] DELETE /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/rpod-0: (3.639441ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34652]
I0210 16:46:24.963717  123305 wrap.go:47] DELETE /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/rpod-1: (948.751µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34652]
I0210 16:46:24.967755  123305 wrap.go:47] DELETE /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/preemptor-pod: (3.668136ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34652]
I0210 16:46:24.970124  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-0: (872.154µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34652]
I0210 16:46:24.972561  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-1: (907.178µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34652]
I0210 16:46:24.974973  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-2: (895.268µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34652]
I0210 16:46:24.977474  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-3: (912.957µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34652]
I0210 16:46:24.979930  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-4: (909.573µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34652]
I0210 16:46:24.982414  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-5: (952.274µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34652]
I0210 16:46:24.984924  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-6: (869.344µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34652]
I0210 16:46:24.987270  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-7: (841.011µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34652]
I0210 16:46:24.989576  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-8: (830.03µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34652]
I0210 16:46:24.991785  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-9: (765.878µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34652]
I0210 16:46:24.994284  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-10: (850.166µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34652]
I0210 16:46:24.996788  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-11: (955.01µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34652]
I0210 16:46:24.999132  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-12: (835.615µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34652]
I0210 16:46:25.001586  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-13: (900.316µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34652]
I0210 16:46:25.003917  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-14: (787.365µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34652]
I0210 16:46:25.006245  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-15: (827.118µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34652]
I0210 16:46:25.008578  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-16: (834.05µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34652]
I0210 16:46:25.010910  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-17: (802.349µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34652]
I0210 16:46:25.013409  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-18: (979.704µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34652]
I0210 16:46:25.015867  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-19: (858.585µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34652]
I0210 16:46:25.018209  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-20: (817.877µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34652]
I0210 16:46:25.020681  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-21: (886.015µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34652]
I0210 16:46:25.023024  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-22: (821.008µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34652]
I0210 16:46:25.025339  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-23: (758.413µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34652]
I0210 16:46:25.027798  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-24: (895.067µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34652]
I0210 16:46:25.030007  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-25: (737.557µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34652]
I0210 16:46:25.032311  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-26: (791.884µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34652]
I0210 16:46:25.034691  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-27: (870.517µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34652]
I0210 16:46:25.037070  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-28: (866.221µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34652]
I0210 16:46:25.039412  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-29: (824.941µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34652]
I0210 16:46:25.042018  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-30: (855.619µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34652]
I0210 16:46:25.044356  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-31: (737.991µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34652]
I0210 16:46:25.046635  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-32: (795.124µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34652]
I0210 16:46:25.049091  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-33: (870.062µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34652]
I0210 16:46:25.051429  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-34: (787.525µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34652]
I0210 16:46:25.053838  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-35: (862.284µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34652]
I0210 16:46:25.056112  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-36: (824.6µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34652]
I0210 16:46:25.058563  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-37: (836.772µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34652]
I0210 16:46:25.061015  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-38: (937.886µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34652]
I0210 16:46:25.063756  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-39: (814.036µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34652]
I0210 16:46:25.066131  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-40: (821.246µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34652]
I0210 16:46:25.068680  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-41: (900.251µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34652]
I0210 16:46:25.071026  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-42: (802.402µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34652]
I0210 16:46:25.073244  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-43: (768.095µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34652]
I0210 16:46:25.075670  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-44: (902.497µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34652]
I0210 16:46:25.078068  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-45: (843.461µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34652]
I0210 16:46:25.080395  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-46: (784.677µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34652]
I0210 16:46:25.082809  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-47: (903.865µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34652]
I0210 16:46:25.085215  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-48: (864.42µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34652]
I0210 16:46:25.087510  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-49: (783.073µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34652]
I0210 16:46:25.089859  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/rpod-0: (878.937µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34652]
I0210 16:46:25.092255  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/rpod-1: (932.013µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34652]
I0210 16:46:25.094847  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/preemptor-pod: (969.579µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34652]
I0210 16:46:25.097556  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods: (1.884381ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34652]
I0210 16:46:25.097701  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/rpod-0
I0210 16:46:25.097719  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/rpod-0
I0210 16:46:25.097896  123305 scheduler_binder.go:269] AssumePodVolumes for pod "preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/rpod-0", node "node1"
I0210 16:46:25.097914  123305 scheduler_binder.go:279] AssumePodVolumes for pod "preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/rpod-0", node "node1": all PVCs bound and nothing to do
I0210 16:46:25.097982  123305 factory.go:733] Attempting to bind rpod-0 to node1
I0210 16:46:25.099739  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods: (1.655944ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34652]
I0210 16:46:25.099843  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/rpod-0/binding: (1.557158ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34666]
I0210 16:46:25.099912  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/rpod-1
I0210 16:46:25.099924  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/rpod-1
I0210 16:46:25.100035  123305 scheduler_binder.go:269] AssumePodVolumes for pod "preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/rpod-1", node "node1"
I0210 16:46:25.100045  123305 scheduler_binder.go:279] AssumePodVolumes for pod "preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/rpod-1", node "node1": all PVCs bound and nothing to do
I0210 16:46:25.100065  123305 scheduler.go:571] pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/rpod-0 is bound successfully on node node1, 1 nodes evaluated, 1 nodes were found feasible
I0210 16:46:25.100131  123305 factory.go:733] Attempting to bind rpod-1 to node1
I0210 16:46:25.101777  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events: (1.479111ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34666]
I0210 16:46:25.101861  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/rpod-1/binding: (1.512684ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34652]
I0210 16:46:25.102076  123305 scheduler.go:571] pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/rpod-1 is bound successfully on node node1, 1 nodes evaluated, 1 nodes were found feasible
I0210 16:46:25.103787  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events: (1.422613ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34652]
I0210 16:46:25.202182  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/rpod-0: (1.742238ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34652]
I0210 16:46:25.305273  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/rpod-1: (2.042088ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34652]
I0210 16:46:25.305599  123305 preemption_test.go:561] Creating the preemptor pod...
I0210 16:46:25.308151  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods: (2.328248ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34652]
I0210 16:46:25.308378  123305 preemption_test.go:567] Creating additional pods...
I0210 16:46:25.308712  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/preemptor-pod
I0210 16:46:25.308723  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/preemptor-pod
I0210 16:46:25.311234  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods: (2.267563ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34652]
I0210 16:46:25.313881  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods: (2.098854ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34652]
I0210 16:46:25.316332  123305 factory.go:647] Unable to schedule preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0210 16:46:25.318271  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods: (3.862275ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34652]
I0210 16:46:25.320835  123305 factory.go:742] Updating pod condition for preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0210 16:46:25.321577  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/preemptor-pod: (4.779118ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34666]
I0210 16:46:25.326003  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events: (3.333476ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34666]
I0210 16:46:25.330769  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods: (11.560567ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34652]
I0210 16:46:25.333549  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods: (2.235384ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34666]
I0210 16:46:25.335835  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods: (1.922476ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34666]
I0210 16:46:25.338160  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods: (1.968595ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34666]
I0210 16:46:25.340226  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods: (1.734985ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34666]
I0210 16:46:25.342231  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods: (1.655372ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34666]
I0210 16:46:25.344141  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods: (1.386954ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34666]
I0210 16:46:25.345910  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods: (1.432982ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34666]
I0210 16:46:25.347616  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods: (1.361679ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34666]
I0210 16:46:25.349411  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods: (1.47053ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34666]
I0210 16:46:25.354238  123305 wrap.go:47] PUT /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/preemptor-pod/status: (3.663591ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34694]
I0210 16:46:25.354301  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods: (4.541564ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34666]
I0210 16:46:25.356356  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods: (1.609111ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34694]
I0210 16:46:25.356753  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/preemptor-pod: (2.129053ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34666]
I0210 16:46:25.356990  123305 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0210 16:46:25.360008  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods: (2.449029ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34694]
I0210 16:46:25.361720  123305 cacher.go:633] cacher (*core.Pod): 2 objects queued in incoming channel.
I0210 16:46:25.363060  123305 wrap.go:47] PUT /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/preemptor-pod/status: (4.960303ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34666]
I0210 16:46:25.363713  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods: (2.710813ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34694]
I0210 16:46:25.367184  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods: (2.852809ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34666]
I0210 16:46:25.369460  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods: (1.739708ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34666]
I0210 16:46:25.370907  123305 wrap.go:47] DELETE /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/rpod-1: (6.692254ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34694]
I0210 16:46:25.371685  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/preemptor-pod
I0210 16:46:25.371707  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/preemptor-pod
I0210 16:46:25.371947  123305 scheduler_binder.go:269] AssumePodVolumes for pod "preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/preemptor-pod", node "node1"
I0210 16:46:25.371995  123305 scheduler_binder.go:279] AssumePodVolumes for pod "preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/preemptor-pod", node "node1": all PVCs bound and nothing to do
I0210 16:46:25.372120  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-18
I0210 16:46:25.372138  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-18
I0210 16:46:25.372214  123305 factory.go:733] Attempting to bind preemptor-pod to node1
I0210 16:46:25.372258  123305 factory.go:647] Unable to schedule preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-18: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0210 16:46:25.372351  123305 factory.go:742] Updating pod condition for preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-18 to (PodScheduled==False, Reason=Unschedulable)
I0210 16:46:25.373456  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events: (1.760098ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34694]
I0210 16:46:25.373789  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods: (2.321067ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34666]
I0210 16:46:25.375457  123305 wrap.go:47] PUT /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-18/status: (2.545691ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34698]
I0210 16:46:25.376953  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods: (2.470689ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34702]
I0210 16:46:25.377003  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-18: (997.39µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34698]
I0210 16:46:25.377255  123305 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0210 16:46:25.377558  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-17
I0210 16:46:25.377573  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-17
I0210 16:46:25.377661  123305 factory.go:647] Unable to schedule preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-17: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0210 16:46:25.377706  123305 factory.go:742] Updating pod condition for preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-17 to (PodScheduled==False, Reason=Unschedulable)
I0210 16:46:25.378741  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/preemptor-pod/binding: (5.742536ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34696]
I0210 16:46:25.379635  123305 scheduler.go:571] pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/preemptor-pod is bound successfully on node node1, 1 nodes evaluated, 1 nodes were found feasible
I0210 16:46:25.380190  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-17: (2.291292ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34700]
I0210 16:46:25.380373  123305 wrap.go:47] PUT /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-17/status: (2.489859ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34698]
I0210 16:46:25.382433  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events: (8.399911ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34694]
I0210 16:46:25.382785  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-17: (1.790149ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34700]
I0210 16:46:25.383133  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods: (3.737056ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34704]
I0210 16:46:25.383579  123305 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0210 16:46:25.384302  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-19
I0210 16:46:25.384317  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-19
I0210 16:46:25.384401  123305 factory.go:647] Unable to schedule preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-19: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0210 16:46:25.384437  123305 factory.go:742] Updating pod condition for preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-19 to (PodScheduled==False, Reason=Unschedulable)
I0210 16:46:25.386028  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods: (2.456206ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34700]
I0210 16:46:25.386622  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events: (2.972307ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34694]
I0210 16:46:25.387390  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-19: (2.07057ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34706]
I0210 16:46:25.387953  123305 wrap.go:47] PUT /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-19/status: (2.35819ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34696]
I0210 16:46:25.388082  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods: (1.507586ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34700]
I0210 16:46:25.388351  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events: (1.38212ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34694]
I0210 16:46:25.389511  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-18: (15.466473ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34666]
I0210 16:46:25.389771  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-19: (1.479537ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34696]
I0210 16:46:25.390064  123305 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0210 16:46:25.390106  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods: (1.620223ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34706]
I0210 16:46:25.390157  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events: (1.389171ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34694]
I0210 16:46:25.390284  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-21
I0210 16:46:25.390318  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-21
I0210 16:46:25.390441  123305 factory.go:647] Unable to schedule preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-21: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0210 16:46:25.390630  123305 factory.go:742] Updating pod condition for preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-21 to (PodScheduled==False, Reason=Unschedulable)
I0210 16:46:25.392787  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events: (1.435981ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34708]
I0210 16:46:25.392998  123305 wrap.go:47] PUT /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-21/status: (1.877307ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34666]
I0210 16:46:25.393304  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-21: (2.416319ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34694]
I0210 16:46:25.393353  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods: (2.76638ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34696]
I0210 16:46:25.395132  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-21: (1.433925ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34666]
I0210 16:46:25.395972  123305 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0210 16:46:25.396389  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-24
I0210 16:46:25.396408  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-24
I0210 16:46:25.396480  123305 factory.go:647] Unable to schedule preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-24: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0210 16:46:25.396547  123305 factory.go:742] Updating pod condition for preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-24 to (PodScheduled==False, Reason=Unschedulable)
I0210 16:46:25.397790  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-24: (1.00085ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34708]
I0210 16:46:25.398660  123305 wrap.go:47] PUT /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-24/status: (1.882732ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34666]
I0210 16:46:25.399035  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events: (1.985334ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34710]
I0210 16:46:25.399531  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods: (5.453187ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34696]
I0210 16:46:25.400562  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-24: (1.094513ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34710]
I0210 16:46:25.400804  123305 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0210 16:46:25.400939  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-21
I0210 16:46:25.400966  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-21
I0210 16:46:25.401390  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods: (1.534359ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34696]
I0210 16:46:25.402280  123305 factory.go:647] Unable to schedule preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-21: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0210 16:46:25.402377  123305 factory.go:742] Updating pod condition for preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-21 to (PodScheduled==False, Reason=Unschedulable)
I0210 16:46:25.403300  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods: (1.53708ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34710]
I0210 16:46:25.404094  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-21: (1.127291ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34712]
I0210 16:46:25.405548  123305 wrap.go:47] PATCH /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events/ppod-21.15820e84d200d032: (2.059587ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34714]
I0210 16:46:25.405594  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods: (1.49194ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34710]
I0210 16:46:25.405874  123305 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0210 16:46:25.405941  123305 wrap.go:47] PUT /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-21/status: (2.707161ms) 409 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34708]
I0210 16:46:25.406967  123305 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0210 16:46:25.407412  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods: (1.357936ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34714]
I0210 16:46:25.407977  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-21: (1.095586ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34708]
I0210 16:46:25.408178  123305 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0210 16:46:25.408279  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-24
I0210 16:46:25.408287  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-24
I0210 16:46:25.408367  123305 factory.go:647] Unable to schedule preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-24: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0210 16:46:25.408399  123305 factory.go:742] Updating pod condition for preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-24 to (PodScheduled==False, Reason=Unschedulable)
I0210 16:46:25.409399  123305 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0210 16:46:25.409513  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods: (1.501973ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34714]
I0210 16:46:25.409987  123305 wrap.go:47] PUT /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-24/status: (1.20216ms) 409 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34708]
I0210 16:46:25.410377  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-24: (1.695523ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34712]
I0210 16:46:25.411337  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-24: (1.010647ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34708]
I0210 16:46:25.411542  123305 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0210 16:46:25.411664  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-23
I0210 16:46:25.411685  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-23
I0210 16:46:25.411763  123305 factory.go:647] Unable to schedule preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-23: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0210 16:46:25.411797  123305 factory.go:742] Updating pod condition for preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-23 to (PodScheduled==False, Reason=Unschedulable)
I0210 16:46:25.412224  123305 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0210 16:46:25.412478  123305 reflector.go:248] k8s.io/client-go/informers/factory.go:132: forcing resync
I0210 16:46:25.412759  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods: (1.932675ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34714]
I0210 16:46:25.413117  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-23: (956.93µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34712]
I0210 16:46:25.413462  123305 wrap.go:47] PATCH /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events/ppod-24.15820e84d25b2b58: (4.373849ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34716]
I0210 16:46:25.415096  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events: (1.236232ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34716]
I0210 16:46:25.415250  123305 wrap.go:47] PUT /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-23/status: (2.888786ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34708]
I0210 16:46:25.415630  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods: (1.293943ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34714]
I0210 16:46:25.416832  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-23: (1.035246ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34716]
I0210 16:46:25.417043  123305 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0210 16:46:25.417182  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-24
I0210 16:46:25.417231  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-24
I0210 16:46:25.417376  123305 factory.go:647] Unable to schedule preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-24: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0210 16:46:25.417395  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods: (1.423097ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34714]
I0210 16:46:25.417418  123305 factory.go:742] Updating pod condition for preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-24 to (PodScheduled==False, Reason=Unschedulable)
I0210 16:46:25.419556  123305 wrap.go:47] PUT /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-24/status: (1.551379ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34718]
I0210 16:46:25.420334  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods: (2.54339ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34712]
I0210 16:46:25.421533  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-24: (1.065613ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34718]
I0210 16:46:25.421547  123305 wrap.go:47] PATCH /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events/ppod-24.15820e84d25b2b58: (3.324382ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34720]
I0210 16:46:25.421794  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-24: (4.149734ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34716]
I0210 16:46:25.421863  123305 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0210 16:46:25.422051  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-23
I0210 16:46:25.422061  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-23
I0210 16:46:25.422137  123305 factory.go:647] Unable to schedule preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-23: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0210 16:46:25.422183  123305 factory.go:742] Updating pod condition for preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-23 to (PodScheduled==False, Reason=Unschedulable)
I0210 16:46:25.425029  123305 wrap.go:47] PATCH /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events/ppod-23.15820e84d343eea6: (2.068531ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34722]
I0210 16:46:25.425227  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-23: (2.345244ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34718]
I0210 16:46:25.425234  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods: (4.481468ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34712]
I0210 16:46:25.425770  123305 wrap.go:47] PUT /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-23/status: (2.867197ms) 409 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34720]
I0210 16:46:25.427849  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-23: (1.743181ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34720]
I0210 16:46:25.427972  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods: (2.350276ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34712]
I0210 16:46:25.428258  123305 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0210 16:46:25.428432  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-19
I0210 16:46:25.428466  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-19
I0210 16:46:25.428698  123305 factory.go:647] Unable to schedule preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-19: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0210 16:46:25.428809  123305 factory.go:742] Updating pod condition for preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-19 to (PodScheduled==False, Reason=Unschedulable)
I0210 16:46:25.430511  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods: (1.915219ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34720]
I0210 16:46:25.430985  123305 wrap.go:47] PUT /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-19/status: (1.555592ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34724]
I0210 16:46:25.431411  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-19: (2.369895ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34722]
I0210 16:46:25.432418  123305 wrap.go:47] PATCH /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events/ppod-19.15820e84d1a26401: (2.440741ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34726]
I0210 16:46:25.432839  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-19: (1.27479ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34724]
I0210 16:46:25.433301  123305 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0210 16:46:25.433309  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods: (1.505147ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34722]
I0210 16:46:25.433430  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-37
I0210 16:46:25.433463  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-37
I0210 16:46:25.433596  123305 factory.go:647] Unable to schedule preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-37: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0210 16:46:25.433658  123305 factory.go:742] Updating pod condition for preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-37 to (PodScheduled==False, Reason=Unschedulable)
I0210 16:46:25.435207  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-37: (927.548µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34728]
I0210 16:46:25.435753  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods: (1.900221ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34726]
I0210 16:46:25.435768  123305 wrap.go:47] PUT /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-37/status: (1.617998ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34720]
I0210 16:46:25.436449  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events: (2.251525ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34730]
I0210 16:46:25.437834  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-37: (1.375223ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34728]
I0210 16:46:25.437888  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods: (1.686213ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34726]
I0210 16:46:25.438152  123305 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0210 16:46:25.438306  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-38
I0210 16:46:25.438322  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-38
I0210 16:46:25.438414  123305 factory.go:647] Unable to schedule preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-38: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0210 16:46:25.438464  123305 factory.go:742] Updating pod condition for preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-38 to (PodScheduled==False, Reason=Unschedulable)
I0210 16:46:25.439889  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods: (1.345263ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34728]
I0210 16:46:25.440314  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events: (1.285459ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34734]
I0210 16:46:25.441015  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-38: (2.369346ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34730]
I0210 16:46:25.442077  123305 wrap.go:47] PUT /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-38/status: (3.204615ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34732]
I0210 16:46:25.442112  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods: (1.779692ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34728]
I0210 16:46:25.443825  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-38: (1.093282ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34734]
I0210 16:46:25.444059  123305 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0210 16:46:25.444195  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-41
I0210 16:46:25.444206  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-41
I0210 16:46:25.444290  123305 factory.go:647] Unable to schedule preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-41: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0210 16:46:25.444333  123305 factory.go:742] Updating pod condition for preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-41 to (PodScheduled==False, Reason=Unschedulable)
I0210 16:46:25.445066  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods: (2.515022ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34730]
I0210 16:46:25.446255  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-41: (1.427089ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34736]
I0210 16:46:25.447017  123305 wrap.go:47] PUT /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-41/status: (2.349373ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34734]
I0210 16:46:25.448321  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events: (1.631189ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34730]
I0210 16:46:25.448529  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods: (1.863551ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34738]
I0210 16:46:25.449226  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-41: (1.300278ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34734]
I0210 16:46:25.449480  123305 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0210 16:46:25.450570  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods: (1.521902ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34730]
I0210 16:46:25.451015  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-43
I0210 16:46:25.451108  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-43
I0210 16:46:25.451228  123305 factory.go:647] Unable to schedule preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-43: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0210 16:46:25.451274  123305 factory.go:742] Updating pod condition for preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-43 to (PodScheduled==False, Reason=Unschedulable)
I0210 16:46:25.453252  123305 wrap.go:47] PUT /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-43/status: (1.691749ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34736]
I0210 16:46:25.453785  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events: (1.5529ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34742]
I0210 16:46:25.454715  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods: (3.717248ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34734]
I0210 16:46:25.455090  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-43: (2.999789ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34740]
I0210 16:46:25.456731  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-43: (1.302575ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34736]
I0210 16:46:25.457005  123305 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0210 16:46:25.457413  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods: (1.982743ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34742]
I0210 16:46:25.457561  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-46
I0210 16:46:25.457582  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-46
I0210 16:46:25.457737  123305 factory.go:647] Unable to schedule preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-46: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0210 16:46:25.457846  123305 factory.go:742] Updating pod condition for preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-46 to (PodScheduled==False, Reason=Unschedulable)
I0210 16:46:25.459336  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-46: (1.261573ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34740]
I0210 16:46:25.459510  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods: (1.637758ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34736]
I0210 16:46:25.459836  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events: (1.451941ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34746]
I0210 16:46:25.461332  123305 wrap.go:47] PUT /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-46/status: (2.914916ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34744]
I0210 16:46:25.462972  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-46: (1.107748ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34736]
I0210 16:46:25.463231  123305 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0210 16:46:25.463456  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-48
I0210 16:46:25.463472  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-48
I0210 16:46:25.463566  123305 factory.go:647] Unable to schedule preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-48: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0210 16:46:25.463611  123305 factory.go:742] Updating pod condition for preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-48 to (PodScheduled==False, Reason=Unschedulable)
I0210 16:46:25.465283  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-48: (996.068µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34740]
I0210 16:46:25.466385  123305 wrap.go:47] PUT /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-48/status: (2.520568ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34736]
I0210 16:46:25.467032  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events: (2.787043ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34748]
I0210 16:46:25.467989  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-48: (1.163111ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34736]
I0210 16:46:25.468316  123305 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0210 16:46:25.468524  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-46
I0210 16:46:25.468541  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-46
I0210 16:46:25.468631  123305 factory.go:647] Unable to schedule preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-46: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0210 16:46:25.468719  123305 factory.go:742] Updating pod condition for preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-46 to (PodScheduled==False, Reason=Unschedulable)
I0210 16:46:25.470372  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-46: (1.352188ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34740]
I0210 16:46:25.472208  123305 wrap.go:47] PUT /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-46/status: (3.126363ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34748]
I0210 16:46:25.472243  123305 wrap.go:47] PATCH /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events/ppod-46.15820e84d6027ded: (2.832636ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34750]
I0210 16:46:25.473777  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-46: (1.027637ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34750]
I0210 16:46:25.474119  123305 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0210 16:46:25.474312  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-48
I0210 16:46:25.474335  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-48
I0210 16:46:25.474433  123305 factory.go:647] Unable to schedule preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-48: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0210 16:46:25.474518  123305 factory.go:742] Updating pod condition for preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-48 to (PodScheduled==False, Reason=Unschedulable)
I0210 16:46:25.475830  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-48: (1.102691ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34750]
I0210 16:46:25.476574  123305 wrap.go:47] PUT /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-48/status: (1.827516ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34740]
I0210 16:46:25.479033  123305 wrap.go:47] PATCH /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events/ppod-48.15820e84d65a83b7: (2.878331ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34750]
I0210 16:46:25.479728  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-48: (2.119441ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34740]
I0210 16:46:25.480111  123305 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0210 16:46:25.480331  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-49
I0210 16:46:25.480352  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-49
I0210 16:46:25.480513  123305 factory.go:647] Unable to schedule preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-49: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0210 16:46:25.480580  123305 factory.go:742] Updating pod condition for preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-49 to (PodScheduled==False, Reason=Unschedulable)
I0210 16:46:25.482813  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events: (1.56288ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34754]
I0210 16:46:25.483004  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-49: (2.142194ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34752]
I0210 16:46:25.484380  123305 wrap.go:47] PUT /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-49/status: (3.336847ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34750]
I0210 16:46:25.486712  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-49: (1.244572ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34752]
I0210 16:46:25.487039  123305 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0210 16:46:25.487288  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-47
I0210 16:46:25.487312  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-47
I0210 16:46:25.487555  123305 factory.go:647] Unable to schedule preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-47: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0210 16:46:25.487648  123305 factory.go:742] Updating pod condition for preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-47 to (PodScheduled==False, Reason=Unschedulable)
I0210 16:46:25.491551  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events: (3.29293ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34754]
I0210 16:46:25.491647  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-47: (3.545537ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34752]
I0210 16:46:25.493459  123305 wrap.go:47] PUT /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-47/status: (1.880857ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34756]
I0210 16:46:25.496226  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-47: (1.782055ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34754]
I0210 16:46:25.496533  123305 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0210 16:46:25.496678  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-49
I0210 16:46:25.496696  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-49
I0210 16:46:25.496767  123305 factory.go:647] Unable to schedule preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-49: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0210 16:46:25.496814  123305 factory.go:742] Updating pod condition for preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-49 to (PodScheduled==False, Reason=Unschedulable)
I0210 16:46:25.500482  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-49: (2.631312ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34752]
I0210 16:46:25.500901  123305 wrap.go:47] PUT /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-49/status: (3.757525ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34754]
I0210 16:46:25.502399  123305 wrap.go:47] PATCH /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events/ppod-49.15820e84d75d61a9: (4.683223ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34758]
I0210 16:46:25.504952  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-49: (2.952635ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34754]
I0210 16:46:25.505460  123305 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0210 16:46:25.505848  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-47
I0210 16:46:25.505894  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-47
I0210 16:46:25.506032  123305 factory.go:647] Unable to schedule preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-47: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0210 16:46:25.506106  123305 factory.go:742] Updating pod condition for preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-47 to (PodScheduled==False, Reason=Unschedulable)
I0210 16:46:25.507888  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-47: (1.2454ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34758]
I0210 16:46:25.508879  123305 wrap.go:47] PUT /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-47/status: (2.0278ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34752]
I0210 16:46:25.509629  123305 wrap.go:47] PATCH /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events/ppod-47.15820e84d7c8dad6: (2.50144ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34760]
I0210 16:46:25.510725  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-47: (1.144826ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34752]
I0210 16:46:25.511011  123305 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0210 16:46:25.511228  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-43
I0210 16:46:25.511279  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-43
I0210 16:46:25.511407  123305 factory.go:647] Unable to schedule preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-43: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0210 16:46:25.511471  123305 factory.go:742] Updating pod condition for preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-43 to (PodScheduled==False, Reason=Unschedulable)
I0210 16:46:25.513157  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-43: (1.090763ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34760]
I0210 16:46:25.514188  123305 wrap.go:47] PUT /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-43/status: (1.708119ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34758]
I0210 16:46:25.515462  123305 wrap.go:47] PATCH /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events/ppod-43.15820e84d59e3d29: (2.593384ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34762]
I0210 16:46:25.516398  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-43: (1.816858ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34758]
I0210 16:46:25.516744  123305 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0210 16:46:25.516924  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-45
I0210 16:46:25.516946  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-45
I0210 16:46:25.517075  123305 factory.go:647] Unable to schedule preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-45: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0210 16:46:25.517154  123305 factory.go:742] Updating pod condition for preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-45 to (PodScheduled==False, Reason=Unschedulable)
I0210 16:46:25.518906  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-45: (1.234764ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34760]
I0210 16:46:25.519549  123305 wrap.go:47] PUT /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-45/status: (2.126257ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34762]
I0210 16:46:25.520588  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events: (2.755964ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34764]
I0210 16:46:25.521516  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-45: (1.545668ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34762]
I0210 16:46:25.521797  123305 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0210 16:46:25.522038  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-41
I0210 16:46:25.522056  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-41
I0210 16:46:25.522222  123305 factory.go:647] Unable to schedule preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-41: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0210 16:46:25.522296  123305 factory.go:742] Updating pod condition for preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-41 to (PodScheduled==False, Reason=Unschedulable)
I0210 16:46:25.523725  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-41: (1.146545ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34764]
I0210 16:46:25.525358  123305 wrap.go:47] PUT /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-41/status: (2.567041ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34760]
I0210 16:46:25.525775  123305 wrap.go:47] PATCH /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events/ppod-41.15820e84d53454e0: (2.439345ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34766]
I0210 16:46:25.526968  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-41: (1.032644ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34760]
I0210 16:46:25.527285  123305 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0210 16:46:25.527519  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-45
I0210 16:46:25.527561  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-45
I0210 16:46:25.527678  123305 factory.go:647] Unable to schedule preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-45: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0210 16:46:25.527748  123305 factory.go:742] Updating pod condition for preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-45 to (PodScheduled==False, Reason=Unschedulable)
I0210 16:46:25.529650  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-45: (1.094154ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34764]
I0210 16:46:25.530394  123305 wrap.go:47] PUT /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-45/status: (1.821295ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34766]
I0210 16:46:25.530885  123305 wrap.go:47] PATCH /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events/ppod-45.15820e84d98b7cd2: (2.267181ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34768]
I0210 16:46:25.531869  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-45: (1.057886ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34766]
I0210 16:46:25.532181  123305 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0210 16:46:25.532344  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-44
I0210 16:46:25.532366  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-44
I0210 16:46:25.532513  123305 factory.go:647] Unable to schedule preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-44: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0210 16:46:25.532568  123305 factory.go:742] Updating pod condition for preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-44 to (PodScheduled==False, Reason=Unschedulable)
I0210 16:46:25.533813  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-44: (1.019103ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34764]
I0210 16:46:25.534476  123305 wrap.go:47] PUT /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-44/status: (1.702837ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34768]
I0210 16:46:25.534784  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events: (1.365537ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34770]
I0210 16:46:25.536118  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-44: (1.120991ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34768]
I0210 16:46:25.536425  123305 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0210 16:46:25.536618  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-38
I0210 16:46:25.536641  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-38
I0210 16:46:25.536758  123305 factory.go:647] Unable to schedule preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-38: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0210 16:46:25.536808  123305 factory.go:742] Updating pod condition for preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-38 to (PodScheduled==False, Reason=Unschedulable)
I0210 16:46:25.539331  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-38: (2.273161ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34764]
I0210 16:46:25.539960  123305 wrap.go:47] PUT /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-38/status: (2.898415ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34770]
I0210 16:46:25.540139  123305 wrap.go:47] PATCH /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events/ppod-38.15820e84d4dacbdf: (2.623051ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34772]
I0210 16:46:25.541386  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-38: (1.010035ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34770]
I0210 16:46:25.541622  123305 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0210 16:46:25.541798  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-44
I0210 16:46:25.541816  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-44
I0210 16:46:25.541932  123305 factory.go:647] Unable to schedule preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-44: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0210 16:46:25.541980  123305 factory.go:742] Updating pod condition for preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-44 to (PodScheduled==False, Reason=Unschedulable)
I0210 16:46:25.543614  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-44: (1.378214ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34764]
I0210 16:46:25.543781  123305 wrap.go:47] PUT /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-44/status: (1.57281ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34770]
I0210 16:46:25.545154  123305 wrap.go:47] PATCH /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events/ppod-44.15820e84da76b5d8: (2.115842ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34774]
I0210 16:46:25.545207  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-44: (1.08902ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34770]
I0210 16:46:25.545624  123305 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0210 16:46:25.545782  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-42
I0210 16:46:25.545799  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-42
I0210 16:46:25.545874  123305 factory.go:647] Unable to schedule preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-42: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0210 16:46:25.545914  123305 factory.go:742] Updating pod condition for preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-42 to (PodScheduled==False, Reason=Unschedulable)
I0210 16:46:25.547857  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-42: (1.263794ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34764]
I0210 16:46:25.547926  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events: (1.374694ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34776]
I0210 16:46:25.548087  123305 wrap.go:47] PUT /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-42/status: (1.95895ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34774]
I0210 16:46:25.549735  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-42: (1.25171ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34776]
I0210 16:46:25.549974  123305 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0210 16:46:25.550144  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-40
I0210 16:46:25.550158  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-40
I0210 16:46:25.550251  123305 factory.go:647] Unable to schedule preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-40: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0210 16:46:25.550300  123305 factory.go:742] Updating pod condition for preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-40 to (PodScheduled==False, Reason=Unschedulable)
I0210 16:46:25.551580  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-40: (1.065153ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34776]
I0210 16:46:25.552060  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events: (1.175398ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34778]
I0210 16:46:25.552689  123305 wrap.go:47] PUT /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-40/status: (2.1319ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34764]
I0210 16:46:25.554197  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-40: (1.043545ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34778]
I0210 16:46:25.554437  123305 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0210 16:46:25.554634  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-37
I0210 16:46:25.554674  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-37
I0210 16:46:25.554793  123305 factory.go:647] Unable to schedule preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-37: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0210 16:46:25.554849  123305 factory.go:742] Updating pod condition for preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-37 to (PodScheduled==False, Reason=Unschedulable)
I0210 16:46:25.556460  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-37: (1.119514ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34776]
I0210 16:46:25.556877  123305 wrap.go:47] PUT /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-37/status: (1.480115ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34778]
I0210 16:46:25.557677  123305 wrap.go:47] PATCH /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events/ppod-37.15820e84d4917048: (2.140509ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34780]
I0210 16:46:25.558357  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-37: (1.046035ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34778]
I0210 16:46:25.558649  123305 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0210 16:46:25.558787  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-40
I0210 16:46:25.558803  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-40
I0210 16:46:25.558890  123305 factory.go:647] Unable to schedule preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-40: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0210 16:46:25.558934  123305 factory.go:742] Updating pod condition for preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-40 to (PodScheduled==False, Reason=Unschedulable)
I0210 16:46:25.560463  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-40: (1.299897ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34776]
I0210 16:46:25.560597  123305 wrap.go:47] PUT /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-40/status: (1.438582ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34780]
I0210 16:46:25.561456  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/preemptor-pod: (1.054411ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34784]
I0210 16:46:25.561549  123305 wrap.go:47] PATCH /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events/ppod-40.15820e84db8540d4: (1.76926ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34782]
I0210 16:46:25.562108  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-40: (952.135µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34780]
I0210 16:46:25.562382  123305 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0210 16:46:25.562585  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-39
I0210 16:46:25.562607  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-39
I0210 16:46:25.562701  123305 factory.go:647] Unable to schedule preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-39: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0210 16:46:25.562743  123305 factory.go:742] Updating pod condition for preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-39 to (PodScheduled==False, Reason=Unschedulable)
I0210 16:46:25.563922  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-39: (977.56µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34784]
I0210 16:46:25.564591  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events: (1.378904ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34786]
I0210 16:46:25.564595  123305 wrap.go:47] PUT /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-39/status: (1.63661ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34776]
I0210 16:46:25.565931  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-39: (932.984µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34776]
I0210 16:46:25.566230  123305 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0210 16:46:25.566356  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-36
I0210 16:46:25.566375  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-36
I0210 16:46:25.566519  123305 factory.go:647] Unable to schedule preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-36: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0210 16:46:25.566572  123305 factory.go:742] Updating pod condition for preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-36 to (PodScheduled==False, Reason=Unschedulable)
I0210 16:46:25.567915  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-36: (1.098787ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34784]
I0210 16:46:25.568351  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events: (1.233396ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34788]
I0210 16:46:25.568446  123305 wrap.go:47] PUT /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-36/status: (1.661948ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34776]
I0210 16:46:25.570030  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-36: (1.044748ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34784]
I0210 16:46:25.570270  123305 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0210 16:46:25.570430  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-39
I0210 16:46:25.570445  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-39
I0210 16:46:25.570545  123305 factory.go:647] Unable to schedule preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-39: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0210 16:46:25.570593  123305 factory.go:742] Updating pod condition for preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-39 to (PodScheduled==False, Reason=Unschedulable)
I0210 16:46:25.571845  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-39: (1.061437ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34784]
I0210 16:46:25.572362  123305 wrap.go:47] PUT /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-39/status: (1.588783ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34788]
I0210 16:46:25.573667  123305 wrap.go:47] PATCH /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events/ppod-39.15820e84dc43252a: (2.312274ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34790]
I0210 16:46:25.573730  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-39: (960.895µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34788]
I0210 16:46:25.573993  123305 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0210 16:46:25.574132  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-36
I0210 16:46:25.574147  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-36
I0210 16:46:25.574227  123305 factory.go:647] Unable to schedule preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-36: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0210 16:46:25.574265  123305 factory.go:742] Updating pod condition for preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-36 to (PodScheduled==False, Reason=Unschedulable)
I0210 16:46:25.575649  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-36: (1.073578ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34784]
I0210 16:46:25.575790  123305 wrap.go:47] PUT /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-36/status: (1.343957ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34790]
I0210 16:46:25.577685  123305 wrap.go:47] PATCH /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events/ppod-36.15820e84dc7d8c1f: (2.226977ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34792]
I0210 16:46:25.577688  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-36: (1.261177ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34784]
I0210 16:46:25.577952  123305 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0210 16:46:25.578180  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-35
I0210 16:46:25.578199  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-35
I0210 16:46:25.578311  123305 factory.go:647] Unable to schedule preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-35: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0210 16:46:25.578364  123305 factory.go:742] Updating pod condition for preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-35 to (PodScheduled==False, Reason=Unschedulable)
I0210 16:46:25.579786  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-35: (1.163711ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34790]
I0210 16:46:25.580070  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events: (1.282377ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34794]
I0210 16:46:25.580332  123305 wrap.go:47] PUT /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-35/status: (1.716199ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34784]
I0210 16:46:25.581689  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-35: (933.98µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34794]
I0210 16:46:25.581969  123305 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0210 16:46:25.582140  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-34
I0210 16:46:25.582157  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-34
I0210 16:46:25.582266  123305 factory.go:647] Unable to schedule preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-34: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0210 16:46:25.582318  123305 factory.go:742] Updating pod condition for preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-34 to (PodScheduled==False, Reason=Unschedulable)
I0210 16:46:25.583519  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-34: (1.020666ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34794]
I0210 16:46:25.584036  123305 wrap.go:47] PUT /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-34/status: (1.454945ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34790]
I0210 16:46:25.584390  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events: (1.444293ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34796]
I0210 16:46:25.585411  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-34: (1.005044ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34790]
I0210 16:46:25.585780  123305 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0210 16:46:25.585938  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-35
I0210 16:46:25.585953  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-35
I0210 16:46:25.586053  123305 factory.go:647] Unable to schedule preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-35: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0210 16:46:25.586101  123305 factory.go:742] Updating pod condition for preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-35 to (PodScheduled==False, Reason=Unschedulable)
I0210 16:46:25.587955  123305 wrap.go:47] PUT /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-35/status: (1.607529ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34796]
I0210 16:46:25.588351  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-35: (1.998775ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34794]
I0210 16:46:25.589451  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-35: (1.128157ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34796]
I0210 16:46:25.589628  123305 wrap.go:47] PATCH /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events/ppod-35.15820e84dd317a1e: (2.817827ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34798]
I0210 16:46:25.589727  123305 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0210 16:46:25.589830  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-34
I0210 16:46:25.589888  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-34
I0210 16:46:25.589965  123305 factory.go:647] Unable to schedule preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-34: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0210 16:46:25.590021  123305 factory.go:742] Updating pod condition for preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-34 to (PodScheduled==False, Reason=Unschedulable)
I0210 16:46:25.591439  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-34: (1.189625ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34794]
I0210 16:46:25.591735  123305 wrap.go:47] PUT /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-34/status: (1.507268ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34796]
I0210 16:46:25.593358  123305 wrap.go:47] PATCH /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events/ppod-34.15820e84dd6dcfb1: (2.5757ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34800]
I0210 16:46:25.593378  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-34: (1.236205ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34796]
I0210 16:46:25.593626  123305 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0210 16:46:25.593774  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-33
I0210 16:46:25.593789  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-33
I0210 16:46:25.593875  123305 factory.go:647] Unable to schedule preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-33: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0210 16:46:25.593916  123305 factory.go:742] Updating pod condition for preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-33 to (PodScheduled==False, Reason=Unschedulable)
I0210 16:46:25.596096  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events: (1.637214ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34802]
I0210 16:46:25.596598  123305 wrap.go:47] PUT /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-33/status: (2.140855ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34800]
I0210 16:46:25.596788  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-33: (2.372869ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34794]
I0210 16:46:25.597997  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-33: (1.013536ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34800]
I0210 16:46:25.598265  123305 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0210 16:46:25.598425  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-23
I0210 16:46:25.598441  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-23
I0210 16:46:25.598531  123305 factory.go:647] Unable to schedule preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-23: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0210 16:46:25.598572  123305 factory.go:742] Updating pod condition for preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-23 to (PodScheduled==False, Reason=Unschedulable)
I0210 16:46:25.600384  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-23: (1.230765ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34802]
I0210 16:46:25.600545  123305 wrap.go:47] PUT /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-23/status: (1.769478ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34794]
I0210 16:46:25.601334  123305 wrap.go:47] PATCH /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events/ppod-23.15820e84d343eea6: (2.1638ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34804]
I0210 16:46:25.602099  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-23: (1.137399ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34794]
I0210 16:46:25.602332  123305 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0210 16:46:25.602511  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-33
I0210 16:46:25.602528  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-33
I0210 16:46:25.602617  123305 factory.go:647] Unable to schedule preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-33: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0210 16:46:25.602660  123305 factory.go:742] Updating pod condition for preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-33 to (PodScheduled==False, Reason=Unschedulable)
I0210 16:46:25.604458  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-33: (1.478697ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34802]
I0210 16:46:25.604774  123305 wrap.go:47] PUT /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-33/status: (1.849885ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34804]
I0210 16:46:25.605711  123305 wrap.go:47] PATCH /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events/ppod-33.15820e84de1ed0ff: (2.316557ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34806]
I0210 16:46:25.606485  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-33: (1.266924ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34804]
I0210 16:46:25.606799  123305 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0210 16:46:25.607145  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-32
I0210 16:46:25.607222  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-32
I0210 16:46:25.607472  123305 factory.go:647] Unable to schedule preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-32: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0210 16:46:25.607538  123305 factory.go:742] Updating pod condition for preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-32 to (PodScheduled==False, Reason=Unschedulable)
I0210 16:46:25.609329  123305 wrap.go:47] PUT /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-32/status: (1.537202ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34806]
I0210 16:46:25.609431  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-32: (1.658403ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34802]
I0210 16:46:25.609705  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events: (1.572639ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34808]
I0210 16:46:25.611064  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-32: (1.141989ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34802]
I0210 16:46:25.611466  123305 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0210 16:46:25.611640  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-31
I0210 16:46:25.611656  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-31
I0210 16:46:25.611742  123305 factory.go:647] Unable to schedule preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-31: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0210 16:46:25.611786  123305 factory.go:742] Updating pod condition for preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-31 to (PodScheduled==False, Reason=Unschedulable)
I0210 16:46:25.613247  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-31: (1.200095ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34806]
I0210 16:46:25.613752  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events: (1.2769ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34810]
I0210 16:46:25.613851  123305 wrap.go:47] PUT /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-31/status: (1.798734ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34808]
I0210 16:46:25.615302  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-31: (1.072473ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34808]
I0210 16:46:25.615549  123305 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0210 16:46:25.615712  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-32
I0210 16:46:25.615726  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-32
I0210 16:46:25.615828  123305 factory.go:647] Unable to schedule preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-32: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0210 16:46:25.615877  123305 factory.go:742] Updating pod condition for preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-32 to (PodScheduled==False, Reason=Unschedulable)
I0210 16:46:25.617765  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-32: (1.271339ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34806]
I0210 16:46:25.618486  123305 wrap.go:47] PUT /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-32/status: (1.92206ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34808]
I0210 16:46:25.619543  123305 wrap.go:47] PATCH /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events/ppod-32.15820e84deee9e56: (2.961247ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34812]
I0210 16:46:25.619956  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-32: (1.066847ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34808]
I0210 16:46:25.620236  123305 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0210 16:46:25.620395  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-31
I0210 16:46:25.620418  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-31
I0210 16:46:25.620661  123305 factory.go:647] Unable to schedule preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-31: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0210 16:46:25.620781  123305 factory.go:742] Updating pod condition for preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-31 to (PodScheduled==False, Reason=Unschedulable)
I0210 16:46:25.622042  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-31: (1.071763ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34812]
I0210 16:46:25.622945  123305 wrap.go:47] PUT /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-31/status: (1.683524ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34806]
I0210 16:46:25.623898  123305 wrap.go:47] PATCH /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events/ppod-31.15820e84df2f7cdf: (2.088372ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34814]
I0210 16:46:25.624717  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-31: (1.018277ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34806]
I0210 16:46:25.624970  123305 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0210 16:46:25.625127  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-30
I0210 16:46:25.625144  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-30
I0210 16:46:25.625254  123305 factory.go:647] Unable to schedule preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-30: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0210 16:46:25.625304  123305 factory.go:742] Updating pod condition for preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-30 to (PodScheduled==False, Reason=Unschedulable)
I0210 16:46:25.626625  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-30: (1.082665ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34812]
I0210 16:46:25.627289  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events: (1.54269ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34816]
I0210 16:46:25.627567  123305 wrap.go:47] PUT /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-30/status: (2.023415ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34814]
I0210 16:46:25.629213  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-30: (1.251007ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34816]
I0210 16:46:25.629465  123305 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0210 16:46:25.629638  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-29
I0210 16:46:25.629652  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-29
I0210 16:46:25.629739  123305 factory.go:647] Unable to schedule preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-29: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0210 16:46:25.629785  123305 factory.go:742] Updating pod condition for preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-29 to (PodScheduled==False, Reason=Unschedulable)
I0210 16:46:25.631698  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events: (1.326492ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34818]
I0210 16:46:25.631798  123305 wrap.go:47] PUT /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-29/status: (1.718254ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34816]
I0210 16:46:25.632202  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-29: (2.12105ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34812]
I0210 16:46:25.633212  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-29: (1.057966ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34816]
I0210 16:46:25.633465  123305 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0210 16:46:25.633639  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-30
I0210 16:46:25.633656  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-30
I0210 16:46:25.633746  123305 factory.go:647] Unable to schedule preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-30: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0210 16:46:25.633793  123305 factory.go:742] Updating pod condition for preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-30 to (PodScheduled==False, Reason=Unschedulable)
I0210 16:46:25.635033  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-30: (1.040628ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34818]
I0210 16:46:25.635574  123305 wrap.go:47] PUT /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-30/status: (1.525801ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34812]
I0210 16:46:25.637086  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-30: (1.231081ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34812]
I0210 16:46:25.637101  123305 wrap.go:47] PATCH /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events/ppod-30.15820e84dffdc229: (2.325838ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34820]
I0210 16:46:25.637328  123305 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0210 16:46:25.637513  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-29
I0210 16:46:25.637536  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-29
I0210 16:46:25.637626  123305 factory.go:647] Unable to schedule preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-29: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0210 16:46:25.637668  123305 factory.go:742] Updating pod condition for preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-29 to (PodScheduled==False, Reason=Unschedulable)
I0210 16:46:25.639295  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-29: (899.1µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34818]
I0210 16:46:25.639450  123305 wrap.go:47] PUT /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-29/status: (1.54323ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34812]
I0210 16:46:25.640709  123305 wrap.go:47] PATCH /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events/ppod-29.15820e84e042219a: (2.251012ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34822]
I0210 16:46:25.640898  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-29: (1.013889ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34812]
I0210 16:46:25.641129  123305 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0210 16:46:25.641284  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-28
I0210 16:46:25.641299  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-28
I0210 16:46:25.641387  123305 factory.go:647] Unable to schedule preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-28: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0210 16:46:25.641523  123305 factory.go:742] Updating pod condition for preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-28 to (PodScheduled==False, Reason=Unschedulable)
I0210 16:46:25.642826  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-28: (1.126906ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34822]
I0210 16:46:25.643436  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events: (1.433864ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34824]
I0210 16:46:25.643653  123305 wrap.go:47] PUT /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-28/status: (1.915517ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34818]
I0210 16:46:25.645129  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-28: (1.064245ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34824]
I0210 16:46:25.645398  123305 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0210 16:46:25.645591  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-27
I0210 16:46:25.645628  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-27
I0210 16:46:25.645742  123305 factory.go:647] Unable to schedule preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-27: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0210 16:46:25.645790  123305 factory.go:742] Updating pod condition for preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-27 to (PodScheduled==False, Reason=Unschedulable)
I0210 16:46:25.669065  123305 wrap.go:47] PUT /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-27/status: (22.806362ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34824]
I0210 16:46:25.669106  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-27: (22.932893ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34822]
I0210 16:46:25.697381  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events: (49.970305ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34826]
I0210 16:46:25.697381  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/preemptor-pod: (30.767845ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34828]
I0210 16:46:25.697691  123305 preemption_test.go:583] Check unschedulable pods still exists and were never scheduled...
I0210 16:46:25.698737  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-27: (29.152156ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34822]
I0210 16:46:25.699026  123305 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0210 16:46:25.699229  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-28
I0210 16:46:25.699240  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-0: (1.379224ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34828]
I0210 16:46:25.699248  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-28
I0210 16:46:25.699335  123305 factory.go:647] Unable to schedule preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-28: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0210 16:46:25.699385  123305 factory.go:742] Updating pod condition for preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-28 to (PodScheduled==False, Reason=Unschedulable)
I0210 16:46:25.700961  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-1: (1.09965ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34830]
I0210 16:46:25.701408  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-28: (1.767034ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34824]
I0210 16:46:25.702320  123305 wrap.go:47] PUT /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-28/status: (2.635311ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34822]
I0210 16:46:25.703215  123305 wrap.go:47] PATCH /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events/ppod-28.15820e84e0f52608: (3.129493ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34832]
I0210 16:46:25.703523  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-2: (1.481629ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34830]
I0210 16:46:25.703874  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-28: (1.104495ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34822]
I0210 16:46:25.704136  123305 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0210 16:46:25.704314  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-27
I0210 16:46:25.704333  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-27
I0210 16:46:25.704441  123305 factory.go:647] Unable to schedule preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-27: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0210 16:46:25.704483  123305 factory.go:742] Updating pod condition for preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-27 to (PodScheduled==False, Reason=Unschedulable)
I0210 16:46:25.704850  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-3: (990.882µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34832]
I0210 16:46:25.706814  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-27: (1.858307ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34824]
I0210 16:46:25.706820  123305 wrap.go:47] PUT /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-27/status: (1.669085ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34822]
I0210 16:46:25.707338  123305 wrap.go:47] PATCH /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events/ppod-27.15820e84e13656ee: (2.016324ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34834]
I0210 16:46:25.707452  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-4: (1.791204ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34832]
I0210 16:46:25.708103  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-27: (944.345µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34824]
I0210 16:46:25.708399  123305 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0210 16:46:25.708570  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-26
I0210 16:46:25.708630  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-26
I0210 16:46:25.708713  123305 factory.go:647] Unable to schedule preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-26: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0210 16:46:25.708754  123305 factory.go:742] Updating pod condition for preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-26 to (PodScheduled==False, Reason=Unschedulable)
I0210 16:46:25.708824  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-5: (1.000431ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34832]
I0210 16:46:25.710636  123305 wrap.go:47] PUT /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-26/status: (1.666142ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34824]
I0210 16:46:25.711057  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-26: (1.909842ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34832]
I0210 16:46:25.711233  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events: (2.038465ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34822]
I0210 16:46:25.711507  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-6: (2.076901ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34836]
I0210 16:46:25.712020  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-26: (1.00049ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34824]
I0210 16:46:25.712297  123305 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0210 16:46:25.712519  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-21
I0210 16:46:25.712536  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-21
I0210 16:46:25.712652  123305 factory.go:647] Unable to schedule preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-21: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0210 16:46:25.712693  123305 factory.go:742] Updating pod condition for preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-21 to (PodScheduled==False, Reason=Unschedulable)
I0210 16:46:25.712936  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-7: (1.040061ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34822]
I0210 16:46:25.714025  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-21: (916.557µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34832]
I0210 16:46:25.714764  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-8: (1.293169ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34838]
I0210 16:46:25.715239  123305 wrap.go:47] PUT /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-21/status: (2.144569ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34824]
I0210 16:46:25.715763  123305 wrap.go:47] PATCH /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events/ppod-21.15820e84d200d032: (2.44705ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34822]
I0210 16:46:25.716400  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-9: (1.160567ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34838]
I0210 16:46:25.717284  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-21: (1.188858ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34824]
I0210 16:46:25.717533  123305 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0210 16:46:25.717705  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-26
I0210 16:46:25.717723  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-26
I0210 16:46:25.717811  123305 factory.go:647] Unable to schedule preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-26: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0210 16:46:25.717855  123305 factory.go:742] Updating pod condition for preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-26 to (PodScheduled==False, Reason=Unschedulable)
I0210 16:46:25.718024  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-10: (1.074535ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34822]
I0210 16:46:25.719710  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-11: (1.217312ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34822]
I0210 16:46:25.720180  123305 wrap.go:47] PUT /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-26/status: (1.821093ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34824]
I0210 16:46:25.720236  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-26: (1.708202ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34832]
I0210 16:46:25.721026  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-12: (991.16µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34822]
I0210 16:46:25.721389  123305 wrap.go:47] PATCH /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events/ppod-26.15820e84e4f7183a: (2.711188ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34840]
I0210 16:46:25.721565  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-26: (877.57µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34824]
I0210 16:46:25.721808  123305 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0210 16:46:25.721966  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-25
I0210 16:46:25.721993  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-25
I0210 16:46:25.722074  123305 factory.go:647] Unable to schedule preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-25: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0210 16:46:25.722118  123305 factory.go:742] Updating pod condition for preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-25 to (PodScheduled==False, Reason=Unschedulable)
I0210 16:46:25.722364  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-13: (982.192µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34822]
I0210 16:46:25.723571  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-25: (1.004638ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34832]
I0210 16:46:25.724375  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-14: (1.374873ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34842]
I0210 16:46:25.724431  123305 wrap.go:47] PUT /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-25/status: (1.870963ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34840]
I0210 16:46:25.725240  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events: (2.565224ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34822]
I0210 16:46:25.725759  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-15: (1.055464ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34842]
I0210 16:46:25.725835  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-25: (945.565µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34832]
I0210 16:46:25.726101  123305 generic_scheduler.go:1116] Node node1 is a potential node for preemption.
I0210 16:46:25.726247  123305 scheduling_queue.go:868] About to try and schedule pod preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-22
I0210 16:46:25.726266  123305 scheduler.go:453] Attempting to schedule pod: preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-22
I0210 16:46:25.726377  123305 factory.go:647] Unable to schedule preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-22: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0210 16:46:25.726420  123305 factory.go:742] Updating pod condition for preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/ppod-22 to (PodScheduled==False, Reason=Unschedulable)
I0210 16:46:25.727113  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-16: (990.461µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34842]
I0210 16:46:25.728045  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-22: (1.175776ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34844]
I0210 16:46:25.728354  123305 wrap.go:47] PUT /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-22/status: (1.701617ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34822]
I0210 16:46:25.728549  123305 wrap.go:47] GET /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/pods/ppod-17: (1.032698ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34842]
I0210 16:46:25.729183  123305 wrap.go:47] POST /api/v1/namespaces/preemption-race647dbe20-2d53-11e9-8f72-0242ac110002/events: (2.071311ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34846]
I0210 16:46:25.729863  123305 w