Result: FAILURE
Tests: 1 failed / 1419 succeeded
Started: 2019-05-13 17:15
Elapsed: 32m7s
Revision:
Builder: gke-prow-containerd-pool-99179761-dg09
links: {u'resultstore': {u'url': u'https://source.cloud.google.com/results/invocations/6635fa99-fc66-4fa1-89dd-0afac56e21a4/targets/test'}}
pod: 89c96a2d-75a2-11e9-bdf5-0a580a6c1546
resultstore: https://source.cloud.google.com/results/invocations/6635fa99-fc66-4fa1-89dd-0afac56e21a4/targets/test
infra-commit: c92d8e09a
repo: k8s.io/kubernetes
repo-commit: bfce916eb15ed270daff49b63ff121cc38f5c542
repos: {u'k8s.io/kubernetes': u'master'}

Test Failures


k8s.io/kubernetes/test/integration/scheduler TestPreemptionRaces 25s

go test -v k8s.io/kubernetes/test/integration/scheduler -run TestPreemptionRaces$
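The log below shows the test server dialing a local etcd at 127.0.0.1:2379, so reproducing this failure needs an etcd instance available first. Assuming a kubernetes/kubernetes checkout with etcd installed (for example via hack/install-etcd.sh), a roughly equivalent scoped invocation is the following sketch; exact make variable names can differ by release branch:

    make test-integration WHAT=./test/integration/scheduler GOFLAGS="-v" KUBE_TEST_ARGS="-run TestPreemptionRaces$"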
I0513 17:39:07.961879  108115 services.go:33] Network range for service cluster IPs is unspecified. Defaulting to {10.0.0.0 ffffff00}.
I0513 17:39:07.961944  108115 services.go:45] Setting service IP to "10.0.0.1" (read-write).
I0513 17:39:07.961966  108115 master.go:277] Node port range unspecified. Defaulting to 30000-32767.
I0513 17:39:07.962001  108115 master.go:233] Using reconciler: 
I0513 17:39:07.964840  108115 storage_factory.go:285] storing podtemplates in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"7190bb1e-4776-4f00-b186-51efacf597c5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0513 17:39:07.964974  108115 client.go:354] parsed scheme: ""
I0513 17:39:07.964998  108115 client.go:354] scheme "" not registered, fallback to default scheme
I0513 17:39:07.965046  108115 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0513 17:39:07.965121  108115 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0513 17:39:07.965686  108115 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0513 17:39:07.966083  108115 store.go:1320] Monitoring podtemplates count at <storage-prefix>//podtemplates
I0513 17:39:07.966132  108115 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"7190bb1e-4776-4f00-b186-51efacf597c5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0513 17:39:07.966360  108115 client.go:354] parsed scheme: ""
I0513 17:39:07.966389  108115 client.go:354] scheme "" not registered, fallback to default scheme
I0513 17:39:07.966435  108115 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0513 17:39:07.966665  108115 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0513 17:39:07.966729  108115 reflector.go:160] Listing and watching *core.PodTemplate from storage/cacher.go:/podtemplates
I0513 17:39:07.966963  108115 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0513 17:39:07.967678  108115 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0513 17:39:07.967961  108115 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0513 17:39:07.969194  108115 watch_cache.go:405] Replace watchCache (rev: 23813) 
I0513 17:39:07.969971  108115 store.go:1320] Monitoring events count at <storage-prefix>//events
I0513 17:39:07.970025  108115 storage_factory.go:285] storing limitranges in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"7190bb1e-4776-4f00-b186-51efacf597c5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0513 17:39:07.970134  108115 client.go:354] parsed scheme: ""
I0513 17:39:07.970159  108115 client.go:354] scheme "" not registered, fallback to default scheme
I0513 17:39:07.970274  108115 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0513 17:39:07.970350  108115 reflector.go:160] Listing and watching *core.Event from storage/cacher.go:/events
I0513 17:39:07.970616  108115 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0513 17:39:07.971132  108115 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0513 17:39:07.979045  108115 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0513 17:39:07.981239  108115 watch_cache.go:405] Replace watchCache (rev: 23813) 
I0513 17:39:07.994419  108115 store.go:1320] Monitoring limitranges count at <storage-prefix>//limitranges
I0513 17:39:07.994494  108115 reflector.go:160] Listing and watching *core.LimitRange from storage/cacher.go:/limitranges
I0513 17:39:07.994588  108115 storage_factory.go:285] storing resourcequotas in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"7190bb1e-4776-4f00-b186-51efacf597c5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0513 17:39:07.994854  108115 client.go:354] parsed scheme: ""
I0513 17:39:07.994986  108115 client.go:354] scheme "" not registered, fallback to default scheme
I0513 17:39:07.995145  108115 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0513 17:39:07.995217  108115 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0513 17:39:07.995768  108115 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0513 17:39:07.995800  108115 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0513 17:39:07.995874  108115 watch_cache.go:405] Replace watchCache (rev: 23813) 
I0513 17:39:07.999543  108115 store.go:1320] Monitoring resourcequotas count at <storage-prefix>//resourcequotas
I0513 17:39:07.999646  108115 reflector.go:160] Listing and watching *core.ResourceQuota from storage/cacher.go:/resourcequotas
I0513 17:39:08.001234  108115 watch_cache.go:405] Replace watchCache (rev: 23813) 
I0513 17:39:08.002633  108115 storage_factory.go:285] storing secrets in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"7190bb1e-4776-4f00-b186-51efacf597c5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0513 17:39:08.002858  108115 client.go:354] parsed scheme: ""
I0513 17:39:08.002925  108115 client.go:354] scheme "" not registered, fallback to default scheme
I0513 17:39:08.003097  108115 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0513 17:39:08.003289  108115 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0513 17:39:08.010195  108115 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0513 17:39:08.010313  108115 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0513 17:39:08.011711  108115 store.go:1320] Monitoring secrets count at <storage-prefix>//secrets
I0513 17:39:08.011831  108115 reflector.go:160] Listing and watching *core.Secret from storage/cacher.go:/secrets
I0513 17:39:08.013126  108115 watch_cache.go:405] Replace watchCache (rev: 23813) 
I0513 17:39:08.024143  108115 storage_factory.go:285] storing persistentvolumes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"7190bb1e-4776-4f00-b186-51efacf597c5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0513 17:39:08.024419  108115 client.go:354] parsed scheme: ""
I0513 17:39:08.024519  108115 client.go:354] scheme "" not registered, fallback to default scheme
I0513 17:39:08.024626  108115 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0513 17:39:08.024778  108115 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0513 17:39:08.026518  108115 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0513 17:39:08.026716  108115 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0513 17:39:08.027124  108115 store.go:1320] Monitoring persistentvolumes count at <storage-prefix>//persistentvolumes
I0513 17:39:08.027207  108115 reflector.go:160] Listing and watching *core.PersistentVolume from storage/cacher.go:/persistentvolumes
I0513 17:39:08.028560  108115 watch_cache.go:405] Replace watchCache (rev: 23813) 
I0513 17:39:08.036372  108115 storage_factory.go:285] storing persistentvolumeclaims in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"7190bb1e-4776-4f00-b186-51efacf597c5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0513 17:39:08.037522  108115 client.go:354] parsed scheme: ""
I0513 17:39:08.039608  108115 client.go:354] scheme "" not registered, fallback to default scheme
I0513 17:39:08.039694  108115 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0513 17:39:08.039824  108115 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0513 17:39:08.040264  108115 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0513 17:39:08.040568  108115 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0513 17:39:08.040575  108115 store.go:1320] Monitoring persistentvolumeclaims count at <storage-prefix>//persistentvolumeclaims
I0513 17:39:08.040611  108115 reflector.go:160] Listing and watching *core.PersistentVolumeClaim from storage/cacher.go:/persistentvolumeclaims
I0513 17:39:08.041027  108115 storage_factory.go:285] storing configmaps in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"7190bb1e-4776-4f00-b186-51efacf597c5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0513 17:39:08.041168  108115 client.go:354] parsed scheme: ""
I0513 17:39:08.041205  108115 client.go:354] scheme "" not registered, fallback to default scheme
I0513 17:39:08.041259  108115 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0513 17:39:08.041327  108115 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0513 17:39:08.041953  108115 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0513 17:39:08.042030  108115 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0513 17:39:08.042299  108115 store.go:1320] Monitoring configmaps count at <storage-prefix>//configmaps
I0513 17:39:08.042491  108115 reflector.go:160] Listing and watching *core.ConfigMap from storage/cacher.go:/configmaps
I0513 17:39:08.043003  108115 watch_cache.go:405] Replace watchCache (rev: 23813) 
I0513 17:39:08.044589  108115 watch_cache.go:405] Replace watchCache (rev: 23813) 
I0513 17:39:08.045018  108115 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"7190bb1e-4776-4f00-b186-51efacf597c5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0513 17:39:08.045200  108115 client.go:354] parsed scheme: ""
I0513 17:39:08.045234  108115 client.go:354] scheme "" not registered, fallback to default scheme
I0513 17:39:08.045326  108115 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0513 17:39:08.045445  108115 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0513 17:39:08.045886  108115 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0513 17:39:08.045948  108115 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0513 17:39:08.046153  108115 store.go:1320] Monitoring namespaces count at <storage-prefix>//namespaces
I0513 17:39:08.046215  108115 reflector.go:160] Listing and watching *core.Namespace from storage/cacher.go:/namespaces
I0513 17:39:08.046639  108115 storage_factory.go:285] storing endpoints in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"7190bb1e-4776-4f00-b186-51efacf597c5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0513 17:39:08.046765  108115 client.go:354] parsed scheme: ""
I0513 17:39:08.046821  108115 client.go:354] scheme "" not registered, fallback to default scheme
I0513 17:39:08.046867  108115 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0513 17:39:08.046959  108115 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0513 17:39:08.047099  108115 watch_cache.go:405] Replace watchCache (rev: 23813) 
I0513 17:39:08.047728  108115 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0513 17:39:08.047806  108115 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0513 17:39:08.047984  108115 store.go:1320] Monitoring endpoints count at <storage-prefix>//services/endpoints
I0513 17:39:08.048067  108115 reflector.go:160] Listing and watching *core.Endpoints from storage/cacher.go:/services/endpoints
I0513 17:39:08.048196  108115 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"7190bb1e-4776-4f00-b186-51efacf597c5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0513 17:39:08.048837  108115 watch_cache.go:405] Replace watchCache (rev: 23813) 
I0513 17:39:08.049138  108115 client.go:354] parsed scheme: ""
I0513 17:39:08.049186  108115 client.go:354] scheme "" not registered, fallback to default scheme
I0513 17:39:08.049345  108115 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0513 17:39:08.049417  108115 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0513 17:39:08.050239  108115 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0513 17:39:08.050287  108115 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0513 17:39:08.050460  108115 store.go:1320] Monitoring nodes count at <storage-prefix>//minions
I0513 17:39:08.050554  108115 reflector.go:160] Listing and watching *core.Node from storage/cacher.go:/minions
I0513 17:39:08.050836  108115 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"7190bb1e-4776-4f00-b186-51efacf597c5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0513 17:39:08.051447  108115 watch_cache.go:405] Replace watchCache (rev: 23813) 
I0513 17:39:08.051580  108115 client.go:354] parsed scheme: ""
I0513 17:39:08.051641  108115 client.go:354] scheme "" not registered, fallback to default scheme
I0513 17:39:08.051715  108115 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0513 17:39:08.052141  108115 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0513 17:39:08.052822  108115 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0513 17:39:08.052870  108115 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0513 17:39:08.053087  108115 store.go:1320] Monitoring pods count at <storage-prefix>//pods
I0513 17:39:08.053128  108115 reflector.go:160] Listing and watching *core.Pod from storage/cacher.go:/pods
I0513 17:39:08.054068  108115 watch_cache.go:405] Replace watchCache (rev: 23813) 
I0513 17:39:08.054641  108115 storage_factory.go:285] storing serviceaccounts in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"7190bb1e-4776-4f00-b186-51efacf597c5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0513 17:39:08.054925  108115 client.go:354] parsed scheme: ""
I0513 17:39:08.055012  108115 client.go:354] scheme "" not registered, fallback to default scheme
I0513 17:39:08.055090  108115 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0513 17:39:08.055198  108115 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0513 17:39:08.056419  108115 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0513 17:39:08.056625  108115 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0513 17:39:08.057536  108115 store.go:1320] Monitoring serviceaccounts count at <storage-prefix>//serviceaccounts
I0513 17:39:08.057705  108115 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"7190bb1e-4776-4f00-b186-51efacf597c5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0513 17:39:08.058088  108115 client.go:354] parsed scheme: ""
I0513 17:39:08.058123  108115 client.go:354] scheme "" not registered, fallback to default scheme
I0513 17:39:08.058172  108115 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0513 17:39:08.058219  108115 reflector.go:160] Listing and watching *core.ServiceAccount from storage/cacher.go:/serviceaccounts
I0513 17:39:08.058363  108115 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0513 17:39:08.058848  108115 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0513 17:39:08.059003  108115 store.go:1320] Monitoring services count at <storage-prefix>//services/specs
I0513 17:39:08.059033  108115 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"7190bb1e-4776-4f00-b186-51efacf597c5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0513 17:39:08.059125  108115 client.go:354] parsed scheme: ""
I0513 17:39:08.059140  108115 client.go:354] scheme "" not registered, fallback to default scheme
I0513 17:39:08.059167  108115 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0513 17:39:08.059218  108115 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0513 17:39:08.059332  108115 reflector.go:160] Listing and watching *core.Service from storage/cacher.go:/services/specs
I0513 17:39:08.059549  108115 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0513 17:39:08.059817  108115 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0513 17:39:08.059983  108115 client.go:354] parsed scheme: ""
I0513 17:39:08.060003  108115 client.go:354] scheme "" not registered, fallback to default scheme
I0513 17:39:08.060031  108115 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0513 17:39:08.060131  108115 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0513 17:39:08.060229  108115 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0513 17:39:08.060554  108115 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0513 17:39:08.060833  108115 watch_cache.go:405] Replace watchCache (rev: 23813) 
I0513 17:39:08.061173  108115 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0513 17:39:08.061540  108115 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"7190bb1e-4776-4f00-b186-51efacf597c5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0513 17:39:08.061732  108115 client.go:354] parsed scheme: ""
I0513 17:39:08.061755  108115 client.go:354] scheme "" not registered, fallback to default scheme
I0513 17:39:08.061818  108115 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0513 17:39:08.061884  108115 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0513 17:39:08.062195  108115 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0513 17:39:08.062306  108115 store.go:1320] Monitoring replicationcontrollers count at <storage-prefix>//controllers
I0513 17:39:08.062526  108115 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0513 17:39:08.062563  108115 reflector.go:160] Listing and watching *core.ReplicationController from storage/cacher.go:/controllers
I0513 17:39:08.063842  108115 watch_cache.go:405] Replace watchCache (rev: 23813) 
I0513 17:39:08.066223  108115 watch_cache.go:405] Replace watchCache (rev: 23813) 
I0513 17:39:08.080134  108115 master.go:417] Skipping disabled API group "auditregistration.k8s.io".
I0513 17:39:08.080180  108115 master.go:425] Enabling API group "authentication.k8s.io".
I0513 17:39:08.080195  108115 master.go:425] Enabling API group "authorization.k8s.io".
I0513 17:39:08.080415  108115 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"7190bb1e-4776-4f00-b186-51efacf597c5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0513 17:39:08.080592  108115 client.go:354] parsed scheme: ""
I0513 17:39:08.080611  108115 client.go:354] scheme "" not registered, fallback to default scheme
I0513 17:39:08.080678  108115 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0513 17:39:08.080737  108115 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0513 17:39:08.081062  108115 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0513 17:39:08.081193  108115 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0513 17:39:08.081330  108115 store.go:1320] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I0513 17:39:08.081460  108115 reflector.go:160] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I0513 17:39:08.081574  108115 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"7190bb1e-4776-4f00-b186-51efacf597c5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0513 17:39:08.081684  108115 client.go:354] parsed scheme: ""
I0513 17:39:08.081724  108115 client.go:354] scheme "" not registered, fallback to default scheme
I0513 17:39:08.081785  108115 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0513 17:39:08.081860  108115 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0513 17:39:08.082138  108115 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0513 17:39:08.082246  108115 store.go:1320] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I0513 17:39:08.082348  108115 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0513 17:39:08.082397  108115 reflector.go:160] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I0513 17:39:08.082417  108115 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"7190bb1e-4776-4f00-b186-51efacf597c5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0513 17:39:08.082496  108115 client.go:354] parsed scheme: ""
I0513 17:39:08.082521  108115 client.go:354] scheme "" not registered, fallback to default scheme
I0513 17:39:08.082549  108115 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0513 17:39:08.082689  108115 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0513 17:39:08.082892  108115 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0513 17:39:08.082975  108115 store.go:1320] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I0513 17:39:08.082988  108115 master.go:425] Enabling API group "autoscaling".
I0513 17:39:08.083105  108115 watch_cache.go:405] Replace watchCache (rev: 23813) 
I0513 17:39:08.083320  108115 storage_factory.go:285] storing jobs.batch in batch/v1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"7190bb1e-4776-4f00-b186-51efacf597c5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0513 17:39:08.083409  108115 client.go:354] parsed scheme: ""
I0513 17:39:08.083420  108115 client.go:354] scheme "" not registered, fallback to default scheme
I0513 17:39:08.083451  108115 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0513 17:39:08.083478  108115 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0513 17:39:08.083559  108115 reflector.go:160] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I0513 17:39:08.083663  108115 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0513 17:39:08.084334  108115 watch_cache.go:405] Replace watchCache (rev: 23813) 
I0513 17:39:08.084986  108115 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0513 17:39:08.085133  108115 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0513 17:39:08.085357  108115 store.go:1320] Monitoring jobs.batch count at <storage-prefix>//jobs
I0513 17:39:08.085562  108115 watch_cache.go:405] Replace watchCache (rev: 23813) 
I0513 17:39:08.085692  108115 storage_factory.go:285] storing cronjobs.batch in batch/v1beta1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"7190bb1e-4776-4f00-b186-51efacf597c5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0513 17:39:08.085786  108115 client.go:354] parsed scheme: ""
I0513 17:39:08.085804  108115 client.go:354] scheme "" not registered, fallback to default scheme
I0513 17:39:08.085866  108115 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0513 17:39:08.085483  108115 reflector.go:160] Listing and watching *batch.Job from storage/cacher.go:/jobs
I0513 17:39:08.086228  108115 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0513 17:39:08.086806  108115 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0513 17:39:08.086964  108115 store.go:1320] Monitoring cronjobs.batch count at <storage-prefix>//cronjobs
I0513 17:39:08.086980  108115 master.go:425] Enabling API group "batch".
I0513 17:39:08.087155  108115 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"7190bb1e-4776-4f00-b186-51efacf597c5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0513 17:39:08.087240  108115 client.go:354] parsed scheme: ""
I0513 17:39:08.087254  108115 client.go:354] scheme "" not registered, fallback to default scheme
I0513 17:39:08.087283  108115 watch_cache.go:405] Replace watchCache (rev: 23813) 
I0513 17:39:08.087302  108115 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0513 17:39:08.087381  108115 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0513 17:39:08.087411  108115 reflector.go:160] Listing and watching *batch.CronJob from storage/cacher.go:/cronjobs
I0513 17:39:08.087844  108115 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0513 17:39:08.090359  108115 watch_cache.go:405] Replace watchCache (rev: 23813) 
I0513 17:39:08.090925  108115 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0513 17:39:08.091106  108115 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0513 17:39:08.091143  108115 store.go:1320] Monitoring certificatesigningrequests.certificates.k8s.io count at <storage-prefix>//certificatesigningrequests
I0513 17:39:08.091177  108115 master.go:425] Enabling API group "certificates.k8s.io".
I0513 17:39:08.091246  108115 reflector.go:160] Listing and watching *certificates.CertificateSigningRequest from storage/cacher.go:/certificatesigningrequests
I0513 17:39:08.091476  108115 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"7190bb1e-4776-4f00-b186-51efacf597c5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0513 17:39:08.091566  108115 client.go:354] parsed scheme: ""
I0513 17:39:08.091581  108115 client.go:354] scheme "" not registered, fallback to default scheme
I0513 17:39:08.091612  108115 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0513 17:39:08.091735  108115 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0513 17:39:08.091942  108115 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0513 17:39:08.092133  108115 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0513 17:39:08.092195  108115 store.go:1320] Monitoring leases.coordination.k8s.io count at <storage-prefix>//leases
I0513 17:39:08.092227  108115 watch_cache.go:405] Replace watchCache (rev: 23813) 
I0513 17:39:08.092234  108115 reflector.go:160] Listing and watching *coordination.Lease from storage/cacher.go:/leases
I0513 17:39:08.092398  108115 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"7190bb1e-4776-4f00-b186-51efacf597c5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0513 17:39:08.092710  108115 client.go:354] parsed scheme: ""
I0513 17:39:08.092728  108115 client.go:354] scheme "" not registered, fallback to default scheme
I0513 17:39:08.092765  108115 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0513 17:39:08.092858  108115 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0513 17:39:08.093230  108115 watch_cache.go:405] Replace watchCache (rev: 23813) 
I0513 17:39:08.093279  108115 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0513 17:39:08.093411  108115 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0513 17:39:08.093454  108115 store.go:1320] Monitoring leases.coordination.k8s.io count at <storage-prefix>//leases
I0513 17:39:08.093489  108115 master.go:425] Enabling API group "coordination.k8s.io".
I0513 17:39:08.093658  108115 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"7190bb1e-4776-4f00-b186-51efacf597c5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0513 17:39:08.093728  108115 client.go:354] parsed scheme: ""
I0513 17:39:08.093738  108115 client.go:354] scheme "" not registered, fallback to default scheme
I0513 17:39:08.093793  108115 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0513 17:39:08.093856  108115 reflector.go:160] Listing and watching *coordination.Lease from storage/cacher.go:/leases
I0513 17:39:08.093979  108115 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0513 17:39:08.094917  108115 watch_cache.go:405] Replace watchCache (rev: 23813) 
I0513 17:39:08.099195  108115 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0513 17:39:08.099305  108115 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0513 17:39:08.099483  108115 store.go:1320] Monitoring replicationcontrollers count at <storage-prefix>//controllers
I0513 17:39:08.099554  108115 reflector.go:160] Listing and watching *core.ReplicationController from storage/cacher.go:/controllers
I0513 17:39:08.099745  108115 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"7190bb1e-4776-4f00-b186-51efacf597c5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0513 17:39:08.099865  108115 client.go:354] parsed scheme: ""
I0513 17:39:08.099931  108115 client.go:354] scheme "" not registered, fallback to default scheme
I0513 17:39:08.100015  108115 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0513 17:39:08.100084  108115 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0513 17:39:08.100372  108115 watch_cache.go:405] Replace watchCache (rev: 23813) 
I0513 17:39:08.100603  108115 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0513 17:39:08.100713  108115 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0513 17:39:08.100900  108115 store.go:1320] Monitoring daemonsets.apps count at <storage-prefix>//daemonsets
I0513 17:39:08.100976  108115 reflector.go:160] Listing and watching *apps.DaemonSet from storage/cacher.go:/daemonsets
I0513 17:39:08.101137  108115 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"7190bb1e-4776-4f00-b186-51efacf597c5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0513 17:39:08.101232  108115 client.go:354] parsed scheme: ""
I0513 17:39:08.101283  108115 client.go:354] scheme "" not registered, fallback to default scheme
I0513 17:39:08.101350  108115 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0513 17:39:08.101781  108115 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0513 17:39:08.104799  108115 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0513 17:39:08.104950  108115 store.go:1320] Monitoring deployments.apps count at <storage-prefix>//deployments
I0513 17:39:08.105044  108115 watch_cache.go:405] Replace watchCache (rev: 23813) 
I0513 17:39:08.105357  108115 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"7190bb1e-4776-4f00-b186-51efacf597c5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0513 17:39:08.105410  108115 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0513 17:39:08.105431  108115 client.go:354] parsed scheme: ""
I0513 17:39:08.105454  108115 client.go:354] scheme "" not registered, fallback to default scheme
I0513 17:39:08.105498  108115 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0513 17:39:08.105622  108115 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0513 17:39:08.105621  108115 reflector.go:160] Listing and watching *apps.Deployment from storage/cacher.go:/deployments
I0513 17:39:08.106133  108115 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0513 17:39:08.106565  108115 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0513 17:39:08.106642  108115 watch_cache.go:405] Replace watchCache (rev: 23813) 
I0513 17:39:08.106786  108115 store.go:1320] Monitoring ingresses.networking.k8s.io count at <storage-prefix>//ingress
I0513 17:39:08.106883  108115 reflector.go:160] Listing and watching *networking.Ingress from storage/cacher.go:/ingress
I0513 17:39:08.106973  108115 storage_factory.go:285] storing podsecuritypolicies.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"7190bb1e-4776-4f00-b186-51efacf597c5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0513 17:39:08.107057  108115 client.go:354] parsed scheme: ""
I0513 17:39:08.107423  108115 client.go:354] scheme "" not registered, fallback to default scheme
I0513 17:39:08.107530  108115 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0513 17:39:08.107685  108115 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0513 17:39:08.107912  108115 watch_cache.go:405] Replace watchCache (rev: 23813) 
I0513 17:39:08.107988  108115 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0513 17:39:08.108145  108115 store.go:1320] Monitoring podsecuritypolicies.policy count at <storage-prefix>//podsecuritypolicy
I0513 17:39:08.108248  108115 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0513 17:39:08.108339  108115 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"7190bb1e-4776-4f00-b186-51efacf597c5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0513 17:39:08.108409  108115 reflector.go:160] Listing and watching *policy.PodSecurityPolicy from storage/cacher.go:/podsecuritypolicy
I0513 17:39:08.108413  108115 client.go:354] parsed scheme: ""
I0513 17:39:08.108633  108115 client.go:354] scheme "" not registered, fallback to default scheme
I0513 17:39:08.108671  108115 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0513 17:39:08.108716  108115 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0513 17:39:08.109539  108115 watch_cache.go:405] Replace watchCache (rev: 23813) 
I0513 17:39:08.109558  108115 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0513 17:39:08.109727  108115 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0513 17:39:08.109851  108115 store.go:1320] Monitoring replicasets.apps count at <storage-prefix>//replicasets
I0513 17:39:08.109913  108115 reflector.go:160] Listing and watching *apps.ReplicaSet from storage/cacher.go:/replicasets
I0513 17:39:08.110022  108115 storage_factory.go:285] storing networkpolicies.networking.k8s.io in networking.k8s.io/v1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"7190bb1e-4776-4f00-b186-51efacf597c5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0513 17:39:08.110107  108115 client.go:354] parsed scheme: ""
I0513 17:39:08.110124  108115 client.go:354] scheme "" not registered, fallback to default scheme
I0513 17:39:08.110157  108115 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0513 17:39:08.110233  108115 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0513 17:39:08.110434  108115 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0513 17:39:08.110584  108115 store.go:1320] Monitoring networkpolicies.networking.k8s.io count at <storage-prefix>//networkpolicies
I0513 17:39:08.110606  108115 master.go:425] Enabling API group "extensions".
I0513 17:39:08.110759  108115 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0513 17:39:08.110804  108115 storage_factory.go:285] storing networkpolicies.networking.k8s.io in networking.k8s.io/v1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"7190bb1e-4776-4f00-b186-51efacf597c5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0513 17:39:08.110869  108115 client.go:354] parsed scheme: ""
I0513 17:39:08.110885  108115 client.go:354] scheme "" not registered, fallback to default scheme
I0513 17:39:08.110920  108115 reflector.go:160] Listing and watching *networking.NetworkPolicy from storage/cacher.go:/networkpolicies
I0513 17:39:08.110939  108115 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0513 17:39:08.111094  108115 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0513 17:39:08.111154  108115 watch_cache.go:405] Replace watchCache (rev: 23813) 
I0513 17:39:08.111364  108115 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0513 17:39:08.111612  108115 watch_cache.go:405] Replace watchCache (rev: 23813) 
I0513 17:39:08.111784  108115 store.go:1320] Monitoring networkpolicies.networking.k8s.io count at <storage-prefix>//networkpolicies
I0513 17:39:08.111926  108115 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0513 17:39:08.112041  108115 reflector.go:160] Listing and watching *networking.NetworkPolicy from storage/cacher.go:/networkpolicies
I0513 17:39:08.112329  108115 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"7190bb1e-4776-4f00-b186-51efacf597c5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0513 17:39:08.112547  108115 client.go:354] parsed scheme: ""
I0513 17:39:08.112597  108115 client.go:354] scheme "" not registered, fallback to default scheme
I0513 17:39:08.112647  108115 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0513 17:39:08.112962  108115 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0513 17:39:08.113012  108115 watch_cache.go:405] Replace watchCache (rev: 23813) 
I0513 17:39:08.113294  108115 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0513 17:39:08.113400  108115 store.go:1320] Monitoring ingresses.networking.k8s.io count at <storage-prefix>//ingress
I0513 17:39:08.113452  108115 master.go:425] Enabling API group "networking.k8s.io".
I0513 17:39:08.113519  108115 storage_factory.go:285] storing runtimeclasses.node.k8s.io in node.k8s.io/v1beta1, reading as node.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"7190bb1e-4776-4f00-b186-51efacf597c5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0513 17:39:08.113577  108115 reflector.go:160] Listing and watching *networking.Ingress from storage/cacher.go:/ingress
I0513 17:39:08.113634  108115 client.go:354] parsed scheme: ""
I0513 17:39:08.113659  108115 client.go:354] scheme "" not registered, fallback to default scheme
I0513 17:39:08.113545  108115 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0513 17:39:08.113748  108115 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0513 17:39:08.113840  108115 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0513 17:39:08.114147  108115 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0513 17:39:08.114313  108115 store.go:1320] Monitoring runtimeclasses.node.k8s.io count at <storage-prefix>//runtimeclasses
I0513 17:39:08.114336  108115 master.go:425] Enabling API group "node.k8s.io".
I0513 17:39:08.114397  108115 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0513 17:39:08.114449  108115 reflector.go:160] Listing and watching *node.RuntimeClass from storage/cacher.go:/runtimeclasses
I0513 17:39:08.114484  108115 storage_factory.go:285] storing poddisruptionbudgets.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"7190bb1e-4776-4f00-b186-51efacf597c5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0513 17:39:08.114935  108115 client.go:354] parsed scheme: ""
I0513 17:39:08.114955  108115 client.go:354] scheme "" not registered, fallback to default scheme
I0513 17:39:08.114989  108115 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0513 17:39:08.115053  108115 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0513 17:39:08.115599  108115 watch_cache.go:405] Replace watchCache (rev: 23813) 
I0513 17:39:08.116007  108115 watch_cache.go:405] Replace watchCache (rev: 23813) 
I0513 17:39:08.116253  108115 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0513 17:39:08.116316  108115 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0513 17:39:08.116426  108115 store.go:1320] Monitoring poddisruptionbudgets.policy count at <storage-prefix>//poddisruptionbudgets
I0513 17:39:08.116571  108115 reflector.go:160] Listing and watching *policy.PodDisruptionBudget from storage/cacher.go:/poddisruptionbudgets
I0513 17:39:08.117542  108115 watch_cache.go:405] Replace watchCache (rev: 23813) 
I0513 17:39:08.117781  108115 storage_factory.go:285] storing podsecuritypolicies.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"7190bb1e-4776-4f00-b186-51efacf597c5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0513 17:39:08.117866  108115 client.go:354] parsed scheme: ""
I0513 17:39:08.118074  108115 client.go:354] scheme "" not registered, fallback to default scheme
I0513 17:39:08.118287  108115 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0513 17:39:08.118361  108115 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0513 17:39:08.118682  108115 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0513 17:39:08.118752  108115 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0513 17:39:08.118994  108115 store.go:1320] Monitoring podsecuritypolicies.policy count at <storage-prefix>//podsecuritypolicy
I0513 17:39:08.119014  108115 master.go:425] Enabling API group "policy".
I0513 17:39:08.119073  108115 reflector.go:160] Listing and watching *policy.PodSecurityPolicy from storage/cacher.go:/podsecuritypolicy
I0513 17:39:08.119084  108115 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"7190bb1e-4776-4f00-b186-51efacf597c5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0513 17:39:08.119262  108115 client.go:354] parsed scheme: ""
I0513 17:39:08.119781  108115 client.go:354] scheme "" not registered, fallback to default scheme
I0513 17:39:08.119865  108115 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0513 17:39:08.119888  108115 watch_cache.go:405] Replace watchCache (rev: 23813) 
I0513 17:39:08.119958  108115 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0513 17:39:08.120540  108115 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0513 17:39:08.120662  108115 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0513 17:39:08.120770  108115 store.go:1320] Monitoring roles.rbac.authorization.k8s.io count at <storage-prefix>//roles
I0513 17:39:08.120836  108115 reflector.go:160] Listing and watching *rbac.Role from storage/cacher.go:/roles
I0513 17:39:08.120929  108115 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"7190bb1e-4776-4f00-b186-51efacf597c5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0513 17:39:08.121007  108115 client.go:354] parsed scheme: ""
I0513 17:39:08.121029  108115 client.go:354] scheme "" not registered, fallback to default scheme
I0513 17:39:08.121072  108115 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0513 17:39:08.121167  108115 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0513 17:39:08.121718  108115 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0513 17:39:08.121822  108115 watch_cache.go:405] Replace watchCache (rev: 23813) 
I0513 17:39:08.122031  108115 store.go:1320] Monitoring rolebindings.rbac.authorization.k8s.io count at <storage-prefix>//rolebindings
I0513 17:39:08.121874  108115 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0513 17:39:08.122078  108115 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"7190bb1e-4776-4f00-b186-51efacf597c5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0513 17:39:08.122109  108115 reflector.go:160] Listing and watching *rbac.RoleBinding from storage/cacher.go:/rolebindings
I0513 17:39:08.122158  108115 client.go:354] parsed scheme: ""
I0513 17:39:08.122172  108115 client.go:354] scheme "" not registered, fallback to default scheme
I0513 17:39:08.122199  108115 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0513 17:39:08.122536  108115 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0513 17:39:08.123282  108115 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0513 17:39:08.123360  108115 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0513 17:39:08.123562  108115 watch_cache.go:405] Replace watchCache (rev: 23813) 
I0513 17:39:08.123574  108115 store.go:1320] Monitoring clusterroles.rbac.authorization.k8s.io count at <storage-prefix>//clusterroles
I0513 17:39:08.123823  108115 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"7190bb1e-4776-4f00-b186-51efacf597c5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0513 17:39:08.123967  108115 client.go:354] parsed scheme: ""
I0513 17:39:08.124403  108115 client.go:354] scheme "" not registered, fallback to default scheme
I0513 17:39:08.124016  108115 reflector.go:160] Listing and watching *rbac.ClusterRole from storage/cacher.go:/clusterroles
I0513 17:39:08.124454  108115 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0513 17:39:08.124609  108115 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0513 17:39:08.125084  108115 watch_cache.go:405] Replace watchCache (rev: 23813) 
I0513 17:39:08.125119  108115 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0513 17:39:08.125279  108115 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0513 17:39:08.125287  108115 store.go:1320] Monitoring clusterrolebindings.rbac.authorization.k8s.io count at <storage-prefix>//clusterrolebindings
I0513 17:39:08.125306  108115 reflector.go:160] Listing and watching *rbac.ClusterRoleBinding from storage/cacher.go:/clusterrolebindings
I0513 17:39:08.125332  108115 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"7190bb1e-4776-4f00-b186-51efacf597c5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0513 17:39:08.125427  108115 client.go:354] parsed scheme: ""
I0513 17:39:08.125451  108115 client.go:354] scheme "" not registered, fallback to default scheme
I0513 17:39:08.125538  108115 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0513 17:39:08.125654  108115 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0513 17:39:08.125980  108115 watch_cache.go:405] Replace watchCache (rev: 23813) 
I0513 17:39:08.126011  108115 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0513 17:39:08.126077  108115 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0513 17:39:08.126174  108115 store.go:1320] Monitoring roles.rbac.authorization.k8s.io count at <storage-prefix>//roles
I0513 17:39:08.126250  108115 reflector.go:160] Listing and watching *rbac.Role from storage/cacher.go:/roles
I0513 17:39:08.126325  108115 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"7190bb1e-4776-4f00-b186-51efacf597c5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0513 17:39:08.126425  108115 client.go:354] parsed scheme: ""
I0513 17:39:08.126442  108115 client.go:354] scheme "" not registered, fallback to default scheme
I0513 17:39:08.126525  108115 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0513 17:39:08.126587  108115 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0513 17:39:08.126968  108115 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0513 17:39:08.127003  108115 watch_cache.go:405] Replace watchCache (rev: 23813) 
I0513 17:39:08.127055  108115 store.go:1320] Monitoring rolebindings.rbac.authorization.k8s.io count at <storage-prefix>//rolebindings
I0513 17:39:08.127103  108115 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0513 17:39:08.127098  108115 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"7190bb1e-4776-4f00-b186-51efacf597c5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0513 17:39:08.127158  108115 client.go:354] parsed scheme: ""
I0513 17:39:08.127168  108115 client.go:354] scheme "" not registered, fallback to default scheme
I0513 17:39:08.127223  108115 reflector.go:160] Listing and watching *rbac.RoleBinding from storage/cacher.go:/rolebindings
I0513 17:39:08.127226  108115 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0513 17:39:08.127343  108115 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0513 17:39:08.127614  108115 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0513 17:39:08.127678  108115 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0513 17:39:08.127724  108115 store.go:1320] Monitoring clusterroles.rbac.authorization.k8s.io count at <storage-prefix>//clusterroles
I0513 17:39:08.127789  108115 reflector.go:160] Listing and watching *rbac.ClusterRole from storage/cacher.go:/clusterroles
I0513 17:39:08.127898  108115 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"7190bb1e-4776-4f00-b186-51efacf597c5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0513 17:39:08.127979  108115 client.go:354] parsed scheme: ""
I0513 17:39:08.127992  108115 client.go:354] scheme "" not registered, fallback to default scheme
I0513 17:39:08.128046  108115 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0513 17:39:08.128106  108115 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0513 17:39:08.128133  108115 watch_cache.go:405] Replace watchCache (rev: 23813) 
I0513 17:39:08.128390  108115 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0513 17:39:08.128514  108115 store.go:1320] Monitoring clusterrolebindings.rbac.authorization.k8s.io count at <storage-prefix>//clusterrolebindings
I0513 17:39:08.128543  108115 master.go:425] Enabling API group "rbac.authorization.k8s.io".
I0513 17:39:08.128564  108115 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0513 17:39:08.128597  108115 reflector.go:160] Listing and watching *rbac.ClusterRoleBinding from storage/cacher.go:/clusterrolebindings
I0513 17:39:08.128695  108115 watch_cache.go:405] Replace watchCache (rev: 23813) 
I0513 17:39:08.129259  108115 watch_cache.go:405] Replace watchCache (rev: 23813) 
I0513 17:39:08.130772  108115 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"7190bb1e-4776-4f00-b186-51efacf597c5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0513 17:39:08.130851  108115 client.go:354] parsed scheme: ""
I0513 17:39:08.130868  108115 client.go:354] scheme "" not registered, fallback to default scheme
I0513 17:39:08.130905  108115 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0513 17:39:08.130959  108115 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0513 17:39:08.131278  108115 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0513 17:39:08.131359  108115 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0513 17:39:08.131400  108115 store.go:1320] Monitoring priorityclasses.scheduling.k8s.io count at <storage-prefix>//priorityclasses
I0513 17:39:08.131437  108115 reflector.go:160] Listing and watching *scheduling.PriorityClass from storage/cacher.go:/priorityclasses
I0513 17:39:08.131596  108115 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"7190bb1e-4776-4f00-b186-51efacf597c5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0513 17:39:08.131665  108115 client.go:354] parsed scheme: ""
I0513 17:39:08.131680  108115 client.go:354] scheme "" not registered, fallback to default scheme
I0513 17:39:08.131729  108115 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0513 17:39:08.131852  108115 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0513 17:39:08.132205  108115 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0513 17:39:08.132251  108115 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0513 17:39:08.132239  108115 watch_cache.go:405] Replace watchCache (rev: 23813) 
I0513 17:39:08.132345  108115 store.go:1320] Monitoring priorityclasses.scheduling.k8s.io count at <storage-prefix>//priorityclasses
I0513 17:39:08.132399  108115 reflector.go:160] Listing and watching *scheduling.PriorityClass from storage/cacher.go:/priorityclasses
I0513 17:39:08.132424  108115 master.go:425] Enabling API group "scheduling.k8s.io".
I0513 17:39:08.132725  108115 master.go:417] Skipping disabled API group "settings.k8s.io".
I0513 17:39:08.132981  108115 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"7190bb1e-4776-4f00-b186-51efacf597c5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0513 17:39:08.133077  108115 client.go:354] parsed scheme: ""
I0513 17:39:08.133097  108115 client.go:354] scheme "" not registered, fallback to default scheme
I0513 17:39:08.133135  108115 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0513 17:39:08.133180  108115 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0513 17:39:08.133304  108115 watch_cache.go:405] Replace watchCache (rev: 23813) 
I0513 17:39:08.133431  108115 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0513 17:39:08.133517  108115 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0513 17:39:08.133611  108115 store.go:1320] Monitoring storageclasses.storage.k8s.io count at <storage-prefix>//storageclasses
I0513 17:39:08.133677  108115 reflector.go:160] Listing and watching *storage.StorageClass from storage/cacher.go:/storageclasses
I0513 17:39:08.133801  108115 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"7190bb1e-4776-4f00-b186-51efacf597c5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0513 17:39:08.133871  108115 client.go:354] parsed scheme: ""
I0513 17:39:08.133889  108115 client.go:354] scheme "" not registered, fallback to default scheme
I0513 17:39:08.133945  108115 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0513 17:39:08.134188  108115 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0513 17:39:08.134557  108115 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0513 17:39:08.134612  108115 watch_cache.go:405] Replace watchCache (rev: 23813) 
I0513 17:39:08.134656  108115 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0513 17:39:08.134836  108115 store.go:1320] Monitoring volumeattachments.storage.k8s.io count at <storage-prefix>//volumeattachments
I0513 17:39:08.134907  108115 reflector.go:160] Listing and watching *storage.VolumeAttachment from storage/cacher.go:/volumeattachments
I0513 17:39:08.134902  108115 storage_factory.go:285] storing csinodes.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"7190bb1e-4776-4f00-b186-51efacf597c5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0513 17:39:08.135103  108115 client.go:354] parsed scheme: ""
I0513 17:39:08.135143  108115 client.go:354] scheme "" not registered, fallback to default scheme
I0513 17:39:08.135186  108115 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0513 17:39:08.135247  108115 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0513 17:39:08.135476  108115 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0513 17:39:08.135619  108115 store.go:1320] Monitoring csinodes.storage.k8s.io count at <storage-prefix>//csinodes
I0513 17:39:08.135645  108115 storage_factory.go:285] storing csidrivers.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"7190bb1e-4776-4f00-b186-51efacf597c5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0513 17:39:08.135662  108115 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0513 17:39:08.135737  108115 client.go:354] parsed scheme: ""
I0513 17:39:08.135764  108115 client.go:354] scheme "" not registered, fallback to default scheme
I0513 17:39:08.135768  108115 watch_cache.go:405] Replace watchCache (rev: 23813) 
I0513 17:39:08.135790  108115 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0513 17:39:08.135865  108115 reflector.go:160] Listing and watching *storage.CSINode from storage/cacher.go:/csinodes
I0513 17:39:08.136122  108115 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0513 17:39:08.136657  108115 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0513 17:39:08.136794  108115 watch_cache.go:405] Replace watchCache (rev: 23813) 
I0513 17:39:08.136829  108115 reflector.go:160] Listing and watching *storage.CSIDriver from storage/cacher.go:/csidrivers
I0513 17:39:08.136809  108115 store.go:1320] Monitoring csidrivers.storage.k8s.io count at <storage-prefix>//csidrivers
I0513 17:39:08.137198  108115 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0513 17:39:08.137227  108115 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"7190bb1e-4776-4f00-b186-51efacf597c5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0513 17:39:08.137327  108115 client.go:354] parsed scheme: ""
I0513 17:39:08.137368  108115 client.go:354] scheme "" not registered, fallback to default scheme
I0513 17:39:08.137413  108115 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0513 17:39:08.137796  108115 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0513 17:39:08.138081  108115 watch_cache.go:405] Replace watchCache (rev: 23813) 
I0513 17:39:08.138455  108115 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0513 17:39:08.138549  108115 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0513 17:39:08.138641  108115 store.go:1320] Monitoring storageclasses.storage.k8s.io count at <storage-prefix>//storageclasses
I0513 17:39:08.138750  108115 reflector.go:160] Listing and watching *storage.StorageClass from storage/cacher.go:/storageclasses
I0513 17:39:08.138816  108115 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"7190bb1e-4776-4f00-b186-51efacf597c5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0513 17:39:08.138876  108115 client.go:354] parsed scheme: ""
I0513 17:39:08.138892  108115 client.go:354] scheme "" not registered, fallback to default scheme
I0513 17:39:08.138923  108115 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0513 17:39:08.138972  108115 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0513 17:39:08.139196  108115 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0513 17:39:08.139316  108115 store.go:1320] Monitoring volumeattachments.storage.k8s.io count at <storage-prefix>//volumeattachments
I0513 17:39:08.139344  108115 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0513 17:39:08.139365  108115 master.go:425] Enabling API group "storage.k8s.io".
I0513 17:39:08.139450  108115 reflector.go:160] Listing and watching *storage.VolumeAttachment from storage/cacher.go:/volumeattachments
I0513 17:39:08.139541  108115 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"7190bb1e-4776-4f00-b186-51efacf597c5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0513 17:39:08.139571  108115 watch_cache.go:405] Replace watchCache (rev: 23813) 
I0513 17:39:08.139619  108115 client.go:354] parsed scheme: ""
I0513 17:39:08.139639  108115 client.go:354] scheme "" not registered, fallback to default scheme
I0513 17:39:08.139688  108115 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0513 17:39:08.139731  108115 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0513 17:39:08.139975  108115 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0513 17:39:08.140117  108115 store.go:1320] Monitoring deployments.apps count at <storage-prefix>//deployments
I0513 17:39:08.140155  108115 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0513 17:39:08.140173  108115 watch_cache.go:405] Replace watchCache (rev: 23813) 
I0513 17:39:08.140276  108115 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"7190bb1e-4776-4f00-b186-51efacf597c5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0513 17:39:08.140248  108115 reflector.go:160] Listing and watching *apps.Deployment from storage/cacher.go:/deployments
I0513 17:39:08.140363  108115 client.go:354] parsed scheme: ""
I0513 17:39:08.140385  108115 client.go:354] scheme "" not registered, fallback to default scheme
I0513 17:39:08.140420  108115 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0513 17:39:08.140540  108115 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0513 17:39:08.140966  108115 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0513 17:39:08.141003  108115 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0513 17:39:08.141123  108115 watch_cache.go:405] Replace watchCache (rev: 23813) 
I0513 17:39:08.141146  108115 reflector.go:160] Listing and watching *apps.StatefulSet from storage/cacher.go:/statefulsets
I0513 17:39:08.141126  108115 store.go:1320] Monitoring statefulsets.apps count at <storage-prefix>//statefulsets
I0513 17:39:08.141352  108115 storage_factory.go:285] storing controllerrevisions.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"7190bb1e-4776-4f00-b186-51efacf597c5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0513 17:39:08.141434  108115 client.go:354] parsed scheme: ""
I0513 17:39:08.141450  108115 client.go:354] scheme "" not registered, fallback to default scheme
I0513 17:39:08.141492  108115 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0513 17:39:08.141570  108115 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0513 17:39:08.142181  108115 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0513 17:39:08.142258  108115 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0513 17:39:08.142285  108115 store.go:1320] Monitoring controllerrevisions.apps count at <storage-prefix>//controllerrevisions
I0513 17:39:08.142354  108115 watch_cache.go:405] Replace watchCache (rev: 23813) 
I0513 17:39:08.142305  108115 reflector.go:160] Listing and watching *apps.ControllerRevision from storage/cacher.go:/controllerrevisions
I0513 17:39:08.142555  108115 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"7190bb1e-4776-4f00-b186-51efacf597c5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0513 17:39:08.142851  108115 client.go:354] parsed scheme: ""
I0513 17:39:08.142895  108115 client.go:354] scheme "" not registered, fallback to default scheme
I0513 17:39:08.142953  108115 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0513 17:39:08.143016  108115 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0513 17:39:08.143223  108115 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0513 17:39:08.143276  108115 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0513 17:39:08.143224  108115 watch_cache.go:405] Replace watchCache (rev: 23813) 
I0513 17:39:08.143453  108115 store.go:1320] Monitoring deployments.apps count at <storage-prefix>//deployments
I0513 17:39:08.143544  108115 reflector.go:160] Listing and watching *apps.Deployment from storage/cacher.go:/deployments
I0513 17:39:08.143659  108115 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"7190bb1e-4776-4f00-b186-51efacf597c5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0513 17:39:08.143726  108115 client.go:354] parsed scheme: ""
I0513 17:39:08.143740  108115 client.go:354] scheme "" not registered, fallback to default scheme
I0513 17:39:08.143774  108115 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0513 17:39:08.143835  108115 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0513 17:39:08.144190  108115 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0513 17:39:08.144226  108115 watch_cache.go:405] Replace watchCache (rev: 23813) 
I0513 17:39:08.144309  108115 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0513 17:39:08.144328  108115 store.go:1320] Monitoring statefulsets.apps count at <storage-prefix>//statefulsets
I0513 17:39:08.144353  108115 reflector.go:160] Listing and watching *apps.StatefulSet from storage/cacher.go:/statefulsets
I0513 17:39:08.144601  108115 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"7190bb1e-4776-4f00-b186-51efacf597c5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0513 17:39:08.144844  108115 client.go:354] parsed scheme: ""
I0513 17:39:08.144907  108115 client.go:354] scheme "" not registered, fallback to default scheme
I0513 17:39:08.144991  108115 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0513 17:39:08.145123  108115 watch_cache.go:405] Replace watchCache (rev: 23813) 
I0513 17:39:08.145220  108115 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0513 17:39:08.145535  108115 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0513 17:39:08.145586  108115 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0513 17:39:08.145736  108115 store.go:1320] Monitoring daemonsets.apps count at <storage-prefix>//daemonsets
I0513 17:39:08.145791  108115 reflector.go:160] Listing and watching *apps.DaemonSet from storage/cacher.go:/daemonsets
I0513 17:39:08.145889  108115 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"7190bb1e-4776-4f00-b186-51efacf597c5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0513 17:39:08.145956  108115 client.go:354] parsed scheme: ""
I0513 17:39:08.145973  108115 client.go:354] scheme "" not registered, fallback to default scheme
I0513 17:39:08.146005  108115 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0513 17:39:08.146050  108115 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0513 17:39:08.146270  108115 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0513 17:39:08.146315  108115 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0513 17:39:08.146377  108115 store.go:1320] Monitoring replicasets.apps count at <storage-prefix>//replicasets
I0513 17:39:08.146396  108115 reflector.go:160] Listing and watching *apps.ReplicaSet from storage/cacher.go:/replicasets
I0513 17:39:08.146565  108115 watch_cache.go:405] Replace watchCache (rev: 23813) 
I0513 17:39:08.146694  108115 storage_factory.go:285] storing controllerrevisions.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"7190bb1e-4776-4f00-b186-51efacf597c5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0513 17:39:08.146847  108115 client.go:354] parsed scheme: ""
I0513 17:39:08.146864  108115 client.go:354] scheme "" not registered, fallback to default scheme
I0513 17:39:08.146890  108115 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0513 17:39:08.146995  108115 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0513 17:39:08.147085  108115 watch_cache.go:405] Replace watchCache (rev: 23813) 
I0513 17:39:08.147901  108115 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0513 17:39:08.147991  108115 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0513 17:39:08.148074  108115 store.go:1320] Monitoring controllerrevisions.apps count at <storage-prefix>//controllerrevisions
I0513 17:39:08.148109  108115 reflector.go:160] Listing and watching *apps.ControllerRevision from storage/cacher.go:/controllerrevisions
I0513 17:39:08.148234  108115 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"7190bb1e-4776-4f00-b186-51efacf597c5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0513 17:39:08.148309  108115 client.go:354] parsed scheme: ""
I0513 17:39:08.148338  108115 client.go:354] scheme "" not registered, fallback to default scheme
I0513 17:39:08.148383  108115 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0513 17:39:08.148431  108115 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0513 17:39:08.148725  108115 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0513 17:39:08.148845  108115 store.go:1320] Monitoring deployments.apps count at <storage-prefix>//deployments
I0513 17:39:08.149172  108115 watch_cache.go:405] Replace watchCache (rev: 23813) 
I0513 17:39:08.149158  108115 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"7190bb1e-4776-4f00-b186-51efacf597c5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0513 17:39:08.149297  108115 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0513 17:39:08.149298  108115 client.go:354] parsed scheme: ""
I0513 17:39:08.149389  108115 client.go:354] scheme "" not registered, fallback to default scheme
I0513 17:39:08.149417  108115 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0513 17:39:08.149460  108115 reflector.go:160] Listing and watching *apps.Deployment from storage/cacher.go:/deployments
I0513 17:39:08.149554  108115 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0513 17:39:08.149921  108115 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0513 17:39:08.150032  108115 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0513 17:39:08.150082  108115 watch_cache.go:405] Replace watchCache (rev: 23813) 
I0513 17:39:08.150211  108115 store.go:1320] Monitoring statefulsets.apps count at <storage-prefix>//statefulsets
I0513 17:39:08.150294  108115 reflector.go:160] Listing and watching *apps.StatefulSet from storage/cacher.go:/statefulsets
I0513 17:39:08.150648  108115 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"7190bb1e-4776-4f00-b186-51efacf597c5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0513 17:39:08.150912  108115 client.go:354] parsed scheme: ""
I0513 17:39:08.150931  108115 client.go:354] scheme "" not registered, fallback to default scheme
I0513 17:39:08.150961  108115 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0513 17:39:08.151014  108115 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0513 17:39:08.151101  108115 watch_cache.go:405] Replace watchCache (rev: 23813) 
I0513 17:39:08.151418  108115 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0513 17:39:08.151523  108115 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0513 17:39:08.151581  108115 store.go:1320] Monitoring daemonsets.apps count at <storage-prefix>//daemonsets
I0513 17:39:08.151624  108115 reflector.go:160] Listing and watching *apps.DaemonSet from storage/cacher.go:/daemonsets
I0513 17:39:08.151763  108115 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"7190bb1e-4776-4f00-b186-51efacf597c5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0513 17:39:08.152047  108115 client.go:354] parsed scheme: ""
I0513 17:39:08.152069  108115 client.go:354] scheme "" not registered, fallback to default scheme
I0513 17:39:08.152116  108115 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0513 17:39:08.152226  108115 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0513 17:39:08.152487  108115 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0513 17:39:08.152532  108115 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0513 17:39:08.152663  108115 store.go:1320] Monitoring replicasets.apps count at <storage-prefix>//replicasets
I0513 17:39:08.152686  108115 watch_cache.go:405] Replace watchCache (rev: 23813) 
I0513 17:39:08.152732  108115 reflector.go:160] Listing and watching *apps.ReplicaSet from storage/cacher.go:/replicasets
I0513 17:39:08.152880  108115 storage_factory.go:285] storing controllerrevisions.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"7190bb1e-4776-4f00-b186-51efacf597c5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0513 17:39:08.152946  108115 client.go:354] parsed scheme: ""
I0513 17:39:08.152974  108115 client.go:354] scheme "" not registered, fallback to default scheme
I0513 17:39:08.153022  108115 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0513 17:39:08.153517  108115 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0513 17:39:08.153601  108115 watch_cache.go:405] Replace watchCache (rev: 23813) 
I0513 17:39:08.153806  108115 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0513 17:39:08.153954  108115 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0513 17:39:08.153957  108115 store.go:1320] Monitoring controllerrevisions.apps count at <storage-prefix>//controllerrevisions
I0513 17:39:08.153978  108115 reflector.go:160] Listing and watching *apps.ControllerRevision from storage/cacher.go:/controllerrevisions
I0513 17:39:08.154021  108115 master.go:425] Enabling API group "apps".
I0513 17:39:08.154108  108115 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"7190bb1e-4776-4f00-b186-51efacf597c5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0513 17:39:08.154191  108115 client.go:354] parsed scheme: ""
I0513 17:39:08.154227  108115 client.go:354] scheme "" not registered, fallback to default scheme
I0513 17:39:08.154276  108115 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0513 17:39:08.154356  108115 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0513 17:39:08.154695  108115 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0513 17:39:08.154758  108115 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0513 17:39:08.154908  108115 store.go:1320] Monitoring validatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//validatingwebhookconfigurations
I0513 17:39:08.154931  108115 watch_cache.go:405] Replace watchCache (rev: 23813) 
I0513 17:39:08.154970  108115 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"7190bb1e-4776-4f00-b186-51efacf597c5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0513 17:39:08.154956  108115 reflector.go:160] Listing and watching *admissionregistration.ValidatingWebhookConfiguration from storage/cacher.go:/validatingwebhookconfigurations
I0513 17:39:08.155080  108115 client.go:354] parsed scheme: ""
I0513 17:39:08.155111  108115 client.go:354] scheme "" not registered, fallback to default scheme
I0513 17:39:08.155154  108115 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0513 17:39:08.155549  108115 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0513 17:39:08.155780  108115 watch_cache.go:405] Replace watchCache (rev: 23813) 
I0513 17:39:08.155851  108115 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0513 17:39:08.155975  108115 store.go:1320] Monitoring mutatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//mutatingwebhookconfigurations
I0513 17:39:08.156005  108115 master.go:425] Enabling API group "admissionregistration.k8s.io".
I0513 17:39:08.156011  108115 reflector.go:160] Listing and watching *admissionregistration.MutatingWebhookConfiguration from storage/cacher.go:/mutatingwebhookconfigurations
I0513 17:39:08.155980  108115 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0513 17:39:08.156038  108115 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"7190bb1e-4776-4f00-b186-51efacf597c5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0513 17:39:08.156257  108115 client.go:354] parsed scheme: ""
I0513 17:39:08.156277  108115 client.go:354] scheme "" not registered, fallback to default scheme
I0513 17:39:08.156357  108115 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0513 17:39:08.156424  108115 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0513 17:39:08.156697  108115 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0513 17:39:08.156910  108115 store.go:1320] Monitoring events count at <storage-prefix>//events
I0513 17:39:08.156936  108115 master.go:425] Enabling API group "events.k8s.io".
I0513 17:39:08.157148  108115 watch_cache.go:405] Replace watchCache (rev: 23813) 
I0513 17:39:08.157404  108115 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0513 17:39:08.157554  108115 reflector.go:160] Listing and watching *core.Event from storage/cacher.go:/events
I0513 17:39:08.158280  108115 watch_cache.go:405] Replace watchCache (rev: 23813) 
W0513 17:39:08.163021  108115 genericapiserver.go:347] Skipping API batch/v2alpha1 because it has no resources.
W0513 17:39:08.171700  108115 genericapiserver.go:347] Skipping API node.k8s.io/v1alpha1 because it has no resources.
W0513 17:39:08.176765  108115 genericapiserver.go:347] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
W0513 17:39:08.178025  108115 genericapiserver.go:347] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
W0513 17:39:08.180710  108115 genericapiserver.go:347] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
I0513 17:39:08.194709  108115 healthz.go:170] healthz check etcd failed: etcd client connection not yet established
I0513 17:39:08.194827  108115 healthz.go:170] healthz check poststarthook/bootstrap-controller failed: not finished
I0513 17:39:08.194845  108115 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0513 17:39:08.194859  108115 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0513 17:39:08.194867  108115 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0513 17:39:08.194901  108115 healthz.go:184] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[-]poststarthook/bootstrap-controller failed: reason withheld
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0513 17:39:08.195056  108115 wrap.go:47] GET /healthz: (560.302µs) 500
goroutine 29976 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0148899d0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0148899d0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0148b82a0, 0x1f4)
net/http.Error(0x7fe87b13ddb0, 0xc0147c8c70, 0xc00cbd2680, 0x18a, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fe87b13ddb0, 0xc0147c8c70, 0xc0148ce400)
net/http.HandlerFunc.ServeHTTP(0xc014851b20, 0x7fe87b13ddb0, 0xc0147c8c70, 0xc0148ce400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0148a8400, 0x7fe87b13ddb0, 0xc0147c8c70, 0xc0148ce400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc006d06cb0, 0x7fe87b13ddb0, 0xc0147c8c70, 0xc0148ce400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x4356850, 0xe, 0xc012416e10, 0xc006d06cb0, 0x7fe87b13ddb0, 0xc0147c8c70, 0xc0148ce400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fe87b13ddb0, 0xc0147c8c70, 0xc0148ce400)
net/http.HandlerFunc.ServeHTTP(0xc011b9d240, 0x7fe87b13ddb0, 0xc0147c8c70, 0xc0148ce400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fe87b13ddb0, 0xc0147c8c70, 0xc0148ce400)
net/http.HandlerFunc.ServeHTTP(0xc010d9eba0, 0x7fe87b13ddb0, 0xc0147c8c70, 0xc0148ce400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fe87b13ddb0, 0xc0147c8c70, 0xc0148ce400)
net/http.HandlerFunc.ServeHTTP(0xc011b9d280, 0x7fe87b13ddb0, 0xc0147c8c70, 0xc0148ce400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fe87b13ddb0, 0xc0147c8c70, 0xc0148ce300)
net/http.HandlerFunc.ServeHTTP(0xc01249f040, 0x7fe87b13ddb0, 0xc0147c8c70, 0xc0148ce300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc01457b320, 0xc0114144a0, 0x7374040, 0xc0147c8c70, 0xc0148ce300)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[-]poststarthook/bootstrap-controller failed: reason withheld\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:46542]
I0513 17:39:08.195905  108115 wrap.go:47] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.247372ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46544]
I0513 17:39:08.198921  108115 wrap.go:47] GET /api/v1/services: (1.072635ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46544]
I0513 17:39:08.202592  108115 wrap.go:47] GET /api/v1/services: (956.421µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46544]
I0513 17:39:08.204844  108115 healthz.go:170] healthz check etcd failed: etcd client connection not yet established
I0513 17:39:08.204949  108115 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0513 17:39:08.204994  108115 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0513 17:39:08.205014  108115 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0513 17:39:08.205051  108115 healthz.go:184] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0513 17:39:08.205219  108115 wrap.go:47] GET /healthz: (675.012µs) 500
goroutine 29998 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0149360e0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0149360e0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc013b8bee0, 0x1f4)
net/http.Error(0x7fe87b13ddb0, 0xc01112eee0, 0xc002638a80, 0x175, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fe87b13ddb0, 0xc01112eee0, 0xc013a1d400)
net/http.HandlerFunc.ServeHTTP(0xc014851b20, 0x7fe87b13ddb0, 0xc01112eee0, 0xc013a1d400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0148a8400, 0x7fe87b13ddb0, 0xc01112eee0, 0xc013a1d400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc006d06cb0, 0x7fe87b13ddb0, 0xc01112eee0, 0xc013a1d400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x4356850, 0xe, 0xc012416e10, 0xc006d06cb0, 0x7fe87b13ddb0, 0xc01112eee0, 0xc013a1d400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fe87b13ddb0, 0xc01112eee0, 0xc013a1d400)
net/http.HandlerFunc.ServeHTTP(0xc011b9d240, 0x7fe87b13ddb0, 0xc01112eee0, 0xc013a1d400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fe87b13ddb0, 0xc01112eee0, 0xc013a1d400)
net/http.HandlerFunc.ServeHTTP(0xc010d9eba0, 0x7fe87b13ddb0, 0xc01112eee0, 0xc013a1d400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fe87b13ddb0, 0xc01112eee0, 0xc013a1d400)
net/http.HandlerFunc.ServeHTTP(0xc011b9d280, 0x7fe87b13ddb0, 0xc01112eee0, 0xc013a1d400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fe87b13ddb0, 0xc01112eee0, 0xc013a1d300)
net/http.HandlerFunc.ServeHTTP(0xc01249f040, 0x7fe87b13ddb0, 0xc01112eee0, 0xc013a1d300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc013c154a0, 0xc0114144a0, 0x7374040, 0xc01112eee0, 0xc013a1d300)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46544]
I0513 17:39:08.206215  108115 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.163719ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46542]
I0513 17:39:08.206882  108115 wrap.go:47] GET /api/v1/services: (1.280201ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46544]
I0513 17:39:08.207281  108115 wrap.go:47] GET /api/v1/services: (969.145µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46546]
I0513 17:39:08.207998  108115 wrap.go:47] POST /api/v1/namespaces: (1.277263ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46542]
I0513 17:39:08.209430  108115 wrap.go:47] GET /api/v1/namespaces/kube-public: (800.151µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46546]
I0513 17:39:08.211114  108115 wrap.go:47] POST /api/v1/namespaces: (1.266051ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46546]
I0513 17:39:08.212328  108115 wrap.go:47] GET /api/v1/namespaces/kube-node-lease: (861.464µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46546]
I0513 17:39:08.214227  108115 wrap.go:47] POST /api/v1/namespaces: (1.357298ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46546]
I0513 17:39:08.295967  108115 healthz.go:170] healthz check etcd failed: etcd client connection not yet established
I0513 17:39:08.296006  108115 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0513 17:39:08.296020  108115 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0513 17:39:08.296030  108115 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0513 17:39:08.296052  108115 healthz.go:184] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0513 17:39:08.296201  108115 wrap.go:47] GET /healthz: (397.232µs) 500
goroutine 29857 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0103acaf0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0103acaf0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc013c98ea0, 0x1f4)
net/http.Error(0x7fe87b13ddb0, 0xc0112aaab8, 0xc0149e6300, 0x175, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fe87b13ddb0, 0xc0112aaab8, 0xc013a73000)
net/http.HandlerFunc.ServeHTTP(0xc014851b20, 0x7fe87b13ddb0, 0xc0112aaab8, 0xc013a73000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0148a8400, 0x7fe87b13ddb0, 0xc0112aaab8, 0xc013a73000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc006d06cb0, 0x7fe87b13ddb0, 0xc0112aaab8, 0xc013a73000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x4356850, 0xe, 0xc012416e10, 0xc006d06cb0, 0x7fe87b13ddb0, 0xc0112aaab8, 0xc013a73000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fe87b13ddb0, 0xc0112aaab8, 0xc013a73000)
net/http.HandlerFunc.ServeHTTP(0xc011b9d240, 0x7fe87b13ddb0, 0xc0112aaab8, 0xc013a73000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fe87b13ddb0, 0xc0112aaab8, 0xc013a73000)
net/http.HandlerFunc.ServeHTTP(0xc010d9eba0, 0x7fe87b13ddb0, 0xc0112aaab8, 0xc013a73000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fe87b13ddb0, 0xc0112aaab8, 0xc013a73000)
net/http.HandlerFunc.ServeHTTP(0xc011b9d280, 0x7fe87b13ddb0, 0xc0112aaab8, 0xc013a73000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fe87b13ddb0, 0xc0112aaab8, 0xc013a72f00)
net/http.HandlerFunc.ServeHTTP(0xc01249f040, 0x7fe87b13ddb0, 0xc0112aaab8, 0xc013a72f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc0149b80c0, 0xc0114144a0, 0x7374040, 0xc0112aaab8, 0xc013a72f00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:46546]
I0513 17:39:08.306118  108115 healthz.go:170] healthz check etcd failed: etcd client connection not yet established
I0513 17:39:08.306158  108115 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0513 17:39:08.306171  108115 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0513 17:39:08.306180  108115 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0513 17:39:08.306188  108115 healthz.go:184] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0513 17:39:08.306583  108115 wrap.go:47] GET /healthz: (624.664µs) 500
goroutine 30034 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0149a8700, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0149a8700, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0149da400, 0x1f4)
net/http.Error(0x7fe87b13ddb0, 0xc0147c8e58, 0xc006c29e00, 0x175, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fe87b13ddb0, 0xc0147c8e58, 0xc0149d8600)
net/http.HandlerFunc.ServeHTTP(0xc014851b20, 0x7fe87b13ddb0, 0xc0147c8e58, 0xc0149d8600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0148a8400, 0x7fe87b13ddb0, 0xc0147c8e58, 0xc0149d8600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc006d06cb0, 0x7fe87b13ddb0, 0xc0147c8e58, 0xc0149d8600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x4356850, 0xe, 0xc012416e10, 0xc006d06cb0, 0x7fe87b13ddb0, 0xc0147c8e58, 0xc0149d8600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fe87b13ddb0, 0xc0147c8e58, 0xc0149d8600)
net/http.HandlerFunc.ServeHTTP(0xc011b9d240, 0x7fe87b13ddb0, 0xc0147c8e58, 0xc0149d8600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fe87b13ddb0, 0xc0147c8e58, 0xc0149d8600)
net/http.HandlerFunc.ServeHTTP(0xc010d9eba0, 0x7fe87b13ddb0, 0xc0147c8e58, 0xc0149d8600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fe87b13ddb0, 0xc0147c8e58, 0xc0149d8600)
net/http.HandlerFunc.ServeHTTP(0xc011b9d280, 0x7fe87b13ddb0, 0xc0147c8e58, 0xc0149d8600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fe87b13ddb0, 0xc0147c8e58, 0xc0149d8500)
net/http.HandlerFunc.ServeHTTP(0xc01249f040, 0x7fe87b13ddb0, 0xc0147c8e58, 0xc0149d8500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc0149fe120, 0xc0114144a0, 0x7374040, 0xc0147c8e58, 0xc0149d8500)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46546]
I0513 17:39:08.396332  108115 healthz.go:170] healthz check etcd failed: etcd client connection not yet established
I0513 17:39:08.396439  108115 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0513 17:39:08.396559  108115 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0513 17:39:08.396585  108115 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0513 17:39:08.396600  108115 healthz.go:184] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0513 17:39:08.396826  108115 wrap.go:47] GET /healthz: (635.357µs) 500
goroutine 30051 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0103acbd0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0103acbd0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc013c98f40, 0x1f4)
net/http.Error(0x7fe87b13ddb0, 0xc0112aaac0, 0xc0149e6900, 0x175, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fe87b13ddb0, 0xc0112aaac0, 0xc013a73400)
net/http.HandlerFunc.ServeHTTP(0xc014851b20, 0x7fe87b13ddb0, 0xc0112aaac0, 0xc013a73400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0148a8400, 0x7fe87b13ddb0, 0xc0112aaac0, 0xc013a73400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc006d06cb0, 0x7fe87b13ddb0, 0xc0112aaac0, 0xc013a73400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x4356850, 0xe, 0xc012416e10, 0xc006d06cb0, 0x7fe87b13ddb0, 0xc0112aaac0, 0xc013a73400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fe87b13ddb0, 0xc0112aaac0, 0xc013a73400)
net/http.HandlerFunc.ServeHTTP(0xc011b9d240, 0x7fe87b13ddb0, 0xc0112aaac0, 0xc013a73400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fe87b13ddb0, 0xc0112aaac0, 0xc013a73400)
net/http.HandlerFunc.ServeHTTP(0xc010d9eba0, 0x7fe87b13ddb0, 0xc0112aaac0, 0xc013a73400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fe87b13ddb0, 0xc0112aaac0, 0xc013a73400)
net/http.HandlerFunc.ServeHTTP(0xc011b9d280, 0x7fe87b13ddb0, 0xc0112aaac0, 0xc013a73400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fe87b13ddb0, 0xc0112aaac0, 0xc013a73300)
net/http.HandlerFunc.ServeHTTP(0xc01249f040, 0x7fe87b13ddb0, 0xc0112aaac0, 0xc013a73300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc0149b8180, 0xc0114144a0, 0x7374040, 0xc0112aaac0, 0xc013a73300)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:46546]
I0513 17:39:08.406176  108115 healthz.go:170] healthz check etcd failed: etcd client connection not yet established
I0513 17:39:08.406216  108115 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0513 17:39:08.406229  108115 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0513 17:39:08.406261  108115 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0513 17:39:08.406268  108115 healthz.go:184] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0513 17:39:08.406478  108115 wrap.go:47] GET /healthz: (432.113µs) 500
goroutine 30053 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0103acd20, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0103acd20, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc013c98fe0, 0x1f4)
net/http.Error(0x7fe87b13ddb0, 0xc0112aaac8, 0xc0149e6f00, 0x175, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fe87b13ddb0, 0xc0112aaac8, 0xc013a73800)
net/http.HandlerFunc.ServeHTTP(0xc014851b20, 0x7fe87b13ddb0, 0xc0112aaac8, 0xc013a73800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0148a8400, 0x7fe87b13ddb0, 0xc0112aaac8, 0xc013a73800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc006d06cb0, 0x7fe87b13ddb0, 0xc0112aaac8, 0xc013a73800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x4356850, 0xe, 0xc012416e10, 0xc006d06cb0, 0x7fe87b13ddb0, 0xc0112aaac8, 0xc013a73800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fe87b13ddb0, 0xc0112aaac8, 0xc013a73800)
net/http.HandlerFunc.ServeHTTP(0xc011b9d240, 0x7fe87b13ddb0, 0xc0112aaac8, 0xc013a73800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fe87b13ddb0, 0xc0112aaac8, 0xc013a73800)
net/http.HandlerFunc.ServeHTTP(0xc010d9eba0, 0x7fe87b13ddb0, 0xc0112aaac8, 0xc013a73800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fe87b13ddb0, 0xc0112aaac8, 0xc013a73800)
net/http.HandlerFunc.ServeHTTP(0xc011b9d280, 0x7fe87b13ddb0, 0xc0112aaac8, 0xc013a73800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fe87b13ddb0, 0xc0112aaac8, 0xc013a73700)
net/http.HandlerFunc.ServeHTTP(0xc01249f040, 0x7fe87b13ddb0, 0xc0112aaac8, 0xc013a73700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc0149b8240, 0xc0114144a0, 0x7374040, 0xc0112aaac8, 0xc013a73700)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46546]
I0513 17:39:08.496633  108115 healthz.go:170] healthz check etcd failed: etcd client connection not yet established
I0513 17:39:08.496672  108115 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0513 17:39:08.496683  108115 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0513 17:39:08.496692  108115 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0513 17:39:08.496699  108115 healthz.go:184] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0513 17:39:08.496870  108115 wrap.go:47] GET /healthz: (415.679µs) 500
goroutine 29336 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0103464d0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0103464d0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0134c6b40, 0x1f4)
net/http.Error(0x7fe87b13ddb0, 0xc000b69ee8, 0xc002f04900, 0x175, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fe87b13ddb0, 0xc000b69ee8, 0xc012451f00)
net/http.HandlerFunc.ServeHTTP(0xc014851b20, 0x7fe87b13ddb0, 0xc000b69ee8, 0xc012451f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0148a8400, 0x7fe87b13ddb0, 0xc000b69ee8, 0xc012451f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc006d06cb0, 0x7fe87b13ddb0, 0xc000b69ee8, 0xc012451f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x4356850, 0xe, 0xc012416e10, 0xc006d06cb0, 0x7fe87b13ddb0, 0xc000b69ee8, 0xc012451f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fe87b13ddb0, 0xc000b69ee8, 0xc012451f00)
net/http.HandlerFunc.ServeHTTP(0xc011b9d240, 0x7fe87b13ddb0, 0xc000b69ee8, 0xc012451f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fe87b13ddb0, 0xc000b69ee8, 0xc012451f00)
net/http.HandlerFunc.ServeHTTP(0xc010d9eba0, 0x7fe87b13ddb0, 0xc000b69ee8, 0xc012451f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fe87b13ddb0, 0xc000b69ee8, 0xc012451f00)
net/http.HandlerFunc.ServeHTTP(0xc011b9d280, 0x7fe87b13ddb0, 0xc000b69ee8, 0xc012451f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fe87b13ddb0, 0xc000b69ee8, 0xc012451e00)
net/http.HandlerFunc.ServeHTTP(0xc01249f040, 0x7fe87b13ddb0, 0xc000b69ee8, 0xc012451e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc0134fb1a0, 0xc0114144a0, 0x7374040, 0xc000b69ee8, 0xc012451e00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:46546]
I0513 17:39:08.506078  108115 healthz.go:170] healthz check etcd failed: etcd client connection not yet established
I0513 17:39:08.506111  108115 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0513 17:39:08.506181  108115 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0513 17:39:08.506195  108115 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0513 17:39:08.506231  108115 healthz.go:184] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0513 17:39:08.506443  108115 wrap.go:47] GET /healthz: (531.974µs) 500
goroutine 30036 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0149a8850, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0149a8850, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0149da680, 0x1f4)
net/http.Error(0x7fe87b13ddb0, 0xc0147c8e80, 0xc014a16600, 0x175, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fe87b13ddb0, 0xc0147c8e80, 0xc0149d8d00)
net/http.HandlerFunc.ServeHTTP(0xc014851b20, 0x7fe87b13ddb0, 0xc0147c8e80, 0xc0149d8d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0148a8400, 0x7fe87b13ddb0, 0xc0147c8e80, 0xc0149d8d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc006d06cb0, 0x7fe87b13ddb0, 0xc0147c8e80, 0xc0149d8d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x4356850, 0xe, 0xc012416e10, 0xc006d06cb0, 0x7fe87b13ddb0, 0xc0147c8e80, 0xc0149d8d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fe87b13ddb0, 0xc0147c8e80, 0xc0149d8d00)
net/http.HandlerFunc.ServeHTTP(0xc011b9d240, 0x7fe87b13ddb0, 0xc0147c8e80, 0xc0149d8d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fe87b13ddb0, 0xc0147c8e80, 0xc0149d8d00)
net/http.HandlerFunc.ServeHTTP(0xc010d9eba0, 0x7fe87b13ddb0, 0xc0147c8e80, 0xc0149d8d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fe87b13ddb0, 0xc0147c8e80, 0xc0149d8d00)
net/http.HandlerFunc.ServeHTTP(0xc011b9d280, 0x7fe87b13ddb0, 0xc0147c8e80, 0xc0149d8d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fe87b13ddb0, 0xc0147c8e80, 0xc0149d8c00)
net/http.HandlerFunc.ServeHTTP(0xc01249f040, 0x7fe87b13ddb0, 0xc0147c8e80, 0xc0149d8c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc0149fe300, 0xc0114144a0, 0x7374040, 0xc0147c8e80, 0xc0149d8c00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46546]
I0513 17:39:08.595942  108115 healthz.go:170] healthz check etcd failed: etcd client connection not yet established
I0513 17:39:08.595982  108115 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0513 17:39:08.595995  108115 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0513 17:39:08.596004  108115 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0513 17:39:08.596011  108115 healthz.go:184] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0513 17:39:08.596179  108115 wrap.go:47] GET /healthz: (386.483µs) 500
goroutine 30055 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0103acee0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0103acee0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc013c99080, 0x1f4)
net/http.Error(0x7fe87b13ddb0, 0xc0112aaad0, 0xc0149e7500, 0x175, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fe87b13ddb0, 0xc0112aaad0, 0xc013a73c00)
net/http.HandlerFunc.ServeHTTP(0xc014851b20, 0x7fe87b13ddb0, 0xc0112aaad0, 0xc013a73c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0148a8400, 0x7fe87b13ddb0, 0xc0112aaad0, 0xc013a73c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc006d06cb0, 0x7fe87b13ddb0, 0xc0112aaad0, 0xc013a73c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x4356850, 0xe, 0xc012416e10, 0xc006d06cb0, 0x7fe87b13ddb0, 0xc0112aaad0, 0xc013a73c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fe87b13ddb0, 0xc0112aaad0, 0xc013a73c00)
net/http.HandlerFunc.ServeHTTP(0xc011b9d240, 0x7fe87b13ddb0, 0xc0112aaad0, 0xc013a73c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fe87b13ddb0, 0xc0112aaad0, 0xc013a73c00)
net/http.HandlerFunc.ServeHTTP(0xc010d9eba0, 0x7fe87b13ddb0, 0xc0112aaad0, 0xc013a73c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fe87b13ddb0, 0xc0112aaad0, 0xc013a73c00)
net/http.HandlerFunc.ServeHTTP(0xc011b9d280, 0x7fe87b13ddb0, 0xc0112aaad0, 0xc013a73c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fe87b13ddb0, 0xc0112aaad0, 0xc013a73b00)
net/http.HandlerFunc.ServeHTTP(0xc01249f040, 0x7fe87b13ddb0, 0xc0112aaad0, 0xc013a73b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc0149b8300, 0xc0114144a0, 0x7374040, 0xc0112aaad0, 0xc013a73b00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:46546]
I0513 17:39:08.606102  108115 healthz.go:170] healthz check etcd failed: etcd client connection not yet established
I0513 17:39:08.606146  108115 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0513 17:39:08.606159  108115 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0513 17:39:08.606169  108115 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0513 17:39:08.606176  108115 healthz.go:184] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0513 17:39:08.606353  108115 wrap.go:47] GET /healthz: (406.743µs) 500
goroutine 30057 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0103ad030, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0103ad030, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc013c99120, 0x1f4)
net/http.Error(0x7fe87b13ddb0, 0xc0112aaad8, 0xc0149e7b00, 0x175, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fe87b13ddb0, 0xc0112aaad8, 0xc014a96000)
net/http.HandlerFunc.ServeHTTP(0xc014851b20, 0x7fe87b13ddb0, 0xc0112aaad8, 0xc014a96000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0148a8400, 0x7fe87b13ddb0, 0xc0112aaad8, 0xc014a96000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc006d06cb0, 0x7fe87b13ddb0, 0xc0112aaad8, 0xc014a96000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x4356850, 0xe, 0xc012416e10, 0xc006d06cb0, 0x7fe87b13ddb0, 0xc0112aaad8, 0xc014a96000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fe87b13ddb0, 0xc0112aaad8, 0xc014a96000)
net/http.HandlerFunc.ServeHTTP(0xc011b9d240, 0x7fe87b13ddb0, 0xc0112aaad8, 0xc014a96000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fe87b13ddb0, 0xc0112aaad8, 0xc014a96000)
net/http.HandlerFunc.ServeHTTP(0xc010d9eba0, 0x7fe87b13ddb0, 0xc0112aaad8, 0xc014a96000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fe87b13ddb0, 0xc0112aaad8, 0xc014a96000)
net/http.HandlerFunc.ServeHTTP(0xc011b9d280, 0x7fe87b13ddb0, 0xc0112aaad8, 0xc014a96000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fe87b13ddb0, 0xc0112aaad8, 0xc013a73f00)
net/http.HandlerFunc.ServeHTTP(0xc01249f040, 0x7fe87b13ddb0, 0xc0112aaad8, 0xc013a73f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc0149b83c0, 0xc0114144a0, 0x7374040, 0xc0112aaad8, 0xc013a73f00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46546]
I0513 17:39:08.695920  108115 healthz.go:170] healthz check etcd failed: etcd client connection not yet established
I0513 17:39:08.695961  108115 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0513 17:39:08.695974  108115 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0513 17:39:08.695984  108115 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0513 17:39:08.695991  108115 healthz.go:184] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0513 17:39:08.696145  108115 wrap.go:47] GET /healthz: (361.293µs) 500
goroutine 30038 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0149a89a0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0149a89a0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0149da960, 0x1f4)
net/http.Error(0x7fe87b13ddb0, 0xc0147c8ec8, 0xc014a16d80, 0x175, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fe87b13ddb0, 0xc0147c8ec8, 0xc0149d9500)
net/http.HandlerFunc.ServeHTTP(0xc014851b20, 0x7fe87b13ddb0, 0xc0147c8ec8, 0xc0149d9500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0148a8400, 0x7fe87b13ddb0, 0xc0147c8ec8, 0xc0149d9500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc006d06cb0, 0x7fe87b13ddb0, 0xc0147c8ec8, 0xc0149d9500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x4356850, 0xe, 0xc012416e10, 0xc006d06cb0, 0x7fe87b13ddb0, 0xc0147c8ec8, 0xc0149d9500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fe87b13ddb0, 0xc0147c8ec8, 0xc0149d9500)
net/http.HandlerFunc.ServeHTTP(0xc011b9d240, 0x7fe87b13ddb0, 0xc0147c8ec8, 0xc0149d9500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fe87b13ddb0, 0xc0147c8ec8, 0xc0149d9500)
net/http.HandlerFunc.ServeHTTP(0xc010d9eba0, 0x7fe87b13ddb0, 0xc0147c8ec8, 0xc0149d9500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fe87b13ddb0, 0xc0147c8ec8, 0xc0149d9500)
net/http.HandlerFunc.ServeHTTP(0xc011b9d280, 0x7fe87b13ddb0, 0xc0147c8ec8, 0xc0149d9500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fe87b13ddb0, 0xc0147c8ec8, 0xc0149d9400)
net/http.HandlerFunc.ServeHTTP(0xc01249f040, 0x7fe87b13ddb0, 0xc0147c8ec8, 0xc0149d9400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc0149fe5a0, 0xc0114144a0, 0x7374040, 0xc0147c8ec8, 0xc0149d9400)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:46546]
I0513 17:39:08.706100  108115 healthz.go:170] healthz check etcd failed: etcd client connection not yet established
I0513 17:39:08.706137  108115 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0513 17:39:08.706147  108115 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0513 17:39:08.706155  108115 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0513 17:39:08.706163  108115 healthz.go:184] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0513 17:39:08.706345  108115 wrap.go:47] GET /healthz: (371.481µs) 500
goroutine 30040 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0149a8af0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0149a8af0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0149daae0, 0x1f4)
net/http.Error(0x7fe87b13ddb0, 0xc0147c8ed0, 0xc014a17500, 0x175, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fe87b13ddb0, 0xc0147c8ed0, 0xc0149d9900)
net/http.HandlerFunc.ServeHTTP(0xc014851b20, 0x7fe87b13ddb0, 0xc0147c8ed0, 0xc0149d9900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0148a8400, 0x7fe87b13ddb0, 0xc0147c8ed0, 0xc0149d9900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc006d06cb0, 0x7fe87b13ddb0, 0xc0147c8ed0, 0xc0149d9900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x4356850, 0xe, 0xc012416e10, 0xc006d06cb0, 0x7fe87b13ddb0, 0xc0147c8ed0, 0xc0149d9900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fe87b13ddb0, 0xc0147c8ed0, 0xc0149d9900)
net/http.HandlerFunc.ServeHTTP(0xc011b9d240, 0x7fe87b13ddb0, 0xc0147c8ed0, 0xc0149d9900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fe87b13ddb0, 0xc0147c8ed0, 0xc0149d9900)
net/http.HandlerFunc.ServeHTTP(0xc010d9eba0, 0x7fe87b13ddb0, 0xc0147c8ed0, 0xc0149d9900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fe87b13ddb0, 0xc0147c8ed0, 0xc0149d9900)
net/http.HandlerFunc.ServeHTTP(0xc011b9d280, 0x7fe87b13ddb0, 0xc0147c8ed0, 0xc0149d9900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fe87b13ddb0, 0xc0147c8ed0, 0xc0149d9800)
net/http.HandlerFunc.ServeHTTP(0xc01249f040, 0x7fe87b13ddb0, 0xc0147c8ed0, 0xc0149d9800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc0149fe6c0, 0xc0114144a0, 0x7374040, 0xc0147c8ed0, 0xc0149d9800)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46546]
I0513 17:39:08.795942  108115 healthz.go:170] healthz check etcd failed: etcd client connection not yet established
I0513 17:39:08.795986  108115 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0513 17:39:08.795998  108115 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0513 17:39:08.796007  108115 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0513 17:39:08.796014  108115 healthz.go:184] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0513 17:39:08.796179  108115 wrap.go:47] GET /healthz: (385.283µs) 500
goroutine 29338 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc010346620, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc010346620, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0134c6f40, 0x1f4)
net/http.Error(0x7fe87b13ddb0, 0xc000b69f10, 0xc002f05200, 0x175, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fe87b13ddb0, 0xc000b69f10, 0xc014a5e600)
net/http.HandlerFunc.ServeHTTP(0xc014851b20, 0x7fe87b13ddb0, 0xc000b69f10, 0xc014a5e600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0148a8400, 0x7fe87b13ddb0, 0xc000b69f10, 0xc014a5e600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc006d06cb0, 0x7fe87b13ddb0, 0xc000b69f10, 0xc014a5e600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x4356850, 0xe, 0xc012416e10, 0xc006d06cb0, 0x7fe87b13ddb0, 0xc000b69f10, 0xc014a5e600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fe87b13ddb0, 0xc000b69f10, 0xc014a5e600)
net/http.HandlerFunc.ServeHTTP(0xc011b9d240, 0x7fe87b13ddb0, 0xc000b69f10, 0xc014a5e600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fe87b13ddb0, 0xc000b69f10, 0xc014a5e600)
net/http.HandlerFunc.ServeHTTP(0xc010d9eba0, 0x7fe87b13ddb0, 0xc000b69f10, 0xc014a5e600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fe87b13ddb0, 0xc000b69f10, 0xc014a5e600)
net/http.HandlerFunc.ServeHTTP(0xc011b9d280, 0x7fe87b13ddb0, 0xc000b69f10, 0xc014a5e600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fe87b13ddb0, 0xc000b69f10, 0xc014a5e500)
net/http.HandlerFunc.ServeHTTP(0xc01249f040, 0x7fe87b13ddb0, 0xc000b69f10, 0xc014a5e500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc0134fb3e0, 0xc0114144a0, 0x7374040, 0xc000b69f10, 0xc014a5e500)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:46546]
I0513 17:39:08.806019  108115 healthz.go:170] healthz check etcd failed: etcd client connection not yet established
I0513 17:39:08.806054  108115 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0513 17:39:08.806065  108115 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0513 17:39:08.806074  108115 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0513 17:39:08.806081  108115 healthz.go:184] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0513 17:39:08.806202  108115 wrap.go:47] GET /healthz: (309.189µs) 500
goroutine 30059 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0103ad3b0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0103ad3b0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc013c995c0, 0x1f4)
net/http.Error(0x7fe87b13ddb0, 0xc0112aab00, 0xc014aee480, 0x175, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fe87b13ddb0, 0xc0112aab00, 0xc014a96700)
net/http.HandlerFunc.ServeHTTP(0xc014851b20, 0x7fe87b13ddb0, 0xc0112aab00, 0xc014a96700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0148a8400, 0x7fe87b13ddb0, 0xc0112aab00, 0xc014a96700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc006d06cb0, 0x7fe87b13ddb0, 0xc0112aab00, 0xc014a96700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x4356850, 0xe, 0xc012416e10, 0xc006d06cb0, 0x7fe87b13ddb0, 0xc0112aab00, 0xc014a96700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fe87b13ddb0, 0xc0112aab00, 0xc014a96700)
net/http.HandlerFunc.ServeHTTP(0xc011b9d240, 0x7fe87b13ddb0, 0xc0112aab00, 0xc014a96700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fe87b13ddb0, 0xc0112aab00, 0xc014a96700)
net/http.HandlerFunc.ServeHTTP(0xc010d9eba0, 0x7fe87b13ddb0, 0xc0112aab00, 0xc014a96700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fe87b13ddb0, 0xc0112aab00, 0xc014a96700)
net/http.HandlerFunc.ServeHTTP(0xc011b9d280, 0x7fe87b13ddb0, 0xc0112aab00, 0xc014a96700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fe87b13ddb0, 0xc0112aab00, 0xc014a96600)
net/http.HandlerFunc.ServeHTTP(0xc01249f040, 0x7fe87b13ddb0, 0xc0112aab00, 0xc014a96600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc0149b8600, 0xc0114144a0, 0x7374040, 0xc0112aab00, 0xc014a96600)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46546]
I0513 17:39:08.895887  108115 healthz.go:170] healthz check etcd failed: etcd client connection not yet established
I0513 17:39:08.895946  108115 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0513 17:39:08.895958  108115 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0513 17:39:08.895966  108115 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0513 17:39:08.895974  108115 healthz.go:184] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0513 17:39:08.896130  108115 wrap.go:47] GET /healthz: (370.384µs) 500
goroutine 30042 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0149a8c40, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0149a8c40, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0149dab80, 0x1f4)
net/http.Error(0x7fe87b13ddb0, 0xc0147c8ed8, 0xc014a17b00, 0x175, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fe87b13ddb0, 0xc0147c8ed8, 0xc0149d9d00)
net/http.HandlerFunc.ServeHTTP(0xc014851b20, 0x7fe87b13ddb0, 0xc0147c8ed8, 0xc0149d9d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0148a8400, 0x7fe87b13ddb0, 0xc0147c8ed8, 0xc0149d9d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc006d06cb0, 0x7fe87b13ddb0, 0xc0147c8ed8, 0xc0149d9d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x4356850, 0xe, 0xc012416e10, 0xc006d06cb0, 0x7fe87b13ddb0, 0xc0147c8ed8, 0xc0149d9d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fe87b13ddb0, 0xc0147c8ed8, 0xc0149d9d00)
net/http.HandlerFunc.ServeHTTP(0xc011b9d240, 0x7fe87b13ddb0, 0xc0147c8ed8, 0xc0149d9d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fe87b13ddb0, 0xc0147c8ed8, 0xc0149d9d00)
net/http.HandlerFunc.ServeHTTP(0xc010d9eba0, 0x7fe87b13ddb0, 0xc0147c8ed8, 0xc0149d9d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fe87b13ddb0, 0xc0147c8ed8, 0xc0149d9d00)
net/http.HandlerFunc.ServeHTTP(0xc011b9d280, 0x7fe87b13ddb0, 0xc0147c8ed8, 0xc0149d9d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fe87b13ddb0, 0xc0147c8ed8, 0xc0149d9c00)
net/http.HandlerFunc.ServeHTTP(0xc01249f040, 0x7fe87b13ddb0, 0xc0147c8ed8, 0xc0149d9c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc0149fe780, 0xc0114144a0, 0x7374040, 0xc0147c8ed8, 0xc0149d9c00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:46546]
I0513 17:39:08.906045  108115 healthz.go:170] healthz check etcd failed: etcd client connection not yet established
I0513 17:39:08.906081  108115 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0513 17:39:08.906094  108115 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0513 17:39:08.906103  108115 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0513 17:39:08.906110  108115 healthz.go:184] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0513 17:39:08.906278  108115 wrap.go:47] GET /healthz: (392.455µs) 500
goroutine 29958 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc01038b810, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc01038b810, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc014a48260, 0x1f4)
net/http.Error(0x7fe87b13ddb0, 0xc00d6eac50, 0xc002c83e00, 0x175, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fe87b13ddb0, 0xc00d6eac50, 0xc0149cea00)
net/http.HandlerFunc.ServeHTTP(0xc014851b20, 0x7fe87b13ddb0, 0xc00d6eac50, 0xc0149cea00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0148a8400, 0x7fe87b13ddb0, 0xc00d6eac50, 0xc0149cea00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc006d06cb0, 0x7fe87b13ddb0, 0xc00d6eac50, 0xc0149cea00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x4356850, 0xe, 0xc012416e10, 0xc006d06cb0, 0x7fe87b13ddb0, 0xc00d6eac50, 0xc0149cea00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fe87b13ddb0, 0xc00d6eac50, 0xc0149cea00)
net/http.HandlerFunc.ServeHTTP(0xc011b9d240, 0x7fe87b13ddb0, 0xc00d6eac50, 0xc0149cea00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fe87b13ddb0, 0xc00d6eac50, 0xc0149cea00)
net/http.HandlerFunc.ServeHTTP(0xc010d9eba0, 0x7fe87b13ddb0, 0xc00d6eac50, 0xc0149cea00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fe87b13ddb0, 0xc00d6eac50, 0xc0149cea00)
net/http.HandlerFunc.ServeHTTP(0xc011b9d280, 0x7fe87b13ddb0, 0xc00d6eac50, 0xc0149cea00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fe87b13ddb0, 0xc00d6eac50, 0xc0149ce900)
net/http.HandlerFunc.ServeHTTP(0xc01249f040, 0x7fe87b13ddb0, 0xc00d6eac50, 0xc0149ce900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc013df2960, 0xc0114144a0, 0x7374040, 0xc00d6eac50, 0xc0149ce900)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46546]
I0513 17:39:08.961065  108115 client.go:354] parsed scheme: ""
I0513 17:39:08.961108  108115 client.go:354] scheme "" not registered, fallback to default scheme
I0513 17:39:08.961155  108115 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0513 17:39:08.961220  108115 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0513 17:39:08.961748  108115 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0513 17:39:08.961865  108115 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0513 17:39:08.997867  108115 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0513 17:39:08.997903  108115 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0513 17:39:08.997917  108115 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0513 17:39:08.997924  108115 healthz.go:184] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0513 17:39:08.998086  108115 wrap.go:47] GET /healthz: (2.218379ms) 500
goroutine 30011 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc010375030, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc010375030, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc01497a840, 0x1f4)
net/http.Error(0x7fe87b13ddb0, 0xc00df78a38, 0xc004235600, 0x160, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fe87b13ddb0, 0xc00df78a38, 0xc013315f00)
net/http.HandlerFunc.ServeHTTP(0xc014851b20, 0x7fe87b13ddb0, 0xc00df78a38, 0xc013315f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0148a8400, 0x7fe87b13ddb0, 0xc00df78a38, 0xc013315f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc006d06cb0, 0x7fe87b13ddb0, 0xc00df78a38, 0xc013315f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x4356850, 0xe, 0xc012416e10, 0xc006d06cb0, 0x7fe87b13ddb0, 0xc00df78a38, 0xc013315f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fe87b13ddb0, 0xc00df78a38, 0xc013315f00)
net/http.HandlerFunc.ServeHTTP(0xc011b9d240, 0x7fe87b13ddb0, 0xc00df78a38, 0xc013315f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fe87b13ddb0, 0xc00df78a38, 0xc013315f00)
net/http.HandlerFunc.ServeHTTP(0xc010d9eba0, 0x7fe87b13ddb0, 0xc00df78a38, 0xc013315f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fe87b13ddb0, 0xc00df78a38, 0xc013315f00)
net/http.HandlerFunc.ServeHTTP(0xc011b9d280, 0x7fe87b13ddb0, 0xc00df78a38, 0xc013315f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fe87b13ddb0, 0xc00df78a38, 0xc013315e00)
net/http.HandlerFunc.ServeHTTP(0xc01249f040, 0x7fe87b13ddb0, 0xc00df78a38, 0xc013315e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc014968780, 0xc0114144a0, 0x7374040, 0xc00df78a38, 0xc013315e00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:46546]
I0513 17:39:09.007322  108115 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0513 17:39:09.007356  108115 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0513 17:39:09.007367  108115 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0513 17:39:09.007374  108115 healthz.go:184] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0513 17:39:09.007620  108115 wrap.go:47] GET /healthz: (1.691401ms) 500
goroutine 30068 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0103ad5e0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0103ad5e0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc013c998e0, 0x1f4)
net/http.Error(0x7fe87b13ddb0, 0xc0112aab40, 0xc004235a20, 0x160, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fe87b13ddb0, 0xc0112aab40, 0xc014a96e00)
net/http.HandlerFunc.ServeHTTP(0xc014851b20, 0x7fe87b13ddb0, 0xc0112aab40, 0xc014a96e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0148a8400, 0x7fe87b13ddb0, 0xc0112aab40, 0xc014a96e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc006d06cb0, 0x7fe87b13ddb0, 0xc0112aab40, 0xc014a96e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x4356850, 0xe, 0xc012416e10, 0xc006d06cb0, 0x7fe87b13ddb0, 0xc0112aab40, 0xc014a96e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fe87b13ddb0, 0xc0112aab40, 0xc014a96e00)
net/http.HandlerFunc.ServeHTTP(0xc011b9d240, 0x7fe87b13ddb0, 0xc0112aab40, 0xc014a96e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fe87b13ddb0, 0xc0112aab40, 0xc014a96e00)
net/http.HandlerFunc.ServeHTTP(0xc010d9eba0, 0x7fe87b13ddb0, 0xc0112aab40, 0xc014a96e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fe87b13ddb0, 0xc0112aab40, 0xc014a96e00)
net/http.HandlerFunc.ServeHTTP(0xc011b9d280, 0x7fe87b13ddb0, 0xc0112aab40, 0xc014a96e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fe87b13ddb0, 0xc0112aab40, 0xc014a96d00)
net/http.HandlerFunc.ServeHTTP(0xc01249f040, 0x7fe87b13ddb0, 0xc0112aab40, 0xc014a96d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc0149b8f60, 0xc0114144a0, 0x7374040, 0xc0112aab40, 0xc014a96d00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46546]
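The 500 above comes from an aggregated health endpoint: every poststarthook is reported as "[+]name ok" or "[-]name failed: reason withheld", and one failing check fails the whole response. Below is a minimal sketch of that pattern in plain net/http; it is not the apiserver's actual healthz package, and the check names and the bootstrapDone flag are illustrative assumptions.

```go
// Minimal sketch of an aggregated healthz handler: run each named check,
// print "[+]name ok" or "[-]name failed: reason withheld", and return 500
// if any check failed. Checks and the bootstrapDone flag are hypothetical.
package main

import (
	"errors"
	"fmt"
	"net/http"
)

type check struct {
	name string
	run  func() error
}

func handleHealthz(checks []check) http.HandlerFunc {
	return func(w http.ResponseWriter, r *http.Request) {
		var out string
		failed := false
		for _, c := range checks {
			if err := c.run(); err != nil {
				out += fmt.Sprintf("[-]%s failed: reason withheld\n", c.name)
				failed = true
			} else {
				out += fmt.Sprintf("[+]%s ok\n", c.name)
			}
		}
		if failed {
			http.Error(w, out+"healthz check failed", http.StatusInternalServerError)
			return
		}
		fmt.Fprint(w, out+"healthz check passed")
	}
}

func main() {
	bootstrapDone := false // hypothetical flag standing in for an unfinished poststarthook
	checks := []check{
		{"ping", func() error { return nil }},
		{"poststarthook/rbac/bootstrap-roles", func() error {
			if !bootstrapDone {
				return errors.New("not finished")
			}
			return nil
		}},
	}
	http.Handle("/healthz", handleHealthz(checks))
	http.ListenAndServe("127.0.0.1:8080", nil)
}
```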
I0513 17:39:09.096863  108115 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0513 17:39:09.096897  108115 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0513 17:39:09.096907  108115 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0513 17:39:09.096922  108115 healthz.go:184] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0513 17:39:09.097096  108115 wrap.go:47] GET /healthz: (1.084405ms) 500
goroutine 30044 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0149a8d90, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0149a8d90, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0149dae00, 0x1f4)
net/http.Error(0x7fe87b13ddb0, 0xc0147c8f40, 0xc014b9c2c0, 0x160, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fe87b13ddb0, 0xc0147c8f40, 0xc014b1e900)
net/http.HandlerFunc.ServeHTTP(0xc014851b20, 0x7fe87b13ddb0, 0xc0147c8f40, 0xc014b1e900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0148a8400, 0x7fe87b13ddb0, 0xc0147c8f40, 0xc014b1e900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc006d06cb0, 0x7fe87b13ddb0, 0xc0147c8f40, 0xc014b1e900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x4356850, 0xe, 0xc012416e10, 0xc006d06cb0, 0x7fe87b13ddb0, 0xc0147c8f40, 0xc014b1e900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fe87b13ddb0, 0xc0147c8f40, 0xc014b1e900)
net/http.HandlerFunc.ServeHTTP(0xc011b9d240, 0x7fe87b13ddb0, 0xc0147c8f40, 0xc014b1e900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fe87b13ddb0, 0xc0147c8f40, 0xc014b1e900)
net/http.HandlerFunc.ServeHTTP(0xc010d9eba0, 0x7fe87b13ddb0, 0xc0147c8f40, 0xc014b1e900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fe87b13ddb0, 0xc0147c8f40, 0xc014b1e900)
net/http.HandlerFunc.ServeHTTP(0xc011b9d280, 0x7fe87b13ddb0, 0xc0147c8f40, 0xc014b1e900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fe87b13ddb0, 0xc0147c8f40, 0xc014b1e800)
net/http.HandlerFunc.ServeHTTP(0xc01249f040, 0x7fe87b13ddb0, 0xc0147c8f40, 0xc014b1e800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc0149fea80, 0xc0114144a0, 0x7374040, 0xc0147c8f40, 0xc014b1e800)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:46546]
I0513 17:39:09.115034  108115 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0513 17:39:09.115068  108115 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0513 17:39:09.115077  108115 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0513 17:39:09.115084  108115 healthz.go:184] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0513 17:39:09.115245  108115 wrap.go:47] GET /healthz: (2.861235ms) 500
goroutine 30070 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0103ad6c0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0103ad6c0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc013c99bc0, 0x1f4)
net/http.Error(0x7fe87b13ddb0, 0xc0112aab50, 0xc00d325600, 0x160, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fe87b13ddb0, 0xc0112aab50, 0xc014a97200)
net/http.HandlerFunc.ServeHTTP(0xc014851b20, 0x7fe87b13ddb0, 0xc0112aab50, 0xc014a97200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0148a8400, 0x7fe87b13ddb0, 0xc0112aab50, 0xc014a97200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc006d06cb0, 0x7fe87b13ddb0, 0xc0112aab50, 0xc014a97200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x4356850, 0xe, 0xc012416e10, 0xc006d06cb0, 0x7fe87b13ddb0, 0xc0112aab50, 0xc014a97200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fe87b13ddb0, 0xc0112aab50, 0xc014a97200)
net/http.HandlerFunc.ServeHTTP(0xc011b9d240, 0x7fe87b13ddb0, 0xc0112aab50, 0xc014a97200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fe87b13ddb0, 0xc0112aab50, 0xc014a97200)
net/http.HandlerFunc.ServeHTTP(0xc010d9eba0, 0x7fe87b13ddb0, 0xc0112aab50, 0xc014a97200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fe87b13ddb0, 0xc0112aab50, 0xc014a97200)
net/http.HandlerFunc.ServeHTTP(0xc011b9d280, 0x7fe87b13ddb0, 0xc0112aab50, 0xc014a97200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fe87b13ddb0, 0xc0112aab50, 0xc014a97100)
net/http.HandlerFunc.ServeHTTP(0xc01249f040, 0x7fe87b13ddb0, 0xc0112aab50, 0xc014a97100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc0149b9200, 0xc0114144a0, 0x7374040, 0xc0112aab50, 0xc014a97100)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46546]
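The scheduler.test client keeps hitting the same /healthz URL, which suggests the test harness simply polls until every poststarthook finishes and the endpoint answers 200. A minimal sketch of such a poll loop follows; the URL, timeout, and retry interval are illustrative assumptions, not values taken from this job.

```go
// Minimal sketch of polling /healthz until it returns 200 or a deadline passes.
package main

import (
	"fmt"
	"net/http"
	"time"
)

func waitForHealthy(url string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := http.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // every check reported "[+]... ok"
			}
		}
		time.Sleep(100 * time.Millisecond) // retry interval is an assumption
	}
	return fmt.Errorf("server at %s did not become healthy within %s", url, timeout)
}

func main() {
	if err := waitForHealthy("http://127.0.0.1:8080/healthz", 30*time.Second); err != nil {
		fmt.Println(err)
	}
}
```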
I0513 17:39:09.195899  108115 wrap.go:47] GET /apis/scheduling.k8s.io/v1beta1/priorityclasses/system-node-critical: (1.289867ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46546]
I0513 17:39:09.196368  108115 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.753198ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46544]
I0513 17:39:09.196776  108115 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.942913ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46762]
I0513 17:39:09.197337  108115 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0513 17:39:09.197354  108115 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0513 17:39:09.197363  108115 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0513 17:39:09.197371  108115 healthz.go:184] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0513 17:39:09.197560  108115 wrap.go:47] GET /healthz: (1.244602ms) 500
goroutine 30027 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc01036f960, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc01036f960, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0149350c0, 0x1f4)
net/http.Error(0x7fe87b13ddb0, 0xc00dadeda8, 0xc01120d1e0, 0x160, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fe87b13ddb0, 0xc00dadeda8, 0xc014931b00)
net/http.HandlerFunc.ServeHTTP(0xc014851b20, 0x7fe87b13ddb0, 0xc00dadeda8, 0xc014931b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0148a8400, 0x7fe87b13ddb0, 0xc00dadeda8, 0xc014931b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc006d06cb0, 0x7fe87b13ddb0, 0xc00dadeda8, 0xc014931b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x4356850, 0xe, 0xc012416e10, 0xc006d06cb0, 0x7fe87b13ddb0, 0xc00dadeda8, 0xc014931b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fe87b13ddb0, 0xc00dadeda8, 0xc014931b00)
net/http.HandlerFunc.ServeHTTP(0xc011b9d240, 0x7fe87b13ddb0, 0xc00dadeda8, 0xc014931b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fe87b13ddb0, 0xc00dadeda8, 0xc014931b00)
net/http.HandlerFunc.ServeHTTP(0xc010d9eba0, 0x7fe87b13ddb0, 0xc00dadeda8, 0xc014931b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fe87b13ddb0, 0xc00dadeda8, 0xc014931b00)
net/http.HandlerFunc.ServeHTTP(0xc011b9d280, 0x7fe87b13ddb0, 0xc00dadeda8, 0xc014931b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fe87b13ddb0, 0xc00dadeda8, 0xc014931a00)
net/http.HandlerFunc.ServeHTTP(0xc01249f040, 0x7fe87b13ddb0, 0xc00dadeda8, 0xc014931a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc0149329c0, 0xc0114144a0, 0x7374040, 0xc00dadeda8, 0xc014931a00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:46764]
I0513 17:39:09.198627  108115 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.093995ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46762]
I0513 17:39:09.198635  108115 wrap.go:47] GET /api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication: (1.519195ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46544]
I0513 17:39:09.199363  108115 wrap.go:47] POST /apis/scheduling.k8s.io/v1beta1/priorityclasses: (2.545131ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46546]
I0513 17:39:09.199906  108115 storage_scheduling.go:119] created PriorityClass system-node-critical with value 2000001000
I0513 17:39:09.202381  108115 wrap.go:47] GET /apis/scheduling.k8s.io/v1beta1/priorityclasses/system-cluster-critical: (2.317637ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46546]
I0513 17:39:09.202431  108115 wrap.go:47] POST /api/v1/namespaces/kube-system/configmaps: (3.189779ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46764]
I0513 17:39:09.202615  108115 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-admin: (3.701718ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46762]
I0513 17:39:09.204206  108115 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/admin: (1.252377ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46762]
I0513 17:39:09.204904  108115 wrap.go:47] POST /apis/scheduling.k8s.io/v1beta1/priorityclasses: (1.765194ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46764]
I0513 17:39:09.205225  108115 storage_scheduling.go:119] created PriorityClass system-cluster-critical with value 2000000000
I0513 17:39:09.205241  108115 storage_scheduling.go:128] all system priority classes are created successfully or already exist.
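The priority-class bootstrap above follows an "ensure it exists" pattern: GET the object, receive a 404, POST it, receive a 201, then log that it was created. Below is a minimal sketch of that flow using plain net/http against the same API paths shown in the log; the base URL and the JSON body are illustrative assumptions, not values from this job, and this is not the apiserver's own bootstrap code.

```go
// Minimal sketch of "create only if the GET returns 404" for a PriorityClass.
package main

import (
	"bytes"
	"fmt"
	"net/http"
)

func ensurePriorityClass(base, name string, body []byte) error {
	resp, err := http.Get(base + "/apis/scheduling.k8s.io/v1beta1/priorityclasses/" + name)
	if err != nil {
		return err
	}
	resp.Body.Close()
	if resp.StatusCode != http.StatusNotFound {
		return nil // already exists (or some other non-404 outcome we leave alone)
	}
	resp, err = http.Post(base+"/apis/scheduling.k8s.io/v1beta1/priorityclasses",
		"application/json", bytes.NewReader(body))
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusCreated {
		return fmt.Errorf("unexpected status %d creating %s", resp.StatusCode, name)
	}
	return nil
}

func main() {
	// Hypothetical object; the value 2000001000 matches system-node-critical in the log.
	body := []byte(`{"apiVersion":"scheduling.k8s.io/v1beta1","kind":"PriorityClass",` +
		`"metadata":{"name":"system-node-critical"},"value":2000001000}`)
	if err := ensurePriorityClass("http://127.0.0.1:8080", "system-node-critical", body); err != nil {
		fmt.Println(err)
	}
}
```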
I0513 17:39:09.206665  108115 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-edit: (1.480696ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46762]
I0513 17:39:09.207404  108115 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0513 17:39:09.207424  108115 healthz.go:184] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0513 17:39:09.207616  108115 wrap.go:47] GET /healthz: (1.812888ms) 500
goroutine 30101 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc014bac850, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc014bac850, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc014bcf360, 0x1f4)
net/http.Error(0x7fe87b13ddb0, 0xc011370da0, 0xc014c58140, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fe87b13ddb0, 0xc011370da0, 0xc014c54300)
net/http.HandlerFunc.ServeHTTP(0xc014851b20, 0x7fe87b13ddb0, 0xc011370da0, 0xc014c54300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0148a8400, 0x7fe87b13ddb0, 0xc011370da0, 0xc014c54300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc006d06cb0, 0x7fe87b13ddb0, 0xc011370da0, 0xc014c54300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x4356850, 0xe, 0xc012416e10, 0xc006d06cb0, 0x7fe87b13ddb0, 0xc011370da0, 0xc014c54300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fe87b13ddb0, 0xc011370da0, 0xc014c54300)
net/http.HandlerFunc.ServeHTTP(0xc011b9d240, 0x7fe87b13ddb0, 0xc011370da0, 0xc014c54300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fe87b13ddb0, 0xc011370da0, 0xc014c54300)
net/http.HandlerFunc.ServeHTTP(0xc010d9eba0, 0x7fe87b13ddb0, 0xc011370da0, 0xc014c54300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fe87b13ddb0, 0xc011370da0, 0xc014c54300)
net/http.HandlerFunc.ServeHTTP(0xc011b9d280, 0x7fe87b13ddb0, 0xc011370da0, 0xc014c54300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fe87b13ddb0, 0xc011370da0, 0xc014c54200)
net/http.HandlerFunc.ServeHTTP(0xc01249f040, 0x7fe87b13ddb0, 0xc011370da0, 0xc014c54200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc014bf68a0, 0xc0114144a0, 0x7374040, 0xc011370da0, 0xc014c54200)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46764]
I0513 17:39:09.210237  108115 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/edit: (2.262228ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46764]
I0513 17:39:09.212976  108115 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-view: (1.113302ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46764]
I0513 17:39:09.216537  108115 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/view: (2.433305ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46764]
I0513 17:39:09.218344  108115 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:discovery: (1.452399ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46764]
I0513 17:39:09.219567  108115 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/cluster-admin: (875.686µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46764]
I0513 17:39:09.222368  108115 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.4298ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46764]
I0513 17:39:09.222803  108115 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/cluster-admin
I0513 17:39:09.226284  108115 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:discovery: (3.349682ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46764]
I0513 17:39:09.228752  108115 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.763386ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46764]
I0513 17:39:09.228961  108115 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:discovery
I0513 17:39:09.230640  108115 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:basic-user: (1.412731ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46764]
I0513 17:39:09.233671  108115 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.870889ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46764]
I0513 17:39:09.234131  108115 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:basic-user
I0513 17:39:09.237185  108115 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:public-info-viewer: (2.878339ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46764]
I0513 17:39:09.246287  108115 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (8.53645ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46764]
I0513 17:39:09.246728  108115 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:public-info-viewer
I0513 17:39:09.250492  108115 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/admin: (3.261994ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46764]
I0513 17:39:09.253899  108115 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.128564ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46764]
I0513 17:39:09.254212  108115 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/admin
I0513 17:39:09.256532  108115 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/edit: (2.117795ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46764]
I0513 17:39:09.258669  108115 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.696518ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46764]
I0513 17:39:09.258860  108115 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/edit
I0513 17:39:09.261375  108115 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/view: (2.215139ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46764]
I0513 17:39:09.263400  108115 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.476545ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46764]
I0513 17:39:09.263689  108115 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/view
I0513 17:39:09.265288  108115 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-admin: (1.385321ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46764]
I0513 17:39:09.267251  108115 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.510016ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46764]
I0513 17:39:09.267422  108115 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-admin
I0513 17:39:09.268739  108115 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-edit: (1.122611ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46764]
I0513 17:39:09.271487  108115 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.719303ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46764]
I0513 17:39:09.271807  108115 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-edit
I0513 17:39:09.276727  108115 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-view: (1.079949ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46764]
I0513 17:39:09.279019  108115 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.749003ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46764]
I0513 17:39:09.279282  108115 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-view
I0513 17:39:09.280492  108115 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:heapster: (1.033355ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46764]
I0513 17:39:09.282417  108115 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.512681ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46764]
I0513 17:39:09.282646  108115 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:heapster
I0513 17:39:09.283744  108115 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node: (898.607µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46764]
I0513 17:39:09.285774  108115 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.635332ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46764]
I0513 17:39:09.286149  108115 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:node
I0513 17:39:09.287177  108115 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-problem-detector: (837.735µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46764]
I0513 17:39:09.288915  108115 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.354396ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46764]
I0513 17:39:09.289114  108115 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:node-problem-detector
I0513 17:39:09.290310  108115 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-proxier: (990.19µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46764]
I0513 17:39:09.292371  108115 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.624151ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46764]
I0513 17:39:09.292638  108115 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:node-proxier
I0513 17:39:09.293725  108115 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kubelet-api-admin: (918.478µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46764]
I0513 17:39:09.295687  108115 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.492742ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46764]
I0513 17:39:09.295906  108115 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:kubelet-api-admin
I0513 17:39:09.296283  108115 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0513 17:39:09.296305  108115 healthz.go:184] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0513 17:39:09.296483  108115 wrap.go:47] GET /healthz: (805.644µs) 500
goroutine 30158 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc014c29dc0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc014c29dc0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc014e125c0, 0x1f4)
net/http.Error(0x7fe87b13ddb0, 0xc00d6eb170, 0xc014c58780, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fe87b13ddb0, 0xc00d6eb170, 0xc014d99300)
net/http.HandlerFunc.ServeHTTP(0xc014851b20, 0x7fe87b13ddb0, 0xc00d6eb170, 0xc014d99300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0148a8400, 0x7fe87b13ddb0, 0xc00d6eb170, 0xc014d99300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc006d06cb0, 0x7fe87b13ddb0, 0xc00d6eb170, 0xc014d99300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x4356850, 0xe, 0xc012416e10, 0xc006d06cb0, 0x7fe87b13ddb0, 0xc00d6eb170, 0xc014d99300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fe87b13ddb0, 0xc00d6eb170, 0xc014d99300)
net/http.HandlerFunc.ServeHTTP(0xc011b9d240, 0x7fe87b13ddb0, 0xc00d6eb170, 0xc014d99300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fe87b13ddb0, 0xc00d6eb170, 0xc014d99300)
net/http.HandlerFunc.ServeHTTP(0xc010d9eba0, 0x7fe87b13ddb0, 0xc00d6eb170, 0xc014d99300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fe87b13ddb0, 0xc00d6eb170, 0xc014d99300)
net/http.HandlerFunc.ServeHTTP(0xc011b9d280, 0x7fe87b13ddb0, 0xc00d6eb170, 0xc014d99300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fe87b13ddb0, 0xc00d6eb170, 0xc014d99200)
net/http.HandlerFunc.ServeHTTP(0xc01249f040, 0x7fe87b13ddb0, 0xc00d6eb170, 0xc014d99200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc014d90ae0, 0xc0114144a0, 0x7374040, 0xc00d6eb170, 0xc014d99200)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:46762]
I0513 17:39:09.297082  108115 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-bootstrapper: (929.971µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46764]
I0513 17:39:09.298797  108115 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.30015ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46764]
I0513 17:39:09.298972  108115 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:node-bootstrapper
I0513 17:39:09.300043  108115 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:auth-delegator: (878.602µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46764]
I0513 17:39:09.301772  108115 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.22763ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46764]
I0513 17:39:09.301971  108115 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:auth-delegator
I0513 17:39:09.302905  108115 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-aggregator: (726.293µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46764]
I0513 17:39:09.304821  108115 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.556498ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46764]
I0513 17:39:09.304981  108115 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:kube-aggregator
I0513 17:39:09.306456  108115 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0513 17:39:09.306493  108115 healthz.go:184] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0513 17:39:09.306666  108115 wrap.go:47] GET /healthz: (967.557µs) 500
goroutine 30245 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc014d61b20, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc014d61b20, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc014ead080, 0x1f4)
net/http.Error(0x7fe87b13ddb0, 0xc01112f820, 0xc014c58c80, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fe87b13ddb0, 0xc01112f820, 0xc014ea3500)
net/http.HandlerFunc.ServeHTTP(0xc014851b20, 0x7fe87b13ddb0, 0xc01112f820, 0xc014ea3500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0148a8400, 0x7fe87b13ddb0, 0xc01112f820, 0xc014ea3500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc006d06cb0, 0x7fe87b13ddb0, 0xc01112f820, 0xc014ea3500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x4356850, 0xe, 0xc012416e10, 0xc006d06cb0, 0x7fe87b13ddb0, 0xc01112f820, 0xc014ea3500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fe87b13ddb0, 0xc01112f820, 0xc014ea3500)
net/http.HandlerFunc.ServeHTTP(0xc011b9d240, 0x7fe87b13ddb0, 0xc01112f820, 0xc014ea3500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fe87b13ddb0, 0xc01112f820, 0xc014ea3500)
net/http.HandlerFunc.ServeHTTP(0xc010d9eba0, 0x7fe87b13ddb0, 0xc01112f820, 0xc014ea3500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fe87b13ddb0, 0xc01112f820, 0xc014ea3500)
net/http.HandlerFunc.ServeHTTP(0xc011b9d280, 0x7fe87b13ddb0, 0xc01112f820, 0xc014ea3500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fe87b13ddb0, 0xc01112f820, 0xc014ea3400)
net/http.HandlerFunc.ServeHTTP(0xc01249f040, 0x7fe87b13ddb0, 0xc01112f820, 0xc014ea3400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc014e85140, 0xc0114144a0, 0x7374040, 0xc01112f820, 0xc014ea3400)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46762]
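The goroutine traces repeated above all show the same nesting: the healthz handler sits inside the mux, which is wrapped by WithAuthorization, WithMaxInFlightLimit, WithImpersonation, WithAuthentication, and a timeout handler. A minimal sketch of how such a chain can be composed from plain http.Handler wrappers follows; the filters here are trivial logging stand-ins, not the real apiserver filters, and the address is an assumption.

```go
// Minimal sketch of a nested handler chain like the one in the traces above:
// each "With..." filter wraps the next handler, so the innermost handler
// appears at the top of a stack trace and the outermost filter near the bottom.
package main

import (
	"fmt"
	"net/http"
)

// named returns a middleware that logs its own name before delegating,
// standing in for filters such as WithAuthentication or WithAuthorization.
func named(name string, next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		fmt.Printf("entering %s\n", name)
		next.ServeHTTP(w, r)
	})
}

func main() {
	mux := http.NewServeMux()
	mux.HandleFunc("/healthz", func(w http.ResponseWriter, r *http.Request) {
		http.Error(w, "healthz check failed", http.StatusInternalServerError)
	})

	// The last wrap applied is the outermost filter: it runs first on each
	// request and shows up lowest in a stack trace, as in the log above.
	var handler http.Handler = mux
	handler = named("WithAuthorization", handler)
	handler = named("WithMaxInFlightLimit", handler)
	handler = named("WithImpersonation", handler)
	handler = named("WithAuthentication", handler)

	http.ListenAndServe("127.0.0.1:8080", handler)
}
```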
I0513 17:39:09.307289  108115 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-controller-manager: (2.158536ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46764]
I0513 17:39:09.310083  108115 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.210429ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46764]
I0513 17:39:09.310520  108115 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:kube-controller-manager
I0513 17:39:09.311589  108115 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-scheduler: (811.253µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46764]
I0513 17:39:09.314340  108115 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.228835ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46764]
I0513 17:39:09.314712  108115 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:kube-scheduler
I0513 17:39:09.316921  108115 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-dns: (1.910744ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46764]
I0513 17:39:09.320050  108115 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.267809ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46764]
I0513 17:39:09.320202  108115 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:kube-dns
I0513 17:39:09.321246  108115 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:persistent-volume-provisioner: (904.73µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46764]
I0513 17:39:09.323283  108115 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.589242ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46764]
I0513 17:39:09.323558  108115 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:persistent-volume-provisioner
I0513 17:39:09.324738  108115 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:csi-external-attacher: (968.117µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46764]
I0513 17:39:09.327007  108115 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.314476ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46764]
I0513 17:39:09.329089  108115 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:csi-external-attacher
I0513 17:39:09.332436  108115 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:certificates.k8s.io:certificatesigningrequests:nodeclient: (3.170273ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46764]
I0513 17:39:09.334779  108115 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.924808ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46764]
I0513 17:39:09.335152  108115 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:certificates.k8s.io:certificatesigningrequests:nodeclient
I0513 17:39:09.336910  108115 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:certificates.k8s.io:certificatesigningrequests:selfnodeclient: (1.209559ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46764]
I0513 17:39:09.338689  108115 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.357616ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46764]
I0513 17:39:09.339292  108115 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:certificates.k8s.io:certificatesigningrequests:selfnodeclient
I0513 17:39:09.342752  108115 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:volume-scheduler: (3.278645ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46764]
I0513 17:39:09.347311  108115 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (3.955612ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46764]
I0513 17:39:09.347797  108115 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:volume-scheduler
I0513 17:39:09.349154  108115 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:csi-external-provisioner: (1.042802ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46764]
I0513 17:39:09.351801  108115 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.075994ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46764]
I0513 17:39:09.352026  108115 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:csi-external-provisioner
I0513 17:39:09.353498  108115 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:attachdetach-controller: (1.140262ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46764]
I0513 17:39:09.355790  108115 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.758136ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46764]
I0513 17:39:09.356549  108115 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:attachdetach-controller
I0513 17:39:09.357519  108115 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:clusterrole-aggregation-controller: (741.3µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46764]
I0513 17:39:09.360659  108115 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.57515ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46764]
I0513 17:39:09.360828  108115 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:clusterrole-aggregation-controller
I0513 17:39:09.362171  108115 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:cronjob-controller: (1.168311ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46764]
I0513 17:39:09.364444  108115 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.899698ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46764]
I0513 17:39:09.364673  108115 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:cronjob-controller
I0513 17:39:09.365920  108115 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:daemon-set-controller: (1.113366ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46764]
I0513 17:39:09.368160  108115 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.810114ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46764]
I0513 17:39:09.368421  108115 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:daemon-set-controller
I0513 17:39:09.369663  108115 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:deployment-controller: (876.545µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46764]
I0513 17:39:09.371796  108115 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.757762ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46764]
I0513 17:39:09.372176  108115 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:deployment-controller
I0513 17:39:09.373190  108115 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:disruption-controller: (819.469µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46764]
I0513 17:39:09.375061  108115 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.472588ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46764]
I0513 17:39:09.375335  108115 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:disruption-controller
I0513 17:39:09.376314  108115 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:endpoint-controller: (807.299µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46764]
I0513 17:39:09.378804  108115 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.105843ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46764]
I0513 17:39:09.379172  108115 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:endpoint-controller
I0513 17:39:09.380361  108115 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:expand-controller: (896.038µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46764]
I0513 17:39:09.382450  108115 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.654834ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46764]
I0513 17:39:09.382737  108115 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:expand-controller
I0513 17:39:09.383742  108115 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:generic-garbage-collector: (845.406µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46764]
I0513 17:39:09.386054  108115 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.95391ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46764]
I0513 17:39:09.386243  108115 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:generic-garbage-collector
I0513 17:39:09.387329  108115 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:horizontal-pod-autoscaler: (847.347µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46764]
I0513 17:39:09.389247  108115 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.516201ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46764]
I0513 17:39:09.389567  108115 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:horizontal-pod-autoscaler
I0513 17:39:09.390551  108115 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:job-controller: (787.897µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46764]
I0513 17:39:09.392369  108115 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.388625ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46764]
I0513 17:39:09.392606  108115 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:job-controller
I0513 17:39:09.394093  108115 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:namespace-controller: (758.496µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46764]
I0513 17:39:09.395879  108115 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.32912ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46764]
I0513 17:39:09.396051  108115 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:namespace-controller
I0513 17:39:09.396477  108115 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0513 17:39:09.396566  108115 healthz.go:184] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0513 17:39:09.396874  108115 wrap.go:47] GET /healthz: (1.296852ms) 500
goroutine 30343 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc015059420, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc015059420, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0151ae0c0, 0x1f4)
net/http.Error(0x7fe87b13ddb0, 0xc0112ab5b0, 0xc004059e00, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fe87b13ddb0, 0xc0112ab5b0, 0xc015167400)
net/http.HandlerFunc.ServeHTTP(0xc014851b20, 0x7fe87b13ddb0, 0xc0112ab5b0, 0xc015167400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0148a8400, 0x7fe87b13ddb0, 0xc0112ab5b0, 0xc015167400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc006d06cb0, 0x7fe87b13ddb0, 0xc0112ab5b0, 0xc015167400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x4356850, 0xe, 0xc012416e10, 0xc006d06cb0, 0x7fe87b13ddb0, 0xc0112ab5b0, 0xc015167400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fe87b13ddb0, 0xc0112ab5b0, 0xc015167400)
net/http.HandlerFunc.ServeHTTP(0xc011b9d240, 0x7fe87b13ddb0, 0xc0112ab5b0, 0xc015167400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fe87b13ddb0, 0xc0112ab5b0, 0xc015167400)
net/http.HandlerFunc.ServeHTTP(0xc010d9eba0, 0x7fe87b13ddb0, 0xc0112ab5b0, 0xc015167400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fe87b13ddb0, 0xc0112ab5b0, 0xc015167400)
net/http.HandlerFunc.ServeHTTP(0xc011b9d280, 0x7fe87b13ddb0, 0xc0112ab5b0, 0xc015167400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fe87b13ddb0, 0xc0112ab5b0, 0xc015167300)
net/http.HandlerFunc.ServeHTTP(0xc01249f040, 0x7fe87b13ddb0, 0xc0112ab5b0, 0xc015167300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc0151962a0, 0xc0114144a0, 0x7374040, 0xc0112ab5b0, 0xc015167300)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:46762]
I0513 17:39:09.396987  108115 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:node-controller: (779.101µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46764]
I0513 17:39:09.398767  108115 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.363005ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46764]
I0513 17:39:09.398964  108115 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:node-controller
I0513 17:39:09.400426  108115 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:persistent-volume-binder: (1.291235ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46764]
I0513 17:39:09.402732  108115 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.924246ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46764]
I0513 17:39:09.403096  108115 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:persistent-volume-binder
I0513 17:39:09.404445  108115 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pod-garbage-collector: (1.124059ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46764]
I0513 17:39:09.406152  108115 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.241901ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46764]
I0513 17:39:09.406315  108115 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:pod-garbage-collector
I0513 17:39:09.406875  108115 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0513 17:39:09.406901  108115 healthz.go:184] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0513 17:39:09.407218  108115 wrap.go:47] GET /healthz: (1.536867ms) 500
goroutine 30327 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc01501f2d0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc01501f2d0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc01523c020, 0x1f4)
net/http.Error(0x7fe87b13ddb0, 0xc00d6eb5e8, 0xc008f41540, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fe87b13ddb0, 0xc00d6eb5e8, 0xc0150a1a00)
net/http.HandlerFunc.ServeHTTP(0xc014851b20, 0x7fe87b13ddb0, 0xc00d6eb5e8, 0xc0150a1a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0148a8400, 0x7fe87b13ddb0, 0xc00d6eb5e8, 0xc0150a1a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc006d06cb0, 0x7fe87b13ddb0, 0xc00d6eb5e8, 0xc0150a1a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x4356850, 0xe, 0xc012416e10, 0xc006d06cb0, 0x7fe87b13ddb0, 0xc00d6eb5e8, 0xc0150a1a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fe87b13ddb0, 0xc00d6eb5e8, 0xc0150a1a00)
net/http.HandlerFunc.ServeHTTP(0xc011b9d240, 0x7fe87b13ddb0, 0xc00d6eb5e8, 0xc0150a1a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fe87b13ddb0, 0xc00d6eb5e8, 0xc0150a1a00)
net/http.HandlerFunc.ServeHTTP(0xc010d9eba0, 0x7fe87b13ddb0, 0xc00d6eb5e8, 0xc0150a1a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fe87b13ddb0, 0xc00d6eb5e8, 0xc0150a1a00)
net/http.HandlerFunc.ServeHTTP(0xc011b9d280, 0x7fe87b13ddb0, 0xc00d6eb5e8, 0xc0150a1a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fe87b13ddb0, 0xc00d6eb5e8, 0xc0150a1900)
net/http.HandlerFunc.ServeHTTP(0xc01249f040, 0x7fe87b13ddb0, 0xc00d6eb5e8, 0xc0150a1900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc0150fef60, 0xc0114144a0, 0x7374040, 0xc00d6eb5e8, 0xc0150a1900)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46762]
I0513 17:39:09.407343  108115 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:replicaset-controller: (767.914µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46764]
I0513 17:39:09.408899  108115 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.164586ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46764]
I0513 17:39:09.409131  108115 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:replicaset-controller
I0513 17:39:09.410287  108115 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:replication-controller: (963.405µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46764]
I0513 17:39:09.412134  108115 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.283126ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46764]
I0513 17:39:09.412353  108115 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:replication-controller
I0513 17:39:09.413322  108115 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:resourcequota-controller: (777.062µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46764]
I0513 17:39:09.414983  108115 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.244137ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46764]
I0513 17:39:09.415160  108115 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:resourcequota-controller
I0513 17:39:09.416151  108115 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:route-controller: (773.69µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46764]
I0513 17:39:09.417741  108115 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.283981ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46764]
I0513 17:39:09.417934  108115 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:route-controller
I0513 17:39:09.419067  108115 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:service-account-controller: (917.183µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46764]
I0513 17:39:09.420774  108115 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.402686ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46764]
I0513 17:39:09.420984  108115 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:service-account-controller
I0513 17:39:09.421907  108115 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:service-controller: (766.338µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46764]
I0513 17:39:09.423640  108115 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.353793ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46764]
I0513 17:39:09.423849  108115 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:service-controller
I0513 17:39:09.424802  108115 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:statefulset-controller: (772.796µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46764]
I0513 17:39:09.426546  108115 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.396804ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46764]
I0513 17:39:09.426775  108115 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:statefulset-controller
I0513 17:39:09.427706  108115 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:ttl-controller: (740.74µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46764]
I0513 17:39:09.429386  108115 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.260154ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46764]
I0513 17:39:09.429602  108115 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:ttl-controller
I0513 17:39:09.430461  108115 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:certificate-controller: (731.965µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46764]
I0513 17:39:09.432249  108115 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.347962ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46764]
I0513 17:39:09.432478  108115 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:certificate-controller
I0513 17:39:09.435740  108115 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pvc-protection-controller: (921.183µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46764]
I0513 17:39:09.457086  108115 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.246918ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46764]
I0513 17:39:09.457360  108115 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:pvc-protection-controller
I0513 17:39:09.476316  108115 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pv-protection-controller: (1.470991ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46764]
I0513 17:39:09.496758  108115 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0513 17:39:09.496794  108115 healthz.go:184] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0513 17:39:09.496943  108115 wrap.go:47] GET /healthz: (1.273993ms) 500
goroutine 30383 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0152aad20, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0152aad20, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc01534b7c0, 0x1f4)
net/http.Error(0x7fe87b13ddb0, 0xc011371968, 0xc0151e23c0, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fe87b13ddb0, 0xc011371968, 0xc015355100)
net/http.HandlerFunc.ServeHTTP(0xc014851b20, 0x7fe87b13ddb0, 0xc011371968, 0xc015355100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0148a8400, 0x7fe87b13ddb0, 0xc011371968, 0xc015355100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc006d06cb0, 0x7fe87b13ddb0, 0xc011371968, 0xc015355100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x4356850, 0xe, 0xc012416e10, 0xc006d06cb0, 0x7fe87b13ddb0, 0xc011371968, 0xc015355100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fe87b13ddb0, 0xc011371968, 0xc015355100)
net/http.HandlerFunc.ServeHTTP(0xc011b9d240, 0x7fe87b13ddb0, 0xc011371968, 0xc015355100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fe87b13ddb0, 0xc011371968, 0xc015355100)
net/http.HandlerFunc.ServeHTTP(0xc010d9eba0, 0x7fe87b13ddb0, 0xc011371968, 0xc015355100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fe87b13ddb0, 0xc011371968, 0xc015355100)
net/http.HandlerFunc.ServeHTTP(0xc011b9d280, 0x7fe87b13ddb0, 0xc011371968, 0xc015355100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fe87b13ddb0, 0xc011371968, 0xc015355000)
net/http.HandlerFunc.ServeHTTP(0xc01249f040, 0x7fe87b13ddb0, 0xc011371968, 0xc015355000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc01530d7a0, 0xc0114144a0, 0x7374040, 0xc011371968, 0xc015355000)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:46762]
I0513 17:39:09.498971  108115 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (4.144509ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46764]
I0513 17:39:09.499253  108115 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:pv-protection-controller
I0513 17:39:09.507134  108115 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0513 17:39:09.507190  108115 healthz.go:184] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0513 17:39:09.507582  108115 wrap.go:47] GET /healthz: (1.711311ms) 500
goroutine 30301 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0152cc230, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0152cc230, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0151c8ea0, 0x1f4)
net/http.Error(0x7fe87b13ddb0, 0xc01112fed0, 0xc0151e2a00, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fe87b13ddb0, 0xc01112fed0, 0xc0151c2e00)
net/http.HandlerFunc.ServeHTTP(0xc014851b20, 0x7fe87b13ddb0, 0xc01112fed0, 0xc0151c2e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0148a8400, 0x7fe87b13ddb0, 0xc01112fed0, 0xc0151c2e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc006d06cb0, 0x7fe87b13ddb0, 0xc01112fed0, 0xc0151c2e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x4356850, 0xe, 0xc012416e10, 0xc006d06cb0, 0x7fe87b13ddb0, 0xc01112fed0, 0xc0151c2e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fe87b13ddb0, 0xc01112fed0, 0xc0151c2e00)
net/http.HandlerFunc.ServeHTTP(0xc011b9d240, 0x7fe87b13ddb0, 0xc01112fed0, 0xc0151c2e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fe87b13ddb0, 0xc01112fed0, 0xc0151c2e00)
net/http.HandlerFunc.ServeHTTP(0xc010d9eba0, 0x7fe87b13ddb0, 0xc01112fed0, 0xc0151c2e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fe87b13ddb0, 0xc01112fed0, 0xc0151c2e00)
net/http.HandlerFunc.ServeHTTP(0xc011b9d280, 0x7fe87b13ddb0, 0xc01112fed0, 0xc0151c2e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fe87b13ddb0, 0xc01112fed0, 0xc0151c2d00)
net/http.HandlerFunc.ServeHTTP(0xc01249f040, 0x7fe87b13ddb0, 0xc01112fed0, 0xc0151c2d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc01519fb60, 0xc0114144a0, 0x7374040, 0xc01112fed0, 0xc0151c2d00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46764]
I0513 17:39:09.516247  108115 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/cluster-admin: (1.369078ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46764]
I0513 17:39:09.538273  108115 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.373703ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46764]
I0513 17:39:09.538621  108115 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/cluster-admin
I0513 17:39:09.556316  108115 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:discovery: (1.46398ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46764]
I0513 17:39:09.579207  108115 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.332583ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46764]
I0513 17:39:09.579531  108115 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:discovery
I0513 17:39:09.596415  108115 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:basic-user: (1.558949ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46764]
I0513 17:39:09.596805  108115 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0513 17:39:09.596880  108115 healthz.go:184] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0513 17:39:09.597084  108115 wrap.go:47] GET /healthz: (1.385631ms) 500
goroutine 30385 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0152ab570, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0152ab570, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc015392ae0, 0x1f4)
net/http.Error(0x7fe87b13ddb0, 0xc011371ac8, 0xc0153f4280, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fe87b13ddb0, 0xc011371ac8, 0xc0153ee100)
net/http.HandlerFunc.ServeHTTP(0xc014851b20, 0x7fe87b13ddb0, 0xc011371ac8, 0xc0153ee100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0148a8400, 0x7fe87b13ddb0, 0xc011371ac8, 0xc0153ee100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc006d06cb0, 0x7fe87b13ddb0, 0xc011371ac8, 0xc0153ee100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x4356850, 0xe, 0xc012416e10, 0xc006d06cb0, 0x7fe87b13ddb0, 0xc011371ac8, 0xc0153ee100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fe87b13ddb0, 0xc011371ac8, 0xc0153ee100)
net/http.HandlerFunc.ServeHTTP(0xc011b9d240, 0x7fe87b13ddb0, 0xc011371ac8, 0xc0153ee100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fe87b13ddb0, 0xc011371ac8, 0xc0153ee100)
net/http.HandlerFunc.ServeHTTP(0xc010d9eba0, 0x7fe87b13ddb0, 0xc011371ac8, 0xc0153ee100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fe87b13ddb0, 0xc011371ac8, 0xc0153ee100)
net/http.HandlerFunc.ServeHTTP(0xc011b9d280, 0x7fe87b13ddb0, 0xc011371ac8, 0xc0153ee100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fe87b13ddb0, 0xc011371ac8, 0xc0153ee000)
net/http.HandlerFunc.ServeHTTP(0xc01249f040, 0x7fe87b13ddb0, 0xc011371ac8, 0xc0153ee000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc01530df20, 0xc0114144a0, 0x7374040, 0xc011371ac8, 0xc0153ee000)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:46762]
I0513 17:39:09.607119  108115 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0513 17:39:09.607152  108115 healthz.go:184] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0513 17:39:09.607338  108115 wrap.go:47] GET /healthz: (1.304861ms) 500
goroutine 30392 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc015235650, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc015235650, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc015339140, 0x1f4)
net/http.Error(0x7fe87b13ddb0, 0xc014c4e968, 0xc0151e3040, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fe87b13ddb0, 0xc014c4e968, 0xc015327200)
net/http.HandlerFunc.ServeHTTP(0xc014851b20, 0x7fe87b13ddb0, 0xc014c4e968, 0xc015327200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0148a8400, 0x7fe87b13ddb0, 0xc014c4e968, 0xc015327200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc006d06cb0, 0x7fe87b13ddb0, 0xc014c4e968, 0xc015327200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x4356850, 0xe, 0xc012416e10, 0xc006d06cb0, 0x7fe87b13ddb0, 0xc014c4e968, 0xc015327200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fe87b13ddb0, 0xc014c4e968, 0xc015327200)
net/http.HandlerFunc.ServeHTTP(0xc011b9d240, 0x7fe87b13ddb0, 0xc014c4e968, 0xc015327200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fe87b13ddb0, 0xc014c4e968, 0xc015327200)
net/http.HandlerFunc.ServeHTTP(0xc010d9eba0, 0x7fe87b13ddb0, 0xc014c4e968, 0xc015327200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fe87b13ddb0, 0xc014c4e968, 0xc015327200)
net/http.HandlerFunc.ServeHTTP(0xc011b9d280, 0x7fe87b13ddb0, 0xc014c4e968, 0xc015327200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fe87b13ddb0, 0xc014c4e968, 0xc015327100)
net/http.HandlerFunc.ServeHTTP(0xc01249f040, 0x7fe87b13ddb0, 0xc014c4e968, 0xc015327100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc015332c60, 0xc0114144a0, 0x7374040, 0xc014c4e968, 0xc015327100)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46762]
I0513 17:39:09.626267  108115 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (8.763507ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46762]
I0513 17:39:09.626589  108115 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:basic-user
I0513 17:39:09.635852  108115 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:public-info-viewer: (1.060062ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46762]
I0513 17:39:09.666100  108115 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (6.239358ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46762]
I0513 17:39:09.666367  108115 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:public-info-viewer
I0513 17:39:09.675929  108115 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:node-proxier: (1.183681ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46762]
I0513 17:39:09.697416  108115 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.601888ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46762]
I0513 17:39:09.698297  108115 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:node-proxier
I0513 17:39:09.704549  108115 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0513 17:39:09.704588  108115 healthz.go:184] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0513 17:39:09.704844  108115 wrap.go:47] GET /healthz: (9.0826ms) 500
goroutine 30451 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc014c19650, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc014c19650, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc015346500, 0x1f4)
net/http.Error(0x7fe87b13ddb0, 0xc00dadf238, 0xc0151e3540, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fe87b13ddb0, 0xc00dadf238, 0xc014f63800)
net/http.HandlerFunc.ServeHTTP(0xc014851b20, 0x7fe87b13ddb0, 0xc00dadf238, 0xc014f63800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0148a8400, 0x7fe87b13ddb0, 0xc00dadf238, 0xc014f63800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc006d06cb0, 0x7fe87b13ddb0, 0xc00dadf238, 0xc014f63800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x4356850, 0xe, 0xc012416e10, 0xc006d06cb0, 0x7fe87b13ddb0, 0xc00dadf238, 0xc014f63800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fe87b13ddb0, 0xc00dadf238, 0xc014f63800)
net/http.HandlerFunc.ServeHTTP(0xc011b9d240, 0x7fe87b13ddb0, 0xc00dadf238, 0xc014f63800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fe87b13ddb0, 0xc00dadf238, 0xc014f63800)
net/http.HandlerFunc.ServeHTTP(0xc010d9eba0, 0x7fe87b13ddb0, 0xc00dadf238, 0xc014f63800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fe87b13ddb0, 0xc00dadf238, 0xc014f63800)
net/http.HandlerFunc.ServeHTTP(0xc011b9d280, 0x7fe87b13ddb0, 0xc00dadf238, 0xc014f63800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fe87b13ddb0, 0xc00dadf238, 0xc014f63700)
net/http.HandlerFunc.ServeHTTP(0xc01249f040, 0x7fe87b13ddb0, 0xc00dadf238, 0xc014f63700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc014f71800, 0xc0114144a0, 0x7374040, 0xc00dadf238, 0xc014f63700)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:46764]
I0513 17:39:09.720216  108115 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-controller-manager: (3.587276ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46762]
I0513 17:39:09.741703  108115 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0513 17:39:09.741739  108115 healthz.go:184] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0513 17:39:09.741959  108115 wrap.go:47] GET /healthz: (36.123797ms) 500
goroutine 30089 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0149a96c0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0149a96c0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc014c3c760, 0x1f4)
net/http.Error(0x7fe87b13ddb0, 0xc0147c90c0, 0xc00b403a40, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fe87b13ddb0, 0xc0147c90c0, 0xc015486500)
net/http.HandlerFunc.ServeHTTP(0xc014851b20, 0x7fe87b13ddb0, 0xc0147c90c0, 0xc015486500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0148a8400, 0x7fe87b13ddb0, 0xc0147c90c0, 0xc015486500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc006d06cb0, 0x7fe87b13ddb0, 0xc0147c90c0, 0xc015486500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x4356850, 0xe, 0xc012416e10, 0xc006d06cb0, 0x7fe87b13ddb0, 0xc0147c90c0, 0xc015486500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fe87b13ddb0, 0xc0147c90c0, 0xc015486500)
net/http.HandlerFunc.ServeHTTP(0xc011b9d240, 0x7fe87b13ddb0, 0xc0147c90c0, 0xc015486500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fe87b13ddb0, 0xc0147c90c0, 0xc015486500)
net/http.HandlerFunc.ServeHTTP(0xc010d9eba0, 0x7fe87b13ddb0, 0xc0147c90c0, 0xc015486500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fe87b13ddb0, 0xc0147c90c0, 0xc015486500)
net/http.HandlerFunc.ServeHTTP(0xc011b9d280, 0x7fe87b13ddb0, 0xc0147c90c0, 0xc015486500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fe87b13ddb0, 0xc0147c90c0, 0xc015486400)
net/http.HandlerFunc.ServeHTTP(0xc01249f040, 0x7fe87b13ddb0, 0xc0147c90c0, 0xc015486400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc0149ffaa0, 0xc0114144a0, 0x7374040, 0xc0147c90c0, 0xc015486400)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46764]
I0513 17:39:09.749048  108115 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (7.335046ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46762]
I0513 17:39:09.749536  108115 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-controller-manager
I0513 17:39:09.756143  108115 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-dns: (1.367582ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46762]
I0513 17:39:09.777482  108115 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.664147ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46762]
I0513 17:39:09.777739  108115 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-dns
I0513 17:39:09.797756  108115 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0513 17:39:09.797785  108115 healthz.go:184] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0513 17:39:09.797981  108115 wrap.go:47] GET /healthz: (2.332348ms) 500
goroutine 30400 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc015235d50, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc015235d50, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0154aa4c0, 0x1f4)
net/http.Error(0x7fe87b13ddb0, 0xc014c4ead8, 0xc00cbb8a00, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fe87b13ddb0, 0xc014c4ead8, 0xc0154d6500)
net/http.HandlerFunc.ServeHTTP(0xc014851b20, 0x7fe87b13ddb0, 0xc014c4ead8, 0xc0154d6500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0148a8400, 0x7fe87b13ddb0, 0xc014c4ead8, 0xc0154d6500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc006d06cb0, 0x7fe87b13ddb0, 0xc014c4ead8, 0xc0154d6500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x4356850, 0xe, 0xc012416e10, 0xc006d06cb0, 0x7fe87b13ddb0, 0xc014c4ead8, 0xc0154d6500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fe87b13ddb0, 0xc014c4ead8, 0xc0154d6500)
net/http.HandlerFunc.ServeHTTP(0xc011b9d240, 0x7fe87b13ddb0, 0xc014c4ead8, 0xc0154d6500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fe87b13ddb0, 0xc014c4ead8, 0xc0154d6500)
net/http.HandlerFunc.ServeHTTP(0xc010d9eba0, 0x7fe87b13ddb0, 0xc014c4ead8, 0xc0154d6500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fe87b13ddb0, 0xc014c4ead8, 0xc0154d6500)
net/http.HandlerFunc.ServeHTTP(0xc011b9d280, 0x7fe87b13ddb0, 0xc014c4ead8, 0xc0154d6500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fe87b13ddb0, 0xc014c4ead8, 0xc0154d6400)
net/http.HandlerFunc.ServeHTTP(0xc01249f040, 0x7fe87b13ddb0, 0xc014c4ead8, 0xc0154d6400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc015333860, 0xc0114144a0, 0x7374040, 0xc014c4ead8, 0xc0154d6400)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:46764]
I0513 17:39:09.800745  108115 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-scheduler: (5.957369ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46762]
I0513 17:39:09.827240  108115 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (12.207284ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46764]
I0513 17:39:09.828141  108115 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-scheduler
I0513 17:39:09.828771  108115 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0513 17:39:09.828837  108115 healthz.go:184] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0513 17:39:09.829996  108115 wrap.go:47] GET /healthz: (24.187096ms) 500
goroutine 30456 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc014c19b90, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc014c19b90, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0153471a0, 0x1f4)
net/http.Error(0x7fe87b13ddb0, 0xc00dadf300, 0xc01551c280, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fe87b13ddb0, 0xc00dadf300, 0xc015504100)
net/http.HandlerFunc.ServeHTTP(0xc014851b20, 0x7fe87b13ddb0, 0xc00dadf300, 0xc015504100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0148a8400, 0x7fe87b13ddb0, 0xc00dadf300, 0xc015504100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc006d06cb0, 0x7fe87b13ddb0, 0xc00dadf300, 0xc015504100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x4356850, 0xe, 0xc012416e10, 0xc006d06cb0, 0x7fe87b13ddb0, 0xc00dadf300, 0xc015504100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fe87b13ddb0, 0xc00dadf300, 0xc015504100)
net/http.HandlerFunc.ServeHTTP(0xc011b9d240, 0x7fe87b13ddb0, 0xc00dadf300, 0xc015504100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fe87b13ddb0, 0xc00dadf300, 0xc015504100)
net/http.HandlerFunc.ServeHTTP(0xc010d9eba0, 0x7fe87b13ddb0, 0xc00dadf300, 0xc015504100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fe87b13ddb0, 0xc00dadf300, 0xc015504100)
net/http.HandlerFunc.ServeHTTP(0xc011b9d280, 0x7fe87b13ddb0, 0xc00dadf300, 0xc015504100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fe87b13ddb0, 0xc00dadf300, 0xc015504000)
net/http.HandlerFunc.ServeHTTP(0xc01249f040, 0x7fe87b13ddb0, 0xc00dadf300, 0xc015504000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc014f71da0, 0xc0114144a0, 0x7374040, 0xc00dadf300, 0xc015504000)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46762]
I0513 17:39:09.853711  108115 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:volume-scheduler: (18.925409ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46762]
I0513 17:39:09.864020  108115 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (8.77011ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46762]
I0513 17:39:09.864277  108115 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:volume-scheduler
I0513 17:39:09.885779  108115 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:node: (10.994273ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46762]
I0513 17:39:09.899582  108115 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (4.822478ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46762]
I0513 17:39:09.899842  108115 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:node
I0513 17:39:09.904938  108115 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0513 17:39:09.904967  108115 healthz.go:184] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0513 17:39:09.905151  108115 wrap.go:47] GET /healthz: (9.462475ms) 500
goroutine 30094 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0149a9b20, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0149a9b20, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc014c3d820, 0x1f4)
net/http.Error(0x7fe87b13ddb0, 0xc0147c91f0, 0xc01551c8c0, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fe87b13ddb0, 0xc0147c91f0, 0xc015487700)
net/http.HandlerFunc.ServeHTTP(0xc014851b20, 0x7fe87b13ddb0, 0xc0147c91f0, 0xc015487700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0148a8400, 0x7fe87b13ddb0, 0xc0147c91f0, 0xc015487700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc006d06cb0, 0x7fe87b13ddb0, 0xc0147c91f0, 0xc015487700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x4356850, 0xe, 0xc012416e10, 0xc006d06cb0, 0x7fe87b13ddb0, 0xc0147c91f0, 0xc015487700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fe87b13ddb0, 0xc0147c91f0, 0xc015487700)
net/http.HandlerFunc.ServeHTTP(0xc011b9d240, 0x7fe87b13ddb0, 0xc0147c91f0, 0xc015487700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fe87b13ddb0, 0xc0147c91f0, 0xc015487700)
net/http.HandlerFunc.ServeHTTP(0xc010d9eba0, 0x7fe87b13ddb0, 0xc0147c91f0, 0xc015487700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fe87b13ddb0, 0xc0147c91f0, 0xc015487700)
net/http.HandlerFunc.ServeHTTP(0xc011b9d280, 0x7fe87b13ddb0, 0xc0147c91f0, 0xc015487700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fe87b13ddb0, 0xc0147c91f0, 0xc015487600)
net/http.HandlerFunc.ServeHTTP(0xc01249f040, 0x7fe87b13ddb0, 0xc0147c91f0, 0xc015487600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc015564300, 0xc0114144a0, 0x7374040, 0xc0147c91f0, 0xc015487600)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:46764]
I0513 17:39:09.906549  108115 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0513 17:39:09.906576  108115 healthz.go:184] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0513 17:39:09.906716  108115 wrap.go:47] GET /healthz: (989.888µs) 500
goroutine 30485 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0154fe700, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0154fe700, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0154ab680, 0x1f4)
net/http.Error(0x7fe87b13ddb0, 0xc014c4ec10, 0xc0151e3b80, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fe87b13ddb0, 0xc014c4ec10, 0xc0154d7600)
net/http.HandlerFunc.ServeHTTP(0xc014851b20, 0x7fe87b13ddb0, 0xc014c4ec10, 0xc0154d7600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0148a8400, 0x7fe87b13ddb0, 0xc014c4ec10, 0xc0154d7600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc006d06cb0, 0x7fe87b13ddb0, 0xc014c4ec10, 0xc0154d7600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x4356850, 0xe, 0xc012416e10, 0xc006d06cb0, 0x7fe87b13ddb0, 0xc014c4ec10, 0xc0154d7600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fe87b13ddb0, 0xc014c4ec10, 0xc0154d7600)
net/http.HandlerFunc.ServeHTTP(0xc011b9d240, 0x7fe87b13ddb0, 0xc014c4ec10, 0xc0154d7600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fe87b13ddb0, 0xc014c4ec10, 0xc0154d7600)
net/http.HandlerFunc.ServeHTTP(0xc010d9eba0, 0x7fe87b13ddb0, 0xc014c4ec10, 0xc0154d7600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fe87b13ddb0, 0xc014c4ec10, 0xc0154d7600)
net/http.HandlerFunc.ServeHTTP(0xc011b9d280, 0x7fe87b13ddb0, 0xc014c4ec10, 0xc0154d7600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fe87b13ddb0, 0xc014c4ec10, 0xc0154d7500)
net/http.HandlerFunc.ServeHTTP(0xc01249f040, 0x7fe87b13ddb0, 0xc014c4ec10, 0xc0154d7500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc0155481e0, 0xc0114144a0, 0x7374040, 0xc014c4ec10, 0xc0154d7500)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46764]
I0513 17:39:09.915979  108115 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:attachdetach-controller: (1.128118ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46764]
I0513 17:39:09.936868  108115 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.061445ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46764]
I0513 17:39:09.937130  108115 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:attachdetach-controller
I0513 17:39:09.956319  108115 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:clusterrole-aggregation-controller: (1.477407ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46764]
I0513 17:39:09.976850  108115 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.031203ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46764]
I0513 17:39:09.977083  108115 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:clusterrole-aggregation-controller
I0513 17:39:09.997078  108115 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:cronjob-controller: (2.269419ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46764]
I0513 17:39:09.997249  108115 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0513 17:39:09.997278  108115 healthz.go:184] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0513 17:39:09.997462  108115 wrap.go:47] GET /healthz: (1.798109ms) 500
goroutine 30443 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc015464d20, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc015464d20, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc01556e6a0, 0x1f4)
net/http.Error(0x7fe87b13ddb0, 0xc011371e78, 0xc0155d8140, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fe87b13ddb0, 0xc011371e78, 0xc0155a6b00)
net/http.HandlerFunc.ServeHTTP(0xc014851b20, 0x7fe87b13ddb0, 0xc011371e78, 0xc0155a6b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0148a8400, 0x7fe87b13ddb0, 0xc011371e78, 0xc0155a6b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc006d06cb0, 0x7fe87b13ddb0, 0xc011371e78, 0xc0155a6b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x4356850, 0xe, 0xc012416e10, 0xc006d06cb0, 0x7fe87b13ddb0, 0xc011371e78, 0xc0155a6b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fe87b13ddb0, 0xc011371e78, 0xc0155a6b00)
net/http.HandlerFunc.ServeHTTP(0xc011b9d240, 0x7fe87b13ddb0, 0xc011371e78, 0xc0155a6b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fe87b13ddb0, 0xc011371e78, 0xc0155a6b00)
net/http.HandlerFunc.ServeHTTP(0xc010d9eba0, 0x7fe87b13ddb0, 0xc011371e78, 0xc0155a6b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fe87b13ddb0, 0xc011371e78, 0xc0155a6b00)
net/http.HandlerFunc.ServeHTTP(0xc011b9d280, 0x7fe87b13ddb0, 0xc011371e78, 0xc0155a6b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fe87b13ddb0, 0xc011371e78, 0xc0155a6a00)
net/http.HandlerFunc.ServeHTTP(0xc01249f040, 0x7fe87b13ddb0, 0xc011371e78, 0xc0155a6a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc0153f1620, 0xc0114144a0, 0x7374040, 0xc011371e78, 0xc0155a6a00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:46762]
I0513 17:39:10.006794  108115 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0513 17:39:10.006840  108115 healthz.go:184] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0513 17:39:10.006990  108115 wrap.go:47] GET /healthz: (1.131609ms) 500
goroutine 30489 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0154fea80, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0154fea80, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0154abd60, 0x1f4)
net/http.Error(0x7fe87b13ddb0, 0xc014c4ec70, 0xc0155d8780, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fe87b13ddb0, 0xc014c4ec70, 0xc0154d7f00)
net/http.HandlerFunc.ServeHTTP(0xc014851b20, 0x7fe87b13ddb0, 0xc014c4ec70, 0xc0154d7f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0148a8400, 0x7fe87b13ddb0, 0xc014c4ec70, 0xc0154d7f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc006d06cb0, 0x7fe87b13ddb0, 0xc014c4ec70, 0xc0154d7f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x4356850, 0xe, 0xc012416e10, 0xc006d06cb0, 0x7fe87b13ddb0, 0xc014c4ec70, 0xc0154d7f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fe87b13ddb0, 0xc014c4ec70, 0xc0154d7f00)
net/http.HandlerFunc.ServeHTTP(0xc011b9d240, 0x7fe87b13ddb0, 0xc014c4ec70, 0xc0154d7f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fe87b13ddb0, 0xc014c4ec70, 0xc0154d7f00)
net/http.HandlerFunc.ServeHTTP(0xc010d9eba0, 0x7fe87b13ddb0, 0xc014c4ec70, 0xc0154d7f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fe87b13ddb0, 0xc014c4ec70, 0xc0154d7f00)
net/http.HandlerFunc.ServeHTTP(0xc011b9d280, 0x7fe87b13ddb0, 0xc014c4ec70, 0xc0154d7f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fe87b13ddb0, 0xc014c4ec70, 0xc0154d7e00)
net/http.HandlerFunc.ServeHTTP(0xc01249f040, 0x7fe87b13ddb0, 0xc014c4ec70, 0xc0154d7e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc015548720, 0xc0114144a0, 0x7374040, 0xc014c4ec70, 0xc0154d7e00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46762]
I0513 17:39:10.016958  108115 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.128813ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46762]
I0513 17:39:10.017169  108115 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:cronjob-controller
I0513 17:39:10.036216  108115 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:daemon-set-controller: (1.315638ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46762]
I0513 17:39:10.057356  108115 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.500894ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46762]
I0513 17:39:10.057784  108115 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:daemon-set-controller
I0513 17:39:10.076563  108115 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:deployment-controller: (1.715811ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46762]
I0513 17:39:10.096895  108115 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0513 17:39:10.096933  108115 healthz.go:184] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0513 17:39:10.097094  108115 wrap.go:47] GET /healthz: (1.433064ms) 500
goroutine 30501 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc01562a380, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc01562a380, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0156385c0, 0x1f4)
net/http.Error(0x7fe87b13ddb0, 0xc00d6ebdd0, 0xc015658280, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fe87b13ddb0, 0xc00d6ebdd0, 0xc0155cfc00)
net/http.HandlerFunc.ServeHTTP(0xc014851b20, 0x7fe87b13ddb0, 0xc00d6ebdd0, 0xc0155cfc00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0148a8400, 0x7fe87b13ddb0, 0xc00d6ebdd0, 0xc0155cfc00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc006d06cb0, 0x7fe87b13ddb0, 0xc00d6ebdd0, 0xc0155cfc00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x4356850, 0xe, 0xc012416e10, 0xc006d06cb0, 0x7fe87b13ddb0, 0xc00d6ebdd0, 0xc0155cfc00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fe87b13ddb0, 0xc00d6ebdd0, 0xc0155cfc00)
net/http.HandlerFunc.ServeHTTP(0xc011b9d240, 0x7fe87b13ddb0, 0xc00d6ebdd0, 0xc0155cfc00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fe87b13ddb0, 0xc00d6ebdd0, 0xc0155cfc00)
net/http.HandlerFunc.ServeHTTP(0xc010d9eba0, 0x7fe87b13ddb0, 0xc00d6ebdd0, 0xc0155cfc00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fe87b13ddb0, 0xc00d6ebdd0, 0xc0155cfc00)
net/http.HandlerFunc.ServeHTTP(0xc011b9d280, 0x7fe87b13ddb0, 0xc00d6ebdd0, 0xc0155cfc00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fe87b13ddb0, 0xc00d6ebdd0, 0xc0155cfb00)
net/http.HandlerFunc.ServeHTTP(0xc01249f040, 0x7fe87b13ddb0, 0xc00d6ebdd0, 0xc0155cfb00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc01559f800, 0xc0114144a0, 0x7374040, 0xc00d6ebdd0, 0xc0155cfb00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:46764]
I0513 17:39:10.097151  108115 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.342027ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46762]
I0513 17:39:10.097351  108115 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:deployment-controller
I0513 17:39:10.107006  108115 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0513 17:39:10.107039  108115 healthz.go:184] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0513 17:39:10.107193  108115 wrap.go:47] GET /healthz: (1.079601ms) 500
goroutine 30503 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc01562a460, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc01562a460, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0156387e0, 0x1f4)
net/http.Error(0x7fe87b13ddb0, 0xc00d6ebde0, 0xc0155d8dc0, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fe87b13ddb0, 0xc00d6ebde0, 0xc015674000)
net/http.HandlerFunc.ServeHTTP(0xc014851b20, 0x7fe87b13ddb0, 0xc00d6ebde0, 0xc015674000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0148a8400, 0x7fe87b13ddb0, 0xc00d6ebde0, 0xc015674000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc006d06cb0, 0x7fe87b13ddb0, 0xc00d6ebde0, 0xc015674000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x4356850, 0xe, 0xc012416e10, 0xc006d06cb0, 0x7fe87b13ddb0, 0xc00d6ebde0, 0xc015674000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fe87b13ddb0, 0xc00d6ebde0, 0xc015674000)
net/http.HandlerFunc.ServeHTTP(0xc011b9d240, 0x7fe87b13ddb0, 0xc00d6ebde0, 0xc015674000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fe87b13ddb0, 0xc00d6ebde0, 0xc015674000)
net/http.HandlerFunc.ServeHTTP(0xc010d9eba0, 0x7fe87b13ddb0, 0xc00d6ebde0, 0xc015674000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fe87b13ddb0, 0xc00d6ebde0, 0xc015674000)
net/http.HandlerFunc.ServeHTTP(0xc011b9d280, 0x7fe87b13ddb0, 0xc00d6ebde0, 0xc015674000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fe87b13ddb0, 0xc00d6ebde0, 0xc0155cff00)
net/http.HandlerFunc.ServeHTTP(0xc01249f040, 0x7fe87b13ddb0, 0xc00d6ebde0, 0xc0155cff00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc01559faa0, 0xc0114144a0, 0x7374040, 0xc00d6ebde0, 0xc0155cff00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46764]
I0513 17:39:10.115882  108115 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:disruption-controller: (1.092704ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46764]
I0513 17:39:10.136532  108115 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.729498ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46764]
I0513 17:39:10.136768  108115 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:disruption-controller
I0513 17:39:10.156226  108115 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:endpoint-controller: (1.41089ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46764]
I0513 17:39:10.176853  108115 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.024103ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46764]
I0513 17:39:10.177096  108115 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:endpoint-controller
I0513 17:39:10.195982  108115 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:expand-controller: (1.113288ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46764]
I0513 17:39:10.196419  108115 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0513 17:39:10.196452  108115 healthz.go:184] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0513 17:39:10.196667  108115 wrap.go:47] GET /healthz: (993.813µs) 500
goroutine 30465 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc01555c9a0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc01555c9a0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc01555bb20, 0x1f4)
net/http.Error(0x7fe87b13ddb0, 0xc00dadf588, 0xc00cbb9180, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fe87b13ddb0, 0xc00dadf588, 0xc0156ccd00)
net/http.HandlerFunc.ServeHTTP(0xc014851b20, 0x7fe87b13ddb0, 0xc00dadf588, 0xc0156ccd00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0148a8400, 0x7fe87b13ddb0, 0xc00dadf588, 0xc0156ccd00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc006d06cb0, 0x7fe87b13ddb0, 0xc00dadf588, 0xc0156ccd00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x4356850, 0xe, 0xc012416e10, 0xc006d06cb0, 0x7fe87b13ddb0, 0xc00dadf588, 0xc0156ccd00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fe87b13ddb0, 0xc00dadf588, 0xc0156ccd00)
net/http.HandlerFunc.ServeHTTP(0xc011b9d240, 0x7fe87b13ddb0, 0xc00dadf588, 0xc0156ccd00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fe87b13ddb0, 0xc00dadf588, 0xc0156ccd00)
net/http.HandlerFunc.ServeHTTP(0xc010d9eba0, 0x7fe87b13ddb0, 0xc00dadf588, 0xc0156ccd00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fe87b13ddb0, 0xc00dadf588, 0xc0156ccd00)
net/http.HandlerFunc.ServeHTTP(0xc011b9d280, 0x7fe87b13ddb0, 0xc00dadf588, 0xc0156ccd00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fe87b13ddb0, 0xc00dadf588, 0xc0156ccc00)
net/http.HandlerFunc.ServeHTTP(0xc01249f040, 0x7fe87b13ddb0, 0xc00dadf588, 0xc0156ccc00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc015539b60, 0xc0114144a0, 0x7374040, 0xc00dadf588, 0xc0156ccc00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:46762]
I0513 17:39:10.206755  108115 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0513 17:39:10.206790  108115 healthz.go:184] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0513 17:39:10.206993  108115 wrap.go:47] GET /healthz: (1.069919ms) 500
goroutine 30531 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc01555ca80, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc01555ca80, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0156ee0a0, 0x1f4)
net/http.Error(0x7fe87b13ddb0, 0xc00dadf5a0, 0xc01551cf00, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fe87b13ddb0, 0xc00dadf5a0, 0xc0156cd100)
net/http.HandlerFunc.ServeHTTP(0xc014851b20, 0x7fe87b13ddb0, 0xc00dadf5a0, 0xc0156cd100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0148a8400, 0x7fe87b13ddb0, 0xc00dadf5a0, 0xc0156cd100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc006d06cb0, 0x7fe87b13ddb0, 0xc00dadf5a0, 0xc0156cd100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x4356850, 0xe, 0xc012416e10, 0xc006d06cb0, 0x7fe87b13ddb0, 0xc00dadf5a0, 0xc0156cd100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fe87b13ddb0, 0xc00dadf5a0, 0xc0156cd100)
net/http.HandlerFunc.ServeHTTP(0xc011b9d240, 0x7fe87b13ddb0, 0xc00dadf5a0, 0xc0156cd100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fe87b13ddb0, 0xc00dadf5a0, 0xc0156cd100)
net/http.HandlerFunc.ServeHTTP(0xc010d9eba0, 0x7fe87b13ddb0, 0xc00dadf5a0, 0xc0156cd100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fe87b13ddb0, 0xc00dadf5a0, 0xc0156cd100)
net/http.HandlerFunc.ServeHTTP(0xc011b9d280, 0x7fe87b13ddb0, 0xc00dadf5a0, 0xc0156cd100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fe87b13ddb0, 0xc00dadf5a0, 0xc0156cd000)
net/http.HandlerFunc.ServeHTTP(0xc01249f040, 0x7fe87b13ddb0, 0xc00dadf5a0, 0xc0156cd000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc0156d8120, 0xc0114144a0, 0x7374040, 0xc00dadf5a0, 0xc0156cd000)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46762]
I0513 17:39:10.217344  108115 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.442653ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46762]
I0513 17:39:10.217642  108115 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:expand-controller
I0513 17:39:10.236397  108115 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:generic-garbage-collector: (1.39139ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46762]
I0513 17:39:10.257907  108115 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.112727ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46762]
I0513 17:39:10.259162  108115 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:generic-garbage-collector
I0513 17:39:10.276043  108115 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:horizontal-pod-autoscaler: (1.263959ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46762]
I0513 17:39:10.296997  108115 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.190663ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46762]
I0513 17:39:10.297278  108115 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:horizontal-pod-autoscaler
I0513 17:39:10.298392  108115 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0513 17:39:10.298430  108115 healthz.go:184] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0513 17:39:10.298654  108115 wrap.go:47] GET /healthz: (2.968129ms) 500
goroutine 30522 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0157260e0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0157260e0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0156bf300, 0x1f4)
net/http.Error(0x7fe87b13ddb0, 0xc0156aa180, 0xc0153f4b40, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fe87b13ddb0, 0xc0156aa180, 0xc0156bb300)
net/http.HandlerFunc.ServeHTTP(0xc014851b20, 0x7fe87b13ddb0, 0xc0156aa180, 0xc0156bb300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0148a8400, 0x7fe87b13ddb0, 0xc0156aa180, 0xc0156bb300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc006d06cb0, 0x7fe87b13ddb0, 0xc0156aa180, 0xc0156bb300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x4356850, 0xe, 0xc012416e10, 0xc006d06cb0, 0x7fe87b13ddb0, 0xc0156aa180, 0xc0156bb300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fe87b13ddb0, 0xc0156aa180, 0xc0156bb300)
net/http.HandlerFunc.ServeHTTP(0xc011b9d240, 0x7fe87b13ddb0, 0xc0156aa180, 0xc0156bb300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fe87b13ddb0, 0xc0156aa180, 0xc0156bb300)
net/http.HandlerFunc.ServeHTTP(0xc010d9eba0, 0x7fe87b13ddb0, 0xc0156aa180, 0xc0156bb300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fe87b13ddb0, 0xc0156aa180, 0xc0156bb300)
net/http.HandlerFunc.ServeHTTP(0xc011b9d280, 0x7fe87b13ddb0, 0xc0156aa180, 0xc0156bb300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fe87b13ddb0, 0xc0156aa180, 0xc0156bb200)
net/http.HandlerFunc.ServeHTTP(0xc01249f040, 0x7fe87b13ddb0, 0xc0156aa180, 0xc0156bb200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc015691080, 0xc0114144a0, 0x7374040, 0xc0156aa180, 0xc0156bb200)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:46764]
I0513 17:39:10.307019  108115 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0513 17:39:10.307059  108115 healthz.go:184] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0513 17:39:10.307258  108115 wrap.go:47] GET /healthz: (1.359844ms) 500
goroutine 30538 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc01555d570, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc01555d570, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0156efe20, 0x1f4)
net/http.Error(0x7fe87b13ddb0, 0xc00dadf718, 0xc00cbb9680, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fe87b13ddb0, 0xc00dadf718, 0xc015744500)
net/http.HandlerFunc.ServeHTTP(0xc014851b20, 0x7fe87b13ddb0, 0xc00dadf718, 0xc015744500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0148a8400, 0x7fe87b13ddb0, 0xc00dadf718, 0xc015744500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc006d06cb0, 0x7fe87b13ddb0, 0xc00dadf718, 0xc015744500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x4356850, 0xe, 0xc012416e10, 0xc006d06cb0, 0x7fe87b13ddb0, 0xc00dadf718, 0xc015744500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fe87b13ddb0, 0xc00dadf718, 0xc015744500)
net/http.HandlerFunc.ServeHTTP(0xc011b9d240, 0x7fe87b13ddb0, 0xc00dadf718, 0xc015744500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fe87b13ddb0, 0xc00dadf718, 0xc015744500)
net/http.HandlerFunc.ServeHTTP(0xc010d9eba0, 0x7fe87b13ddb0, 0xc00dadf718, 0xc015744500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fe87b13ddb0, 0xc00dadf718, 0xc015744500)
net/http.HandlerFunc.ServeHTTP(0xc011b9d280, 0x7fe87b13ddb0, 0xc00dadf718, 0xc015744500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fe87b13ddb0, 0xc00dadf718, 0xc015744400)
net/http.HandlerFunc.ServeHTTP(0xc01249f040, 0x7fe87b13ddb0, 0xc00dadf718, 0xc015744400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc0156d8a80, 0xc0114144a0, 0x7374040, 0xc00dadf718, 0xc015744400)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46764]
I0513 17:39:10.316432  108115 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:job-controller: (1.631447ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46764]
I0513 17:39:10.337001  108115 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.05994ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46764]
I0513 17:39:10.337353  108115 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:job-controller
I0513 17:39:10.356187  108115 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:namespace-controller: (1.33611ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46764]
I0513 17:39:10.377320  108115 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.492637ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46764]
I0513 17:39:10.377592  108115 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:namespace-controller
I0513 17:39:10.396401  108115 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:node-controller: (1.485897ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46764]
I0513 17:39:10.396630  108115 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0513 17:39:10.396651  108115 healthz.go:184] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0513 17:39:10.396815  108115 wrap.go:47] GET /healthz: (1.12812ms) 500
goroutine 30572 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc015738620, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc015738620, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc015621860, 0x1f4)
net/http.Error(0x7fe87b13ddb0, 0xc0147c93a0, 0xc0155d9400, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fe87b13ddb0, 0xc0147c93a0, 0xc0157b4000)
net/http.HandlerFunc.ServeHTTP(0xc014851b20, 0x7fe87b13ddb0, 0xc0147c93a0, 0xc0157b4000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0148a8400, 0x7fe87b13ddb0, 0xc0147c93a0, 0xc0157b4000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc006d06cb0, 0x7fe87b13ddb0, 0xc0147c93a0, 0xc0157b4000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x4356850, 0xe, 0xc012416e10, 0xc006d06cb0, 0x7fe87b13ddb0, 0xc0147c93a0, 0xc0157b4000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fe87b13ddb0, 0xc0147c93a0, 0xc0157b4000)
net/http.HandlerFunc.ServeHTTP(0xc011b9d240, 0x7fe87b13ddb0, 0xc0147c93a0, 0xc0157b4000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fe87b13ddb0, 0xc0147c93a0, 0xc0157b4000)
net/http.HandlerFunc.ServeHTTP(0xc010d9eba0, 0x7fe87b13ddb0, 0xc0147c93a0, 0xc0157b4000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fe87b13ddb0, 0xc0147c93a0, 0xc0157b4000)
net/http.HandlerFunc.ServeHTTP(0xc011b9d280, 0x7fe87b13ddb0, 0xc0147c93a0, 0xc0157b4000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fe87b13ddb0, 0xc0147c93a0, 0xc0156edf00)
net/http.HandlerFunc.ServeHTTP(0xc01249f040, 0x7fe87b13ddb0, 0xc0147c93a0, 0xc0156edf00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc015565c20, 0xc0114144a0, 0x7374040, 0xc0147c93a0, 0xc0156edf00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:46762]
I0513 17:39:10.406895  108115 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0513 17:39:10.406934  108115 healthz.go:184] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0513 17:39:10.407089  108115 wrap.go:47] GET /healthz: (1.239639ms) 500
goroutine 30558 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc015798460, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc015798460, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0156aff40, 0x1f4)
net/http.Error(0x7fe87b13ddb0, 0xc014c4f138, 0xc00cbb9b80, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fe87b13ddb0, 0xc014c4f138, 0xc01578ee00)
net/http.HandlerFunc.ServeHTTP(0xc014851b20, 0x7fe87b13ddb0, 0xc014c4f138, 0xc01578ee00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0148a8400, 0x7fe87b13ddb0, 0xc014c4f138, 0xc01578ee00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc006d06cb0, 0x7fe87b13ddb0, 0xc014c4f138, 0xc01578ee00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x4356850, 0xe, 0xc012416e10, 0xc006d06cb0, 0x7fe87b13ddb0, 0xc014c4f138, 0xc01578ee00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fe87b13ddb0, 0xc014c4f138, 0xc01578ee00)
net/http.HandlerFunc.ServeHTTP(0xc011b9d240, 0x7fe87b13ddb0, 0xc014c4f138, 0xc01578ee00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fe87b13ddb0, 0xc014c4f138, 0xc01578ee00)
net/http.HandlerFunc.ServeHTTP(0xc010d9eba0, 0x7fe87b13ddb0, 0xc014c4f138, 0xc01578ee00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fe87b13ddb0, 0xc014c4f138, 0xc01578ee00)
net/http.HandlerFunc.ServeHTTP(0xc011b9d280, 0x7fe87b13ddb0, 0xc014c4f138, 0xc01578ee00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fe87b13ddb0, 0xc014c4f138, 0xc01578ed00)
net/http.HandlerFunc.ServeHTTP(0xc01249f040, 0x7fe87b13ddb0, 0xc014c4f138, 0xc01578ed00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc0157967e0, 0xc0114144a0, 0x7374040, 0xc014c4f138, 0xc01578ed00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46762]
I0513 17:39:10.416982  108115 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.136805ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46762]
I0513 17:39:10.417295  108115 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:node-controller
I0513 17:39:10.436400  108115 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:persistent-volume-binder: (1.63887ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46762]
I0513 17:39:10.465234  108115 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (6.605478ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46762]
I0513 17:39:10.465577  108115 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:persistent-volume-binder
I0513 17:39:10.475928  108115 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pod-garbage-collector: (1.180673ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46762]
I0513 17:39:10.497078  108115 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.265562ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46762]
I0513 17:39:10.497322  108115 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pod-garbage-collector
I0513 17:39:10.498594  108115 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0513 17:39:10.498617  108115 healthz.go:184] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0513 17:39:10.498785  108115 wrap.go:47] GET /healthz: (1.09382ms) 500
goroutine 30645 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0152cd420, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0152cd420, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0157f4880, 0x1f4)
net/http.Error(0x7fe87b13ddb0, 0xc0153d62b8, 0xc0155d9a40, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fe87b13ddb0, 0xc0153d62b8, 0xc01581c400)
net/http.HandlerFunc.ServeHTTP(0xc014851b20, 0x7fe87b13ddb0, 0xc0153d62b8, 0xc01581c400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0148a8400, 0x7fe87b13ddb0, 0xc0153d62b8, 0xc01581c400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc006d06cb0, 0x7fe87b13ddb0, 0xc0153d62b8, 0xc01581c400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x4356850, 0xe, 0xc012416e10, 0xc006d06cb0, 0x7fe87b13ddb0, 0xc0153d62b8, 0xc01581c400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fe87b13ddb0, 0xc0153d62b8, 0xc01581c400)
net/http.HandlerFunc.ServeHTTP(0xc011b9d240, 0x7fe87b13ddb0, 0xc0153d62b8, 0xc01581c400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fe87b13ddb0, 0xc0153d62b8, 0xc01581c400)
net/http.HandlerFunc.ServeHTTP(0xc010d9eba0, 0x7fe87b13ddb0, 0xc0153d62b8, 0xc01581c400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fe87b13ddb0, 0xc0153d62b8, 0xc01581c400)
net/http.HandlerFunc.ServeHTTP(0xc011b9d280, 0x7fe87b13ddb0, 0xc0153d62b8, 0xc01581c400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fe87b13ddb0, 0xc0153d62b8, 0xc01581c300)
net/http.HandlerFunc.ServeHTTP(0xc01249f040, 0x7fe87b13ddb0, 0xc0153d62b8, 0xc01581c300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc0153b9680, 0xc0114144a0, 0x7374040, 0xc0153d62b8, 0xc01581c300)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:46764]
I0513 17:39:10.506697  108115 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0513 17:39:10.506738  108115 healthz.go:184] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0513 17:39:10.506880  108115 wrap.go:47] GET /healthz: (1.096929ms) 500
goroutine 30647 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0152cd500, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0152cd500, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0157f4ac0, 0x1f4)
net/http.Error(0x7fe87b13ddb0, 0xc0153d62e8, 0xc014c59680, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fe87b13ddb0, 0xc0153d62e8, 0xc01581cb00)
net/http.HandlerFunc.ServeHTTP(0xc014851b20, 0x7fe87b13ddb0, 0xc0153d62e8, 0xc01581cb00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0148a8400, 0x7fe87b13ddb0, 0xc0153d62e8, 0xc01581cb00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc006d06cb0, 0x7fe87b13ddb0, 0xc0153d62e8, 0xc01581cb00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x4356850, 0xe, 0xc012416e10, 0xc006d06cb0, 0x7fe87b13ddb0, 0xc0153d62e8, 0xc01581cb00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fe87b13ddb0, 0xc0153d62e8, 0xc01581cb00)
net/http.HandlerFunc.ServeHTTP(0xc011b9d240, 0x7fe87b13ddb0, 0xc0153d62e8, 0xc01581cb00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fe87b13ddb0, 0xc0153d62e8, 0xc01581cb00)
net/http.HandlerFunc.ServeHTTP(0xc010d9eba0, 0x7fe87b13ddb0, 0xc0153d62e8, 0xc01581cb00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fe87b13ddb0, 0xc0153d62e8, 0xc01581cb00)
net/http.HandlerFunc.ServeHTTP(0xc011b9d280, 0x7fe87b13ddb0, 0xc0153d62e8, 0xc01581cb00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fe87b13ddb0, 0xc0153d62e8, 0xc01581ca00)
net/http.HandlerFunc.ServeHTTP(0xc01249f040, 0x7fe87b13ddb0, 0xc0153d62e8, 0xc01581ca00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc0153b99e0, 0xc0114144a0, 0x7374040, 0xc0153d62e8, 0xc01581ca00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46764]
I0513 17:39:10.516624  108115 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:replicaset-controller: (1.142735ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46764]
I0513 17:39:10.537235  108115 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.350386ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46764]
I0513 17:39:10.537489  108115 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:replicaset-controller
I0513 17:39:10.557965  108115 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:replication-controller: (1.784206ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46764]
I0513 17:39:10.577356  108115 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.523411ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46764]
I0513 17:39:10.577639  108115 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:replication-controller
I0513 17:39:10.596184  108115 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:resourcequota-controller: (1.267864ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46764]
I0513 17:39:10.597031  108115 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0513 17:39:10.597068  108115 healthz.go:184] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0513 17:39:10.597217  108115 wrap.go:47] GET /healthz: (1.446471ms) 500
goroutine 30634 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc015727500, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc015727500, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc015801f80, 0x1f4)
net/http.Error(0x7fe87b13ddb0, 0xc0156aa450, 0xc014c59cc0, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fe87b13ddb0, 0xc0156aa450, 0xc01585cf00)
net/http.HandlerFunc.ServeHTTP(0xc014851b20, 0x7fe87b13ddb0, 0xc0156aa450, 0xc01585cf00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0148a8400, 0x7fe87b13ddb0, 0xc0156aa450, 0xc01585cf00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc006d06cb0, 0x7fe87b13ddb0, 0xc0156aa450, 0xc01585cf00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x4356850, 0xe, 0xc012416e10, 0xc006d06cb0, 0x7fe87b13ddb0, 0xc0156aa450, 0xc01585cf00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fe87b13ddb0, 0xc0156aa450, 0xc01585cf00)
net/http.HandlerFunc.ServeHTTP(0xc011b9d240, 0x7fe87b13ddb0, 0xc0156aa450, 0xc01585cf00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fe87b13ddb0, 0xc0156aa450, 0xc01585cf00)
net/http.HandlerFunc.ServeHTTP(0xc010d9eba0, 0x7fe87b13ddb0, 0xc0156aa450, 0xc01585cf00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fe87b13ddb0, 0xc0156aa450, 0xc01585cf00)
net/http.HandlerFunc.ServeHTTP(0xc011b9d280, 0x7fe87b13ddb0, 0xc0156aa450, 0xc01585cf00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fe87b13ddb0, 0xc0156aa450, 0xc01585ce00)
net/http.HandlerFunc.ServeHTTP(0xc01249f040, 0x7fe87b13ddb0, 0xc0156aa450, 0xc01585ce00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc015882000, 0xc0114144a0, 0x7374040, 0xc0156aa450, 0xc01585ce00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:46762]
I0513 17:39:10.606920  108115 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0513 17:39:10.606983  108115 healthz.go:184] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0513 17:39:10.607144  108115 wrap.go:47] GET /healthz: (1.226749ms) 500
goroutine 30658 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc01562bd50, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc01562bd50, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0158666c0, 0x1f4)
net/http.Error(0x7fe87b13ddb0, 0xc0157b2240, 0xc0158a8280, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fe87b13ddb0, 0xc0157b2240, 0xc01580db00)
net/http.HandlerFunc.ServeHTTP(0xc014851b20, 0x7fe87b13ddb0, 0xc0157b2240, 0xc01580db00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0148a8400, 0x7fe87b13ddb0, 0xc0157b2240, 0xc01580db00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc006d06cb0, 0x7fe87b13ddb0, 0xc0157b2240, 0xc01580db00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x4356850, 0xe, 0xc012416e10, 0xc006d06cb0, 0x7fe87b13ddb0, 0xc0157b2240, 0xc01580db00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fe87b13ddb0, 0xc0157b2240, 0xc01580db00)
net/http.HandlerFunc.ServeHTTP(0xc011b9d240, 0x7fe87b13ddb0, 0xc0157b2240, 0xc01580db00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fe87b13ddb0, 0xc0157b2240, 0xc01580db00)
net/http.HandlerFunc.ServeHTTP(0xc010d9eba0, 0x7fe87b13ddb0, 0xc0157b2240, 0xc01580db00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fe87b13ddb0, 0xc0157b2240, 0xc01580db00)
net/http.HandlerFunc.ServeHTTP(0xc011b9d280, 0x7fe87b13ddb0, 0xc0157b2240, 0xc01580db00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fe87b13ddb0, 0xc0157b2240, 0xc01580da00)
net/http.HandlerFunc.ServeHTTP(0xc01249f040, 0x7fe87b13ddb0, 0xc0157b2240, 0xc01580da00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc0156c5a40, 0xc0114144a0, 0x7374040, 0xc0157b2240, 0xc01580da00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46762]
I0513 17:39:10.616890  108115 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.817117ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46762]
I0513 17:39:10.617126  108115 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:resourcequota-controller
I0513 17:39:10.636323  108115 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:route-controller: (1.394651ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46762]
I0513 17:39:10.656776  108115 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.006775ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46762]
I0513 17:39:10.657039  108115 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:route-controller
I0513 17:39:10.676213  108115 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:service-account-controller: (1.366408ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46762]
I0513 17:39:10.696806  108115 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0513 17:39:10.696889  108115 healthz.go:184] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0513 17:39:10.697081  108115 wrap.go:47] GET /healthz: (1.11385ms) 500
goroutine 30638 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc015727c00, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc015727c00, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc015884c00, 0x1f4)
net/http.Error(0x7fe87b13ddb0, 0xc0156aa500, 0xc01591e140, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fe87b13ddb0, 0xc0156aa500, 0xc015916000)
net/http.HandlerFunc.ServeHTTP(0xc014851b20, 0x7fe87b13ddb0, 0xc0156aa500, 0xc015916000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0148a8400, 0x7fe87b13ddb0, 0xc0156aa500, 0xc015916000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc006d06cb0, 0x7fe87b13ddb0, 0xc0156aa500, 0xc015916000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x4356850, 0xe, 0xc012416e10, 0xc006d06cb0, 0x7fe87b13ddb0, 0xc0156aa500, 0xc015916000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fe87b13ddb0, 0xc0156aa500, 0xc015916000)
net/http.HandlerFunc.ServeHTTP(0xc011b9d240, 0x7fe87b13ddb0, 0xc0156aa500, 0xc015916000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fe87b13ddb0, 0xc0156aa500, 0xc015916000)
net/http.HandlerFunc.ServeHTTP(0xc010d9eba0, 0x7fe87b13ddb0, 0xc0156aa500, 0xc015916000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fe87b13ddb0, 0xc0156aa500, 0xc015916000)
net/http.HandlerFunc.ServeHTTP(0xc011b9d280, 0x7fe87b13ddb0, 0xc0156aa500, 0xc015916000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fe87b13ddb0, 0xc0156aa500, 0xc01585df00)
net/http.HandlerFunc.ServeHTTP(0xc01249f040, 0x7fe87b13ddb0, 0xc0156aa500, 0xc01585df00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc015882720, 0xc0114144a0, 0x7374040, 0xc0156aa500, 0xc01585df00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:46764]
I0513 17:39:10.697562  108115 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.723765ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46762]
I0513 17:39:10.697820  108115 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:service-account-controller
I0513 17:39:10.706785  108115 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0513 17:39:10.706819  108115 healthz.go:184] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0513 17:39:10.707010  108115 wrap.go:47] GET /healthz: (1.133625ms) 500
goroutine 30607 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc015798fc0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc015798fc0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc01593e0e0, 0x1f4)
net/http.Error(0x7fe87b13ddb0, 0xc014c4f358, 0xc015658f00, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fe87b13ddb0, 0xc014c4f358, 0xc015912700)
net/http.HandlerFunc.ServeHTTP(0xc014851b20, 0x7fe87b13ddb0, 0xc014c4f358, 0xc015912700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0148a8400, 0x7fe87b13ddb0, 0xc014c4f358, 0xc015912700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc006d06cb0, 0x7fe87b13ddb0, 0xc014c4f358, 0xc015912700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x4356850, 0xe, 0xc012416e10, 0xc006d06cb0, 0x7fe87b13ddb0, 0xc014c4f358, 0xc015912700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fe87b13ddb0, 0xc014c4f358, 0xc015912700)
net/http.HandlerFunc.ServeHTTP(0xc011b9d240, 0x7fe87b13ddb0, 0xc014c4f358, 0xc015912700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fe87b13ddb0, 0xc014c4f358, 0xc015912700)
net/http.HandlerFunc.ServeHTTP(0xc010d9eba0, 0x7fe87b13ddb0, 0xc014c4f358, 0xc015912700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fe87b13ddb0, 0xc014c4f358, 0xc015912700)
net/http.HandlerFunc.ServeHTTP(0xc011b9d280, 0x7fe87b13ddb0, 0xc014c4f358, 0xc015912700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fe87b13ddb0, 0xc014c4f358, 0xc015912600)
net/http.HandlerFunc.ServeHTTP(0xc01249f040, 0x7fe87b13ddb0, 0xc014c4f358, 0xc015912600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc015797b00, 0xc0114144a0, 0x7374040, 0xc014c4f358, 0xc015912600)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46762]
I0513 17:39:10.716155  108115 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:service-controller: (1.366254ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46762]
I0513 17:39:10.736926  108115 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.058422ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46762]
I0513 17:39:10.737182  108115 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:service-controller
I0513 17:39:10.756126  108115 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:statefulset-controller: (1.262554ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46762]
I0513 17:39:10.777263  108115 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.38071ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46762]
I0513 17:39:10.777660  108115 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:statefulset-controller
I0513 17:39:10.796079  108115 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:ttl-controller: (1.262142ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46762]
I0513 17:39:10.796521  108115 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0513 17:39:10.796552  108115 healthz.go:184] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0513 17:39:10.796705  108115 wrap.go:47] GET /healthz: (958.12µs) 500
goroutine 30665 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0158e4930, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0158e4930, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00daa4c40, 0x1f4)
net/http.Error(0x7fe87b13ddb0, 0xc0157b2408, 0xc0158a88c0, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fe87b13ddb0, 0xc0157b2408, 0xc0158eb100)
net/http.HandlerFunc.ServeHTTP(0xc014851b20, 0x7fe87b13ddb0, 0xc0157b2408, 0xc0158eb100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0148a8400, 0x7fe87b13ddb0, 0xc0157b2408, 0xc0158eb100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc006d06cb0, 0x7fe87b13ddb0, 0xc0157b2408, 0xc0158eb100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x4356850, 0xe, 0xc012416e10, 0xc006d06cb0, 0x7fe87b13ddb0, 0xc0157b2408, 0xc0158eb100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fe87b13ddb0, 0xc0157b2408, 0xc0158eb100)
net/http.HandlerFunc.ServeHTTP(0xc011b9d240, 0x7fe87b13ddb0, 0xc0157b2408, 0xc0158eb100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fe87b13ddb0, 0xc0157b2408, 0xc0158eb100)
net/http.HandlerFunc.ServeHTTP(0xc010d9eba0, 0x7fe87b13ddb0, 0xc0157b2408, 0xc0158eb100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fe87b13ddb0, 0xc0157b2408, 0xc0158eb100)
net/http.HandlerFunc.ServeHTTP(0xc011b9d280, 0x7fe87b13ddb0, 0xc0157b2408, 0xc0158eb100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fe87b13ddb0, 0xc0157b2408, 0xc0158eb000)
net/http.HandlerFunc.ServeHTTP(0xc01249f040, 0x7fe87b13ddb0, 0xc0157b2408, 0xc0158eb000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc01595cae0, 0xc0114144a0, 0x7374040, 0xc0157b2408, 0xc0158eb000)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:46764]
I0513 17:39:10.806900  108115 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0513 17:39:10.806940  108115 healthz.go:184] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0513 17:39:10.807127  108115 wrap.go:47] GET /healthz: (1.26831ms) 500
goroutine 30678 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0158fa930, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0158fa930, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc01597c320, 0x1f4)
net/http.Error(0x7fe87b13ddb0, 0xc0153d6690, 0xc01599c000, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fe87b13ddb0, 0xc0153d6690, 0xc0158d5f00)
net/http.HandlerFunc.ServeHTTP(0xc014851b20, 0x7fe87b13ddb0, 0xc0153d6690, 0xc0158d5f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0148a8400, 0x7fe87b13ddb0, 0xc0153d6690, 0xc0158d5f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc006d06cb0, 0x7fe87b13ddb0, 0xc0153d6690, 0xc0158d5f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x4356850, 0xe, 0xc012416e10, 0xc006d06cb0, 0x7fe87b13ddb0, 0xc0153d6690, 0xc0158d5f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fe87b13ddb0, 0xc0153d6690, 0xc0158d5f00)
net/http.HandlerFunc.ServeHTTP(0xc011b9d240, 0x7fe87b13ddb0, 0xc0153d6690, 0xc0158d5f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fe87b13ddb0, 0xc0153d6690, 0xc0158d5f00)
net/http.HandlerFunc.ServeHTTP(0xc010d9eba0, 0x7fe87b13ddb0, 0xc0153d6690, 0xc0158d5f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fe87b13ddb0, 0xc0153d6690, 0xc0158d5f00)
net/http.HandlerFunc.ServeHTTP(0xc011b9d280, 0x7fe87b13ddb0, 0xc0153d6690, 0xc0158d5f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fe87b13ddb0, 0xc0153d6690, 0xc0158d5e00)
net/http.HandlerFunc.ServeHTTP(0xc01249f040, 0x7fe87b13ddb0, 0xc0153d6690, 0xc0158d5e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc0158816e0, 0xc0114144a0, 0x7374040, 0xc0153d6690, 0xc0158d5e00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46764]
I0513 17:39:10.817060  108115 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.242643ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46764]
I0513 17:39:10.817341  108115 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:ttl-controller
I0513 17:39:10.836331  108115 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:certificate-controller: (1.454442ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46764]
I0513 17:39:10.856994  108115 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.155997ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46764]
I0513 17:39:10.857434  108115 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:certificate-controller
I0513 17:39:10.877038  108115 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pvc-protection-controller: (1.382383ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46764]
I0513 17:39:10.896933  108115 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0513 17:39:10.897018  108115 healthz.go:184] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0513 17:39:10.897211  108115 wrap.go:47] GET /healthz: (1.457287ms) 500
goroutine 30693 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc01595a7e0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc01595a7e0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc015885ee0, 0x1f4)
net/http.Error(0x7fe87b13ddb0, 0xc0156aa680, 0xc0158a8dc0, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fe87b13ddb0, 0xc0156aa680, 0xc015917200)
net/http.HandlerFunc.ServeHTTP(0xc014851b20, 0x7fe87b13ddb0, 0xc0156aa680, 0xc015917200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0148a8400, 0x7fe87b13ddb0, 0xc0156aa680, 0xc015917200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc006d06cb0, 0x7fe87b13ddb0, 0xc0156aa680, 0xc015917200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x4356850, 0xe, 0xc012416e10, 0xc006d06cb0, 0x7fe87b13ddb0, 0xc0156aa680, 0xc015917200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fe87b13ddb0, 0xc0156aa680, 0xc015917200)
net/http.HandlerFunc.ServeHTTP(0xc011b9d240, 0x7fe87b13ddb0, 0xc0156aa680, 0xc015917200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fe87b13ddb0, 0xc0156aa680, 0xc015917200)
net/http.HandlerFunc.ServeHTTP(0xc010d9eba0, 0x7fe87b13ddb0, 0xc0156aa680, 0xc015917200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fe87b13ddb0, 0xc0156aa680, 0xc015917200)
net/http.HandlerFunc.ServeHTTP(0xc011b9d280, 0x7fe87b13ddb0, 0xc0156aa680, 0xc015917200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fe87b13ddb0, 0xc0156aa680, 0xc015917100)
net/http.HandlerFunc.ServeHTTP(0xc01249f040, 0x7fe87b13ddb0, 0xc0156aa680, 0xc015917100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc0158837a0, 0xc0114144a0, 0x7374040, 0xc0156aa680, 0xc015917100)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:46762]
I0513 17:39:10.897219  108115 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.306106ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46764]
I0513 17:39:10.897596  108115 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pvc-protection-controller
I0513 17:39:10.906968  108115 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0513 17:39:10.907004  108115 healthz.go:184] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0513 17:39:10.907169  108115 wrap.go:47] GET /healthz: (1.243226ms) 500
goroutine 30670 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0158e4d20, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0158e4d20, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00daa5900, 0x1f4)
net/http.Error(0x7fe87b13ddb0, 0xc0157b24c8, 0xc01591e8c0, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fe87b13ddb0, 0xc0157b24c8, 0xc0158ebb00)
net/http.HandlerFunc.ServeHTTP(0xc014851b20, 0x7fe87b13ddb0, 0xc0157b24c8, 0xc0158ebb00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0148a8400, 0x7fe87b13ddb0, 0xc0157b24c8, 0xc0158ebb00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc006d06cb0, 0x7fe87b13ddb0, 0xc0157b24c8, 0xc0158ebb00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x4356850, 0xe, 0xc012416e10, 0xc006d06cb0, 0x7fe87b13ddb0, 0xc0157b24c8, 0xc0158ebb00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fe87b13ddb0, 0xc0157b24c8, 0xc0158ebb00)
net/http.HandlerFunc.ServeHTTP(0xc011b9d240, 0x7fe87b13ddb0, 0xc0157b24c8, 0xc0158ebb00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fe87b13ddb0, 0xc0157b24c8, 0xc0158ebb00)
net/http.HandlerFunc.ServeHTTP(0xc010d9eba0, 0x7fe87b13ddb0, 0xc0157b24c8, 0xc0158ebb00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fe87b13ddb0, 0xc0157b24c8, 0xc0158ebb00)
net/http.HandlerFunc.ServeHTTP(0xc011b9d280, 0x7fe87b13ddb0, 0xc0157b24c8, 0xc0158ebb00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fe87b13ddb0, 0xc0157b24c8, 0xc0158eba00)
net/http.HandlerFunc.ServeHTTP(0xc01249f040, 0x7fe87b13ddb0, 0xc0157b24c8, 0xc0158eba00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc01595d380, 0xc0114144a0, 0x7374040, 0xc0157b24c8, 0xc0158eba00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46764]
I0513 17:39:10.916766  108115 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pv-protection-controller: (1.543386ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46764]
I0513 17:39:10.937120  108115 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.054738ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46764]
I0513 17:39:10.937365  108115 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pv-protection-controller
I0513 17:39:10.956102  108115 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/extension-apiserver-authentication-reader: (1.293796ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46764]
I0513 17:39:10.958417  108115 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.62504ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46764]
I0513 17:39:10.976913  108115 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.082867ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46764]
I0513 17:39:10.977334  108115 storage_rbac.go:254] created role.rbac.authorization.k8s.io/extension-apiserver-authentication-reader in kube-system
I0513 17:39:10.996180  108115 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:bootstrap-signer: (1.35735ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46764]
I0513 17:39:10.996613  108115 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0513 17:39:10.996650  108115 healthz.go:184] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0513 17:39:10.996827  108115 wrap.go:47] GET /healthz: (1.180029ms) 500
goroutine 30726 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0158e5880, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0158e5880, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc015a231c0, 0x1f4)
net/http.Error(0x7fe87b13ddb0, 0xc0157b25e0, 0xc01591ef00, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fe87b13ddb0, 0xc0157b25e0, 0xc015a1d500)
net/http.HandlerFunc.ServeHTTP(0xc014851b20, 0x7fe87b13ddb0, 0xc0157b25e0, 0xc015a1d500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0148a8400, 0x7fe87b13ddb0, 0xc0157b25e0, 0xc015a1d500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc006d06cb0, 0x7fe87b13ddb0, 0xc0157b25e0, 0xc015a1d500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x4356850, 0xe, 0xc012416e10, 0xc006d06cb0, 0x7fe87b13ddb0, 0xc0157b25e0, 0xc015a1d500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fe87b13ddb0, 0xc0157b25e0, 0xc015a1d500)
net/http.HandlerFunc.ServeHTTP(0xc011b9d240, 0x7fe87b13ddb0, 0xc0157b25e0, 0xc015a1d500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fe87b13ddb0, 0xc0157b25e0, 0xc015a1d500)
net/http.HandlerFunc.ServeHTTP(0xc010d9eba0, 0x7fe87b13ddb0, 0xc0157b25e0, 0xc015a1d500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fe87b13ddb0, 0xc0157b25e0, 0xc015a1d500)
net/http.HandlerFunc.ServeHTTP(0xc011b9d280, 0x7fe87b13ddb0, 0xc0157b25e0, 0xc015a1d500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fe87b13ddb0, 0xc0157b25e0, 0xc015a1d400)
net/http.HandlerFunc.ServeHTTP(0xc01249f040, 0x7fe87b13ddb0, 0xc0157b25e0, 0xc015a1d400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc015a4e0c0, 0xc0114144a0, 0x7374040, 0xc0157b25e0, 0xc015a1d400)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:46762]
I0513 17:39:10.998168  108115 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.269769ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46764]
I0513 17:39:11.006938  108115 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0513 17:39:11.006976  108115 healthz.go:184] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0513 17:39:11.007187  108115 wrap.go:47] GET /healthz: (1.227776ms) 500
goroutine 30695 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc01595acb0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc01595acb0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0159e66c0, 0x1f4)
net/http.Error(0x7fe87b13ddb0, 0xc0156aa6e8, 0xc015659680, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fe87b13ddb0, 0xc0156aa6e8, 0xc015917700)
net/http.HandlerFunc.ServeHTTP(0xc014851b20, 0x7fe87b13ddb0, 0xc0156aa6e8, 0xc015917700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0148a8400, 0x7fe87b13ddb0, 0xc0156aa6e8, 0xc015917700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc006d06cb0, 0x7fe87b13ddb0, 0xc0156aa6e8, 0xc015917700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x4356850, 0xe, 0xc012416e10, 0xc006d06cb0, 0x7fe87b13ddb0, 0xc0156aa6e8, 0xc015917700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fe87b13ddb0, 0xc0156aa6e8, 0xc015917700)
net/http.HandlerFunc.ServeHTTP(0xc011b9d240, 0x7fe87b13ddb0, 0xc0156aa6e8, 0xc015917700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fe87b13ddb0, 0xc0156aa6e8, 0xc015917700)
net/http.HandlerFunc.ServeHTTP(0xc010d9eba0, 0x7fe87b13ddb0, 0xc0156aa6e8, 0xc015917700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fe87b13ddb0, 0xc0156aa6e8, 0xc015917700)
net/http.HandlerFunc.ServeHTTP(0xc011b9d280, 0x7fe87b13ddb0, 0xc0156aa6e8, 0xc015917700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fe87b13ddb0, 0xc0156aa6e8, 0xc015917600)
net/http.HandlerFunc.ServeHTTP(0xc01249f040, 0x7fe87b13ddb0, 0xc0156aa6e8, 0xc015917600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc015883a40, 0xc0114144a0, 0x7374040, 0xc0156aa6e8, 0xc015917600)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46764]
I0513 17:39:11.016703  108115 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (1.883318ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46764]
I0513 17:39:11.016952  108115 storage_rbac.go:254] created role.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-system
I0513 17:39:11.036343  108115 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:cloud-provider: (1.448792ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46764]
I0513 17:39:11.038238  108115 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.358046ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46764]
I0513 17:39:11.057150  108115 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.301009ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46764]
I0513 17:39:11.057437  108115 storage_rbac.go:254] created role.rbac.authorization.k8s.io/system:controller:cloud-provider in kube-system
I0513 17:39:11.076396  108115 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:token-cleaner: (1.522207ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46764]
I0513 17:39:11.078738  108115 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.746319ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46764]
I0513 17:39:11.096970  108115 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.182808ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46764]
I0513 17:39:11.097064  108115 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0513 17:39:11.097092  108115 healthz.go:184] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0513 17:39:11.097253  108115 storage_rbac.go:254] created role.rbac.authorization.k8s.io/system:controller:token-cleaner in kube-system
I0513 17:39:11.097267  108115 wrap.go:47] GET /healthz: (1.585616ms) 500
goroutine 30757 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0158fbe30, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0158fbe30, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc015a0dc40, 0x1f4)
net/http.Error(0x7fe87b13ddb0, 0xc0153d69c8, 0xc01551d540, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fe87b13ddb0, 0xc0153d69c8, 0xc015ac8c00)
net/http.HandlerFunc.ServeHTTP(0xc014851b20, 0x7fe87b13ddb0, 0xc0153d69c8, 0xc015ac8c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0148a8400, 0x7fe87b13ddb0, 0xc0153d69c8, 0xc015ac8c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc006d06cb0, 0x7fe87b13ddb0, 0xc0153d69c8, 0xc015ac8c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x4356850, 0xe, 0xc012416e10, 0xc006d06cb0, 0x7fe87b13ddb0, 0xc0153d69c8, 0xc015ac8c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fe87b13ddb0, 0xc0153d69c8, 0xc015ac8c00)
net/http.HandlerFunc.ServeHTTP(0xc011b9d240, 0x7fe87b13ddb0, 0xc0153d69c8, 0xc015ac8c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fe87b13ddb0, 0xc0153d69c8, 0xc015ac8c00)
net/http.HandlerFunc.ServeHTTP(0xc010d9eba0, 0x7fe87b13ddb0, 0xc0153d69c8, 0xc015ac8c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fe87b13ddb0, 0xc0153d69c8, 0xc015ac8c00)
net/http.HandlerFunc.ServeHTTP(0xc011b9d280, 0x7fe87b13ddb0, 0xc0153d69c8, 0xc015ac8c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fe87b13ddb0, 0xc0153d69c8, 0xc015ac8b00)
net/http.HandlerFunc.ServeHTTP(0xc01249f040, 0x7fe87b13ddb0, 0xc0153d69c8, 0xc015ac8b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc0159e0e40, 0xc0114144a0, 0x7374040, 0xc0153d69c8, 0xc015ac8b00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:46762]
I0513 17:39:11.107392  108115 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0513 17:39:11.107434  108115 healthz.go:184] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0513 17:39:11.107795  108115 wrap.go:47] GET /healthz: (1.927026ms) 500
goroutine 30613 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc01555d9d0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc01555d9d0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc015791040, 0x1f4)
net/http.Error(0x7fe87b13ddb0, 0xc00dadf828, 0xc015659cc0, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fe87b13ddb0, 0xc00dadf828, 0xc015745500)
net/http.HandlerFunc.ServeHTTP(0xc014851b20, 0x7fe87b13ddb0, 0xc00dadf828, 0xc015745500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0148a8400, 0x7fe87b13ddb0, 0xc00dadf828, 0xc015745500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc006d06cb0, 0x7fe87b13ddb0, 0xc00dadf828, 0xc015745500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x4356850, 0xe, 0xc012416e10, 0xc006d06cb0, 0x7fe87b13ddb0, 0xc00dadf828, 0xc015745500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fe87b13ddb0, 0xc00dadf828, 0xc015745500)
net/http.HandlerFunc.ServeHTTP(0xc011b9d240, 0x7fe87b13ddb0, 0xc00dadf828, 0xc015745500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fe87b13ddb0, 0xc00dadf828, 0xc015745500)
net/http.HandlerFunc.ServeHTTP(0xc010d9eba0, 0x7fe87b13ddb0, 0xc00dadf828, 0xc015745500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fe87b13ddb0, 0xc00dadf828, 0xc015745500)
net/http.HandlerFunc.ServeHTTP(0xc011b9d280, 0x7fe87b13ddb0, 0xc00dadf828, 0xc015745500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fe87b13ddb0, 0xc00dadf828, 0xc015745400)
net/http.HandlerFunc.ServeHTTP(0xc01249f040, 0x7fe87b13ddb0, 0xc00dadf828, 0xc015745400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc0156d9a40, 0xc0114144a0, 0x7374040, 0xc00dadf828, 0xc015745400)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46762]
I0513 17:39:11.116194  108115 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system::leader-locking-kube-controller-manager: (1.373068ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46762]
I0513 17:39:11.118163  108115 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.431111ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46762]
I0513 17:39:11.136965  108115 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.150644ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46762]
I0513 17:39:11.137215  108115 storage_rbac.go:254] created role.rbac.authorization.k8s.io/system::leader-locking-kube-controller-manager in kube-system
I0513 17:39:11.156238  108115 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system::leader-locking-kube-scheduler: (1.416126ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46762]
I0513 17:39:11.158155  108115 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.317571ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46762]
I0513 17:39:11.176891  108115 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.037741ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46762]
I0513 17:39:11.177439  108115 storage_rbac.go:254] created role.rbac.authorization.k8s.io/system::leader-locking-kube-scheduler in kube-system
I0513 17:39:11.196295  108115 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/roles/system:controller:bootstrap-signer: (1.556406ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46762]
I0513 17:39:11.196379  108115 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0513 17:39:11.196412  108115 healthz.go:184] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0513 17:39:11.196626  108115 wrap.go:47] GET /healthz: (1.013334ms) 500
goroutine 30748 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc015b802a0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc015b802a0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc015aff6a0, 0x1f4)
net/http.Error(0x7fe87b13ddb0, 0xc0147c9818, 0xc015b3a280, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fe87b13ddb0, 0xc0147c9818, 0xc015b73100)
net/http.HandlerFunc.ServeHTTP(0xc014851b20, 0x7fe87b13ddb0, 0xc0147c9818, 0xc015b73100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0148a8400, 0x7fe87b13ddb0, 0xc0147c9818, 0xc015b73100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc006d06cb0, 0x7fe87b13ddb0, 0xc0147c9818, 0xc015b73100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x4356850, 0xe, 0xc012416e10, 0xc006d06cb0, 0x7fe87b13ddb0, 0xc0147c9818, 0xc015b73100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fe87b13ddb0, 0xc0147c9818, 0xc015b73100)
net/http.HandlerFunc.ServeHTTP(0xc011b9d240, 0x7fe87b13ddb0, 0xc0147c9818, 0xc015b73100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fe87b13ddb0, 0xc0147c9818, 0xc015b73100)
net/http.HandlerFunc.ServeHTTP(0xc010d9eba0, 0x7fe87b13ddb0, 0xc0147c9818, 0xc015b73100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fe87b13ddb0, 0xc0147c9818, 0xc015b73100)
net/http.HandlerFunc.ServeHTTP(0xc011b9d280, 0x7fe87b13ddb0, 0xc0147c9818, 0xc015b73100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fe87b13ddb0, 0xc0147c9818, 0xc015b73000)
net/http.HandlerFunc.ServeHTTP(0xc01249f040, 0x7fe87b13ddb0, 0xc0147c9818, 0xc015b73000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc0159bf920, 0xc0114144a0, 0x7374040, 0xc0147c9818, 0xc015b73000)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:46764]
I0513 17:39:11.197948  108115 wrap.go:47] GET /api/v1/namespaces/kube-public: (1.17485ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46762]
I0513 17:39:11.207008  108115 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0513 17:39:11.207078  108115 healthz.go:184] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0513 17:39:11.210361  108115 wrap.go:47] GET /healthz: (3.864979ms) 500
goroutine 30622 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc015b5a460, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc015b5a460, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc015b6ab40, 0x1f4)
net/http.Error(0x7fe87b13ddb0, 0xc00dadf988, 0xc015bb4280, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fe87b13ddb0, 0xc00dadf988, 0xc015b50d00)
net/http.HandlerFunc.ServeHTTP(0xc014851b20, 0x7fe87b13ddb0, 0xc00dadf988, 0xc015b50d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0148a8400, 0x7fe87b13ddb0, 0xc00dadf988, 0xc015b50d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc006d06cb0, 0x7fe87b13ddb0, 0xc00dadf988, 0xc015b50d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x4356850, 0xe, 0xc012416e10, 0xc006d06cb0, 0x7fe87b13ddb0, 0xc00dadf988, 0xc015b50d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fe87b13ddb0, 0xc00dadf988, 0xc015b50d00)
net/http.HandlerFunc.ServeHTTP(0xc011b9d240, 0x7fe87b13ddb0, 0xc00dadf988, 0xc015b50d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fe87b13ddb0, 0xc00dadf988, 0xc015b50d00)
net/http.HandlerFunc.ServeHTTP(0xc010d9eba0, 0x7fe87b13ddb0, 0xc00dadf988, 0xc015b50d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fe87b13ddb0, 0xc00dadf988, 0xc015b50d00)
net/http.HandlerFunc.ServeHTTP(0xc011b9d280, 0x7fe87b13ddb0, 0xc00dadf988, 0xc015b50d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fe87b13ddb0, 0xc00dadf988, 0xc015b50c00)
net/http.HandlerFunc.ServeHTTP(0xc01249f040, 0x7fe87b13ddb0, 0xc00dadf988, 0xc015b50c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc015b40960, 0xc0114144a0, 0x7374040, 0xc00dadf988, 0xc015b50c00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46762]
I0513 17:39:11.238563  108115 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/roles: (21.042106ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46762]
I0513 17:39:11.239017  108115 storage_rbac.go:254] created role.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-public
I0513 17:39:11.247462  108115 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::extension-apiserver-authentication-reader: (7.62577ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46762]
I0513 17:39:11.250425  108115 wrap.go:47] GET /api/v1/namespaces/kube-system: (2.22851ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46762]
I0513 17:39:11.258011  108115 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (3.101673ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46762]
I0513 17:39:11.258292  108115 storage_rbac.go:284] created rolebinding.rbac.authorization.k8s.io/system::extension-apiserver-authentication-reader in kube-system
I0513 17:39:11.294636  108115 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::leader-locking-kube-controller-manager: (3.474426ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46762]
I0513 17:39:11.299140  108115 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0513 17:39:11.299186  108115 healthz.go:184] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0513 17:39:11.299386  108115 wrap.go:47] GET /healthz: (3.6045ms) 500
goroutine 30718 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc015e802a0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc015e802a0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc015aeed40, 0x1f4)
net/http.Error(0x7fe87b13ddb0, 0xc01536c7c0, 0xc015b3aa00, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fe87b13ddb0, 0xc01536c7c0, 0xc015e7cb00)
net/http.HandlerFunc.ServeHTTP(0xc014851b20, 0x7fe87b13ddb0, 0xc01536c7c0, 0xc015e7cb00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0148a8400, 0x7fe87b13ddb0, 0xc01536c7c0, 0xc015e7cb00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc006d06cb0, 0x7fe87b13ddb0, 0xc01536c7c0, 0xc015e7cb00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x4356850, 0xe, 0xc012416e10, 0xc006d06cb0, 0x7fe87b13ddb0, 0xc01536c7c0, 0xc015e7cb00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fe87b13ddb0, 0xc01536c7c0, 0xc015e7cb00)
net/http.HandlerFunc.ServeHTTP(0xc011b9d240, 0x7fe87b13ddb0, 0xc01536c7c0, 0xc015e7cb00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fe87b13ddb0, 0xc01536c7c0, 0xc015e7cb00)
net/http.HandlerFunc.ServeHTTP(0xc010d9eba0, 0x7fe87b13ddb0, 0xc01536c7c0, 0xc015e7cb00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fe87b13ddb0, 0xc01536c7c0, 0xc015e7cb00)
net/http.HandlerFunc.ServeHTTP(0xc011b9d280, 0x7fe87b13ddb0, 0xc01536c7c0, 0xc015e7cb00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fe87b13ddb0, 0xc01536c7c0, 0xc015e7ca00)
net/http.HandlerFunc.ServeHTTP(0xc01249f040, 0x7fe87b13ddb0, 0xc01536c7c0, 0xc015e7ca00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc0157fbc80, 0xc0114144a0, 0x7374040, 0xc01536c7c0, 0xc015e7ca00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:46764]
I0513 17:39:11.299717  108115 wrap.go:47] GET /api/v1/namespaces/kube-system: (4.539635ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46762]
I0513 17:39:11.308026  108115 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (7.704159ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46762]
I0513 17:39:11.311544  108115 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0513 17:39:11.311571  108115 healthz.go:184] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0513 17:39:11.312098  108115 wrap.go:47] GET /healthz: (1.485751ms) 500
goroutine 30787 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc015798000, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc015798000, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc01593e080, 0x1f4)
net/http.Error(0x7fe87b13ddb0, 0xc01536c000, 0xc01599c280, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fe87b13ddb0, 0xc01536c000, 0xc0114e6300)
net/http.HandlerFunc.ServeHTTP(0xc014851b20, 0x7fe87b13ddb0, 0xc01536c000, 0xc0114e6300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0148a8400, 0x7fe87b13ddb0, 0xc01536c000, 0xc0114e6300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc006d06cb0, 0x7fe87b13ddb0, 0xc01536c000, 0xc0114e6300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x4356850, 0xe, 0xc012416e10, 0xc006d06cb0, 0x7fe87b13ddb0, 0xc01536c000, 0xc0114e6300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fe87b13ddb0, 0xc01536c000, 0xc0114e6300)
net/http.HandlerFunc.ServeHTTP(0xc011b9d240, 0x7fe87b13ddb0, 0xc01536c000, 0xc0114e6300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fe87b13ddb0, 0xc01536c000, 0xc0114e6300)
net/http.HandlerFunc.ServeHTTP(0xc010d9eba0, 0x7fe87b13ddb0, 0xc01536c000, 0xc0114e6300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fe87b13ddb0, 0xc01536c000, 0xc0114e6300)
net/http.HandlerFunc.ServeHTTP(0xc011b9d280, 0x7fe87b13ddb0, 0xc01536c000, 0xc0114e6300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fe87b13ddb0, 0xc01536c000, 0xc0114e6200)
net/http.HandlerFunc.ServeHTTP(0xc01249f040, 0x7fe87b13ddb0, 0xc01536c000, 0xc0114e6200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc0114b8120, 0xc0114144a0, 0x7374040, 0xc01536c000, 0xc0114e6200)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46762]
I0513 17:39:11.312148  108115 storage_rbac.go:284] created rolebinding.rbac.authorization.k8s.io/system::leader-locking-kube-controller-manager in kube-system
I0513 17:39:11.316138  108115 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::leader-locking-kube-scheduler: (1.453303ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46762]
I0513 17:39:11.318219  108115 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.408852ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46762]
I0513 17:39:11.337110  108115 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.244963ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46762]
I0513 17:39:11.337400  108115 storage_rbac.go:284] created rolebinding.rbac.authorization.k8s.io/system::leader-locking-kube-scheduler in kube-system
I0513 17:39:11.356067  108115 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:bootstrap-signer: (1.23899ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46762]
I0513 17:39:11.357795  108115 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.25767ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46762]
I0513 17:39:11.377674  108115 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (1.988964ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46762]
I0513 17:39:11.377937  108115 storage_rbac.go:284] created rolebinding.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-system
I0513 17:39:11.396288  108115 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:cloud-provider: (1.530574ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46762]
I0513 17:39:11.396558  108115 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0513 17:39:11.396587  108115 healthz.go:184] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0513 17:39:11.396829  108115 wrap.go:47] GET /healthz: (1.182478ms) 500
goroutine 30794 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0157988c0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0157988c0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc01593f120, 0x1f4)
net/http.Error(0x7fe87b13ddb0, 0xc01536c0f0, 0xc015bb4780, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fe87b13ddb0, 0xc01536c0f0, 0xc0114e7400)
net/http.HandlerFunc.ServeHTTP(0xc014851b20, 0x7fe87b13ddb0, 0xc01536c0f0, 0xc0114e7400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0148a8400, 0x7fe87b13ddb0, 0xc01536c0f0, 0xc0114e7400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc006d06cb0, 0x7fe87b13ddb0, 0xc01536c0f0, 0xc0114e7400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x4356850, 0xe, 0xc012416e10, 0xc006d06cb0, 0x7fe87b13ddb0, 0xc01536c0f0, 0xc0114e7400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fe87b13ddb0, 0xc01536c0f0, 0xc0114e7400)
net/http.HandlerFunc.ServeHTTP(0xc011b9d240, 0x7fe87b13ddb0, 0xc01536c0f0, 0xc0114e7400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fe87b13ddb0, 0xc01536c0f0, 0xc0114e7400)
net/http.HandlerFunc.ServeHTTP(0xc010d9eba0, 0x7fe87b13ddb0, 0xc01536c0f0, 0xc0114e7400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fe87b13ddb0, 0xc01536c0f0, 0xc0114e7400)
net/http.HandlerFunc.ServeHTTP(0xc011b9d280, 0x7fe87b13ddb0, 0xc01536c0f0, 0xc0114e7400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fe87b13ddb0, 0xc01536c0f0, 0xc0114e7300)
net/http.HandlerFunc.ServeHTTP(0xc01249f040, 0x7fe87b13ddb0, 0xc01536c0f0, 0xc0114e7300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc0114b8ae0, 0xc0114144a0, 0x7374040, 0xc01536c0f0, 0xc0114e7300)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:46764]
I0513 17:39:11.398053  108115 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.306384ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46762]
I0513 17:39:11.406618  108115 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0513 17:39:11.406649  108115 healthz.go:184] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0513 17:39:11.406832  108115 wrap.go:47] GET /healthz: (991.086µs) 500
goroutine 30808 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc01434d180, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc01434d180, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0144c1f40, 0x1f4)
net/http.Error(0x7fe87b13ddb0, 0xc0145bc598, 0xc00298adc0, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fe87b13ddb0, 0xc0145bc598, 0xc0110e8600)
net/http.HandlerFunc.ServeHTTP(0xc014851b20, 0x7fe87b13ddb0, 0xc0145bc598, 0xc0110e8600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0148a8400, 0x7fe87b13ddb0, 0xc0145bc598, 0xc0110e8600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc006d06cb0, 0x7fe87b13ddb0, 0xc0145bc598, 0xc0110e8600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x4356850, 0xe, 0xc012416e10, 0xc006d06cb0, 0x7fe87b13ddb0, 0xc0145bc598, 0xc0110e8600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fe87b13ddb0, 0xc0145bc598, 0xc0110e8600)
net/http.HandlerFunc.ServeHTTP(0xc011b9d240, 0x7fe87b13ddb0, 0xc0145bc598, 0xc0110e8600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fe87b13ddb0, 0xc0145bc598, 0xc0110e8600)
net/http.HandlerFunc.ServeHTTP(0xc010d9eba0, 0x7fe87b13ddb0, 0xc0145bc598, 0xc0110e8600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fe87b13ddb0, 0xc0145bc598, 0xc0110e8600)
net/http.HandlerFunc.ServeHTTP(0xc011b9d280, 0x7fe87b13ddb0, 0xc0145bc598, 0xc0110e8600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fe87b13ddb0, 0xc0145bc598, 0xc0110e8500)
net/http.HandlerFunc.ServeHTTP(0xc01249f040, 0x7fe87b13ddb0, 0xc0145bc598, 0xc0110e8500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc01457b320, 0xc0114144a0, 0x7374040, 0xc0145bc598, 0xc0110e8500)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46762]
I0513 17:39:11.416455  108115 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (1.629116ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46762]
I0513 17:39:11.416737  108115 storage_rbac.go:284] created rolebinding.rbac.authorization.k8s.io/system:controller:cloud-provider in kube-system
I0513 17:39:11.436021  108115 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:token-cleaner: (1.199916ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46762]
I0513 17:39:11.437967  108115 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.377044ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46762]
I0513 17:39:11.456807  108115 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (1.971683ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46762]
I0513 17:39:11.457067  108115 storage_rbac.go:284] created rolebinding.rbac.authorization.k8s.io/system:controller:token-cleaner in kube-system
I0513 17:39:11.476411  108115 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/rolebindings/system:controller:bootstrap-signer: (1.574905ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46762]
I0513 17:39:11.478217  108115 wrap.go:47] GET /api/v1/namespaces/kube-public: (1.220316ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46762]
I0513 17:39:11.496711  108115 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/rolebindings: (1.857003ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46762]
I0513 17:39:11.496964  108115 storage_rbac.go:284] created rolebinding.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-public
I0513 17:39:11.497672  108115 wrap.go:47] GET /healthz: (2.033494ms) 200 [Go-http-client/1.1 127.0.0.1:46764]
W0513 17:39:11.498439  108115 mutation_detector.go:48] Mutation detector is enabled, this will result in memory leakage.
W0513 17:39:11.498517  108115 mutation_detector.go:48] Mutation detector is enabled, this will result in memory leakage.
W0513 17:39:11.498549  108115 mutation_detector.go:48] Mutation detector is enabled, this will result in memory leakage.
W0513 17:39:11.498565  108115 mutation_detector.go:48] Mutation detector is enabled, this will result in memory leakage.
W0513 17:39:11.498583  108115 mutation_detector.go:48] Mutation detector is enabled, this will result in memory leakage.
W0513 17:39:11.498594  108115 mutation_detector.go:48] Mutation detector is enabled, this will result in memory leakage.
W0513 17:39:11.498604  108115 mutation_detector.go:48] Mutation detector is enabled, this will result in memory leakage.
W0513 17:39:11.498616  108115 mutation_detector.go:48] Mutation detector is enabled, this will result in memory leakage.
W0513 17:39:11.498628  108115 mutation_detector.go:48] Mutation detector is enabled, this will result in memory leakage.
W0513 17:39:11.498646  108115 mutation_detector.go:48] Mutation detector is enabled, this will result in memory leakage.
I0513 17:39:11.498717  108115 factory.go:337] Creating scheduler from algorithm provider 'DefaultProvider'
I0513 17:39:11.498731  108115 factory.go:418] Creating scheduler with fit predicates 'map[CheckNodeCondition:{} CheckNodeDiskPressure:{} CheckNodeMemoryPressure:{} CheckNodePIDPressure:{} CheckVolumeBinding:{} GeneralPredicates:{} MatchInterPodAffinity:{} MaxAzureDiskVolumeCount:{} MaxCSIVolumeCountPred:{} MaxEBSVolumeCount:{} MaxGCEPDVolumeCount:{} NoDiskConflict:{} NoVolumeZoneConflict:{} PodToleratesNodeTaints:{}]' and priority functions 'map[BalancedResourceAllocation:{} ImageLocalityPriority:{} InterPodAffinityPriority:{} LeastRequestedPriority:{} NodeAffinityPriority:{} NodePreferAvoidPodsPriority:{} SelectorSpreadPriority:{} TaintTolerationPriority:{}]'
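The factory.go lines above list the DefaultProvider's fit predicates (filters) and priority functions (scorers). A generic filter-then-score sketch of that flow is shown below; the types and names are hypothetical illustrations, not the kube-scheduler implementation:

package main

import (
	"fmt"
	"sort"
)

type Node struct{ Name string }
type Pod struct{ Name string }

type FitPredicate func(Pod, Node) bool // e.g. PodToleratesNodeTaints
type PriorityFunc func(Pod, Node) int  // e.g. LeastRequestedPriority

// schedule filters nodes through every predicate, then picks the
// highest-scoring feasible node.
func schedule(p Pod, nodes []Node, preds []FitPredicate, prios []PriorityFunc) (Node, bool) {
	feasible := nodes[:0:0]
	for _, n := range nodes {
		ok := true
		for _, pred := range preds {
			if !pred(p, n) {
				ok = false
				break
			}
		}
		if ok {
			feasible = append(feasible, n)
		}
	}
	if len(feasible) == 0 {
		return Node{}, false
	}
	sort.Slice(feasible, func(i, j int) bool {
		return score(p, feasible[i], prios) > score(p, feasible[j], prios)
	})
	return feasible[0], true
}

func score(p Pod, n Node, prios []PriorityFunc) int {
	total := 0
	for _, f := range prios {
		total += f(p, n)
	}
	return total
}

func main() {
	nodes := []Node{{"node1"}, {"node2"}}
	alwaysFits := func(Pod, Node) bool { return true }
	preferNode1 := func(_ Pod, n Node) int {
		if n.Name == "node1" {
			return 10
		}
		return 0
	}
	if n, ok := schedule(Pod{"rpod-0"}, nodes, []FitPredicate{alwaysFits}, []PriorityFunc{preferNode1}); ok {
		fmt.Println("bind to", n.Name)
	}
}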
I0513 17:39:11.498956  108115 controller_utils.go:1029] Waiting for caches to sync for scheduler controller
I0513 17:39:11.499183  108115 reflector.go:122] Starting reflector *v1.Pod (12h0m0s) from k8s.io/kubernetes/test/integration/scheduler/util.go:209
I0513 17:39:11.499203  108115 reflector.go:160] Listing and watching *v1.Pod from k8s.io/kubernetes/test/integration/scheduler/util.go:209
I0513 17:39:11.500034  108115 wrap.go:47] GET /api/v1/pods?fieldSelector=status.phase%21%3DFailed%2Cstatus.phase%21%3DSucceeded&limit=500&resourceVersion=0: (590.315µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46764]
I0513 17:39:11.500879  108115 get.go:250] Starting watch for /api/v1/pods, rv=23813 labels= fields=status.phase!=Failed,status.phase!=Succeeded timeout=7m25s
I0513 17:39:11.506834  108115 wrap.go:47] GET /healthz: (969.074µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46762]
I0513 17:39:11.508160  108115 wrap.go:47] GET /api/v1/namespaces/default: (982.422µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46762]
I0513 17:39:11.510432  108115 wrap.go:47] POST /api/v1/namespaces: (1.745666ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46762]
I0513 17:39:11.512000  108115 wrap.go:47] GET /api/v1/namespaces/default/services/kubernetes: (1.211463ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46762]
I0513 17:39:11.515674  108115 wrap.go:47] POST /api/v1/namespaces/default/services: (3.314708ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46762]
I0513 17:39:11.517139  108115 wrap.go:47] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.097198ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46762]
I0513 17:39:11.519097  108115 wrap.go:47] POST /api/v1/namespaces/default/endpoints: (1.523669ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46762]
I0513 17:39:11.599121  108115 shared_informer.go:175] caches populated
I0513 17:39:11.599175  108115 controller_utils.go:1036] Caches are synced for scheduler controller
I0513 17:39:11.599659  108115 reflector.go:122] Starting reflector *v1.PersistentVolumeClaim (1s) from k8s.io/client-go/informers/factory.go:133
I0513 17:39:11.599685  108115 reflector.go:160] Listing and watching *v1.PersistentVolumeClaim from k8s.io/client-go/informers/factory.go:133
I0513 17:39:11.599764  108115 reflector.go:122] Starting reflector *v1.PersistentVolume (1s) from k8s.io/client-go/informers/factory.go:133
I0513 17:39:11.599792  108115 reflector.go:160] Listing and watching *v1.PersistentVolume from k8s.io/client-go/informers/factory.go:133
I0513 17:39:11.599659  108115 reflector.go:122] Starting reflector *v1.ReplicationController (1s) from k8s.io/client-go/informers/factory.go:133
I0513 17:39:11.599941  108115 reflector.go:160] Listing and watching *v1.ReplicationController from k8s.io/client-go/informers/factory.go:133
I0513 17:39:11.600267  108115 reflector.go:122] Starting reflector *v1.StatefulSet (1s) from k8s.io/client-go/informers/factory.go:133
I0513 17:39:11.600283  108115 reflector.go:160] Listing and watching *v1.StatefulSet from k8s.io/client-go/informers/factory.go:133
I0513 17:39:11.600338  108115 reflector.go:122] Starting reflector *v1beta1.PodDisruptionBudget (1s) from k8s.io/client-go/informers/factory.go:133
I0513 17:39:11.600355  108115 reflector.go:160] Listing and watching *v1beta1.PodDisruptionBudget from k8s.io/client-go/informers/factory.go:133
I0513 17:39:11.600699  108115 reflector.go:122] Starting reflector *v1.StorageClass (1s) from k8s.io/client-go/informers/factory.go:133
I0513 17:39:11.600715  108115 reflector.go:160] Listing and watching *v1.StorageClass from k8s.io/client-go/informers/factory.go:133
I0513 17:39:11.601821  108115 wrap.go:47] GET /api/v1/persistentvolumeclaims?limit=500&resourceVersion=0: (739.079µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46762]
I0513 17:39:11.601881  108115 reflector.go:122] Starting reflector *v1.Service (1s) from k8s.io/client-go/informers/factory.go:133
I0513 17:39:11.601898  108115 reflector.go:160] Listing and watching *v1.Service from k8s.io/client-go/informers/factory.go:133
I0513 17:39:11.602421  108115 wrap.go:47] GET /apis/apps/v1/statefulsets?limit=500&resourceVersion=0: (379.559µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47092]
I0513 17:39:11.602669  108115 get.go:250] Starting watch for /api/v1/persistentvolumeclaims, rv=23813 labels= fields= timeout=9m33s
I0513 17:39:11.603104  108115 wrap.go:47] GET /api/v1/replicationcontrollers?limit=500&resourceVersion=0: (384.411µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47090]
I0513 17:39:11.603313  108115 wrap.go:47] GET /apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0: (384.089µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46762]
I0513 17:39:11.603757  108115 get.go:250] Starting watch for /api/v1/replicationcontrollers, rv=23813 labels= fields= timeout=6m40s
I0513 17:39:11.604085  108115 get.go:250] Starting watch for /apis/storage.k8s.io/v1/storageclasses, rv=23813 labels= fields= timeout=8m29s
I0513 17:39:11.604104  108115 get.go:250] Starting watch for /apis/apps/v1/statefulsets, rv=23813 labels= fields= timeout=8m48s
I0513 17:39:11.604760  108115 wrap.go:47] GET /apis/policy/v1beta1/poddisruptionbudgets?limit=500&resourceVersion=0: (433.579µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47094]
I0513 17:39:11.604965  108115 wrap.go:47] GET /api/v1/persistentvolumes?limit=500&resourceVersion=0: (3.562082ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47088]
I0513 17:39:11.605361  108115 get.go:250] Starting watch for /apis/policy/v1beta1/poddisruptionbudgets, rv=23813 labels= fields= timeout=7m32s
I0513 17:39:11.605787  108115 get.go:250] Starting watch for /api/v1/persistentvolumes, rv=23813 labels= fields= timeout=7m47s
I0513 17:39:11.606675  108115 wrap.go:47] GET /api/v1/services?limit=500&resourceVersion=0: (4.24309ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47098]
I0513 17:39:11.607012  108115 reflector.go:122] Starting reflector *v1.Node (1s) from k8s.io/client-go/informers/factory.go:133
I0513 17:39:11.607033  108115 reflector.go:160] Listing and watching *v1.Node from k8s.io/client-go/informers/factory.go:133
I0513 17:39:11.607333  108115 get.go:250] Starting watch for /api/v1/services, rv=24160 labels= fields= timeout=7m50s
I0513 17:39:11.607789  108115 wrap.go:47] GET /api/v1/nodes?limit=500&resourceVersion=0: (505.219µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47100]
I0513 17:39:11.608574  108115 get.go:250] Starting watch for /api/v1/nodes, rv=23813 labels= fields= timeout=5m43s
I0513 17:39:11.608935  108115 reflector.go:122] Starting reflector *v1.ReplicaSet (1s) from k8s.io/client-go/informers/factory.go:133
I0513 17:39:11.608980  108115 reflector.go:160] Listing and watching *v1.ReplicaSet from k8s.io/client-go/informers/factory.go:133
I0513 17:39:11.609847  108115 wrap.go:47] GET /apis/apps/v1/replicasets?limit=500&resourceVersion=0: (428.174µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47102]
I0513 17:39:11.610444  108115 get.go:250] Starting watch for /apis/apps/v1/replicasets, rv=23813 labels= fields= timeout=8m54s
I0513 17:39:11.699462  108115 shared_informer.go:175] caches populated
I0513 17:39:11.799781  108115 shared_informer.go:175] caches populated
I0513 17:39:11.900033  108115 shared_informer.go:175] caches populated
I0513 17:39:12.000302  108115 shared_informer.go:175] caches populated
I0513 17:39:12.100518  108115 shared_informer.go:175] caches populated
I0513 17:39:12.200942  108115 shared_informer.go:175] caches populated
I0513 17:39:12.301096  108115 shared_informer.go:175] caches populated
I0513 17:39:12.401656  108115 shared_informer.go:175] caches populated
I0513 17:39:12.501892  108115 shared_informer.go:175] caches populated
I0513 17:39:12.602106  108115 shared_informer.go:175] caches populated
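The reflector lines above show each shared informer doing an initial LIST and then a WATCH against the apiserver, and the repeated "caches populated" lines show the test waiting for every informer cache to finish that initial LIST before the scheduler starts; the "(1s)" in the reflector lines is the resync period, which is what produces the "forcing resync" messages that follow. A minimal client-go sketch of that list-watch-then-sync pattern (the kubeconfig path and the choice of informers are illustrative, not taken from this job):

```go
package main

import (
	"fmt"
	"time"

	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/cache"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Illustrative only: build a client from a local kubeconfig.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	clientset := kubernetes.NewForConfigOrDie(cfg)

	// A 1s resync period mirrors the "(1s)" in the reflector log lines; each
	// informer's reflector does an initial LIST, then a WATCH, and re-delivers
	// its cache contents on every resync ("forcing resync").
	factory := informers.NewSharedInformerFactory(clientset, 1*time.Second)
	podInformer := factory.Core().V1().Pods().Informer()
	nodeInformer := factory.Core().V1().Nodes().Informer()

	stopCh := make(chan struct{})
	defer close(stopCh)
	factory.Start(stopCh)

	// Corresponds to the repeated "caches populated" lines: block until the
	// initial LIST for every started informer has been stored in its cache.
	if !cache.WaitForCacheSync(stopCh, podInformer.HasSynced, nodeInformer.HasSynced) {
		panic("timed out waiting for caches to sync")
	}
	fmt.Println("caches populated")
}
```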
I0513 17:39:12.602410  108115 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0513 17:39:12.603970  108115 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0513 17:39:12.605194  108115 wrap.go:47] POST /api/v1/nodes: (2.278318ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47132]
I0513 17:39:12.605669  108115 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0513 17:39:12.607239  108115 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0513 17:39:12.607433  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods: (1.791948ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47132]
I0513 17:39:12.607870  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/rpod-0
I0513 17:39:12.607894  108115 scheduler.go:452] Attempting to schedule pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/rpod-0
I0513 17:39:12.608023  108115 scheduler_binder.go:256] AssumePodVolumes for pod "preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/rpod-0", node "node1"
I0513 17:39:12.608043  108115 scheduler_binder.go:266] AssumePodVolumes for pod "preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/rpod-0", node "node1": all PVCs bound and nothing to do
I0513 17:39:12.608089  108115 factory.go:711] Attempting to bind rpod-0 to node1
I0513 17:39:12.608241  108115 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0513 17:39:12.609543  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods: (1.572259ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47132]
I0513 17:39:12.609768  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/rpod-1
I0513 17:39:12.609780  108115 scheduler.go:452] Attempting to schedule pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/rpod-1
I0513 17:39:12.609874  108115 scheduler_binder.go:256] AssumePodVolumes for pod "preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/rpod-1", node "node1"
I0513 17:39:12.609887  108115 scheduler_binder.go:266] AssumePodVolumes for pod "preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/rpod-1", node "node1": all PVCs bound and nothing to do
I0513 17:39:12.609923  108115 factory.go:711] Attempting to bind rpod-1 to node1
I0513 17:39:12.610119  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/rpod-0/binding: (1.427521ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47134]
I0513 17:39:12.610280  108115 scheduler.go:570] pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/rpod-0 is bound successfully on node node1, 1 nodes evaluated, 1 nodes were found feasible
I0513 17:39:12.613393  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/rpod-1/binding: (3.265301ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47132]
I0513 17:39:12.614063  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events: (2.976623ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47134]
I0513 17:39:12.614587  108115 scheduler.go:570] pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/rpod-1 is bound successfully on node node1, 1 nodes evaluated, 1 nodes were found feasible
I0513 17:39:12.621755  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events: (1.898081ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47132]
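The rpod-0 and rpod-1 lines show the scheduler's happy path: AssumePodVolumes finds all PVCs already bound, the node is selected, and the bind is carried out as a POST to the pod's "binding" subresource (the `POST .../pods/rpod-0/binding` requests above). A sketch of issuing such a binding with client-go follows; it illustrates the API call, not the scheduler's own binding code, and the exact `Bind` signature varies slightly between client-go releases (older ones omit the context argument):

```go
package example

import (
	"context"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// bindPodToNode posts a Binding to the pod's "binding" subresource, which is
// the request that appears above as "POST .../pods/rpod-0/binding".
func bindPodToNode(ctx context.Context, cs kubernetes.Interface, namespace, podName, nodeName string) error {
	binding := &v1.Binding{
		ObjectMeta: metav1.ObjectMeta{Name: podName, Namespace: namespace},
		Target:     v1.ObjectReference{Kind: "Node", Name: nodeName},
	}
	return cs.CoreV1().Pods(namespace).Bind(ctx, binding, metav1.CreateOptions{})
}
```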
I0513 17:39:12.712661  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/rpod-0: (2.399553ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47132]
I0513 17:39:12.815788  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/rpod-1: (1.813721ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47132]
I0513 17:39:12.816269  108115 preemption_test.go:561] Creating the preemptor pod...
I0513 17:39:12.818639  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods: (2.127ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47132]
I0513 17:39:12.818793  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/preemptor-pod
I0513 17:39:12.818809  108115 scheduler.go:452] Attempting to schedule pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/preemptor-pod
I0513 17:39:12.818828  108115 preemption_test.go:567] Creating additional pods...
I0513 17:39:12.818916  108115 factory.go:649] Unable to schedule preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 17:39:12.818964  108115 factory.go:720] Updating pod condition for preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
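At this point the preemptor pod fails the resource predicates on the single node ("0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory"), gets its PodScheduled condition set to Unschedulable, and the scheduler then considers preemption because the preemptor outranks the running rpods. A sketch of what such a high-priority, resource-hungry pod might look like; the priority class name, image, and request sizes are illustrative assumptions, not values taken from the test:

```go
package example

import (
	v1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// preemptorPod sketches a pod that cannot fit on an already-full node
// ("Insufficient cpu, Insufficient memory") but may preempt lower-priority
// pods because it references a higher PriorityClass.
func preemptorPod(namespace string) *v1.Pod {
	return &v1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "preemptor-pod", Namespace: namespace},
		Spec: v1.PodSpec{
			// Assumed PriorityClass; it must exist in the cluster.
			PriorityClassName: "high-priority",
			Containers: []v1.Container{{
				Name:  "pause",
				Image: "k8s.gcr.io/pause:3.1",
				Resources: v1.ResourceRequirements{
					Requests: v1.ResourceList{
						v1.ResourceCPU:    resource.MustParse("400m"),
						v1.ResourceMemory: resource.MustParse("400Mi"),
					},
				},
			}},
		},
	}
}
```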
I0513 17:39:12.821875  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/preemptor-pod: (1.430065ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47136]
I0513 17:39:12.822333  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events: (2.464693ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47138]
I0513 17:39:12.822408  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods: (3.395709ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47132]
I0513 17:39:12.822684  108115 wrap.go:47] PUT /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/preemptor-pod/status: (2.867763ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47134]
I0513 17:39:12.824822  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/preemptor-pod: (1.588169ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47134]
I0513 17:39:12.824833  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods: (1.950729ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47138]
I0513 17:39:12.825113  108115 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
E0513 17:39:12.825213  108115 utils.go:79] pod.Status.StartTime is nil for pod rpod-1. Should not reach here.
E0513 17:39:12.825219  108115 utils.go:79] pod.Status.StartTime is nil for pod rpod-0. Should not reach here.
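"Node node1 is a potential node for preemption" means the scheduler concluded that evicting some lower-priority pods on node1 could free enough room for the preemptor; the two StartTime errors come from a test utility inspecting rpod-0/rpod-1 before their status has been populated. A deliberately simplified, self-contained sketch of the victim-selection idea (plain types and integer resource units; this is not the algorithm in pkg/scheduler):

```go
package example

import "sort"

// pod is a toy model: priority plus flat CPU/memory requests.
type pod struct {
	name     string
	priority int32
	cpuMilli int64
	memBytes int64
}

// victimsFor removes the lowest-priority pods from a node until the
// preemptor's request fits, returning nil if it can never fit.
func victimsFor(preemptor pod, running []pod, allocCPU, allocMem int64) []pod {
	usedCPU, usedMem := int64(0), int64(0)
	for _, p := range running {
		usedCPU += p.cpuMilli
		usedMem += p.memBytes
	}

	// Only pods with lower priority than the preemptor may be evicted;
	// try the lowest-priority ones first.
	candidates := make([]pod, 0, len(running))
	for _, p := range running {
		if p.priority < preemptor.priority {
			candidates = append(candidates, p)
		}
	}
	sort.Slice(candidates, func(i, j int) bool { return candidates[i].priority < candidates[j].priority })

	var victims []pod
	for _, v := range candidates {
		if allocCPU-usedCPU >= preemptor.cpuMilli && allocMem-usedMem >= preemptor.memBytes {
			break
		}
		victims = append(victims, v)
		usedCPU -= v.cpuMilli
		usedMem -= v.memBytes
	}
	if allocCPU-usedCPU < preemptor.cpuMilli || allocMem-usedMem < preemptor.memBytes {
		// Even with every eligible victim gone the preemptor does not fit,
		// so this node is not a viable preemption target.
		return nil
	}
	return victims
}
```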
I0513 17:39:12.827080  108115 wrap.go:47] PUT /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/preemptor-pod/status: (1.573339ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47136]
I0513 17:39:12.827225  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods: (2.019992ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47138]
I0513 17:39:12.829123  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods: (1.288025ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47136]
I0513 17:39:12.830859  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods: (1.417115ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47136]
I0513 17:39:12.832099  108115 wrap.go:47] DELETE /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/rpod-0: (4.591809ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47138]
I0513 17:39:12.833127  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods: (1.74961ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47136]
I0513 17:39:12.833239  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-0
I0513 17:39:12.833252  108115 scheduler.go:452] Attempting to schedule pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-0
I0513 17:39:12.833361  108115 factory.go:649] Unable to schedule preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-0: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 17:39:12.833397  108115 factory.go:720] Updating pod condition for preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-0 to (PodScheduled==False, Reason=Unschedulable)
I0513 17:39:12.834265  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events: (1.801472ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47138]
I0513 17:39:12.835643  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods: (1.138013ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47142]
I0513 17:39:12.835996  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-0: (1.795488ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47140]
I0513 17:39:12.836241  108115 wrap.go:47] PUT /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-0/status: (2.309046ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47136]
I0513 17:39:12.836890  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events: (1.511365ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47144]
I0513 17:39:12.837222  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods: (1.20337ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47142]
I0513 17:39:12.837643  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-0: (845.278µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47140]
I0513 17:39:12.837843  108115 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 17:39:12.838086  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-1
I0513 17:39:12.838105  108115 scheduler.go:452] Attempting to schedule pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-1
I0513 17:39:12.838221  108115 factory.go:649] Unable to schedule preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-1: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 17:39:12.838271  108115 factory.go:720] Updating pod condition for preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-1 to (PodScheduled==False, Reason=Unschedulable)
I0513 17:39:12.838876  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods: (1.263878ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47144]
I0513 17:39:12.841063  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events: (2.062588ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47146]
I0513 17:39:12.841490  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods: (2.108329ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47144]
I0513 17:39:12.841494  108115 wrap.go:47] PUT /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-1/status: (3.029718ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47140]
I0513 17:39:12.843294  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-1: (1.311365ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47140]
I0513 17:39:12.843549  108115 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 17:39:12.843696  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-2
I0513 17:39:12.843711  108115 scheduler.go:452] Attempting to schedule pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-2
I0513 17:39:12.843803  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-1: (1.437254ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47138]
I0513 17:39:12.843827  108115 factory.go:649] Unable to schedule preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-2: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 17:39:12.843862  108115 factory.go:720] Updating pod condition for preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-2 to (PodScheduled==False, Reason=Unschedulable)
E0513 17:39:12.844458  108115 factory.go:686] pod is already present in the activeQ
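The "pod is already present in the activeQ" errors here are benign races: a pod's status update (the Unschedulable condition PUT) arrives while the pod is still sitting in the active scheduling queue, and the queue rejects the duplicate add. A toy keyed queue below illustrates that dedup-by-key behavior; it is an illustration of the property, not the scheduler's priority-queue implementation:

```go
package example

import "fmt"

// keyedQueue is a FIFO whose Add rejects keys that are already queued.
type keyedQueue struct {
	items map[string]struct{}
	order []string
}

func newKeyedQueue() *keyedQueue {
	return &keyedQueue{items: map[string]struct{}{}}
}

// Add enqueues key once; a second Add before Pop returns an error, analogous
// to the "pod is already present in the activeQ" messages above.
func (q *keyedQueue) Add(key string) error {
	if _, ok := q.items[key]; ok {
		return fmt.Errorf("pod %q is already present in the activeQ", key)
	}
	q.items[key] = struct{}{}
	q.order = append(q.order, key)
	return nil
}

// Pop removes and returns the oldest key, if any.
func (q *keyedQueue) Pop() (string, bool) {
	if len(q.order) == 0 {
		return "", false
	}
	key := q.order[0]
	q.order = q.order[1:]
	delete(q.items, key)
	return key, true
}
```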
I0513 17:39:12.846421  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods: (2.08721ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47140]
I0513 17:39:12.846444  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events: (1.850743ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47138]
I0513 17:39:12.846771  108115 wrap.go:47] PUT /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-2/status: (2.174187ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47148]
I0513 17:39:12.847150  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-2: (2.934739ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47146]
E0513 17:39:12.848096  108115 factory.go:686] pod is already present in the activeQ
I0513 17:39:12.848577  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods: (1.345554ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47138]
I0513 17:39:12.848918  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-2: (1.059896ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47148]
I0513 17:39:12.849549  108115 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 17:39:12.849716  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-3
I0513 17:39:12.849743  108115 scheduler.go:452] Attempting to schedule pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-3
I0513 17:39:12.849852  108115 factory.go:649] Unable to schedule preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-3: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 17:39:12.849896  108115 factory.go:720] Updating pod condition for preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-3 to (PodScheduled==False, Reason=Unschedulable)
I0513 17:39:12.851918  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods: (2.348213ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47146]
I0513 17:39:12.852593  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-3: (2.137452ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47152]
I0513 17:39:12.852738  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events: (2.240512ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47154]
I0513 17:39:12.852809  108115 wrap.go:47] PUT /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-3/status: (2.681425ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47150]
I0513 17:39:12.854104  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-3: (1.004408ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47150]
I0513 17:39:12.854424  108115 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 17:39:12.854667  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-4
I0513 17:39:12.854720  108115 scheduler.go:452] Attempting to schedule pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-4
I0513 17:39:12.854820  108115 factory.go:649] Unable to schedule preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-4: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 17:39:12.854869  108115 factory.go:720] Updating pod condition for preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-4 to (PodScheduled==False, Reason=Unschedulable)
I0513 17:39:12.856037  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods: (1.736816ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47146]
I0513 17:39:12.857419  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-4: (1.794056ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47152]
I0513 17:39:12.857706  108115 wrap.go:47] PUT /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-4/status: (2.033193ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47150]
I0513 17:39:12.858762  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events: (1.791924ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47156]
I0513 17:39:12.859407  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-4: (1.402999ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47150]
I0513 17:39:12.859573  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods: (2.334397ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47146]
I0513 17:39:12.859683  108115 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 17:39:12.872707  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-5
I0513 17:39:12.872740  108115 scheduler.go:452] Attempting to schedule pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-5
I0513 17:39:12.872889  108115 factory.go:649] Unable to schedule preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-5: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 17:39:12.872980  108115 factory.go:720] Updating pod condition for preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-5 to (PodScheduled==False, Reason=Unschedulable)
I0513 17:39:12.874186  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods: (2.621811ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47156]
I0513 17:39:12.876261  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events: (2.588623ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47158]
I0513 17:39:12.876483  108115 wrap.go:47] PUT /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-5/status: (3.148635ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47152]
I0513 17:39:12.878399  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods: (1.945465ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47156]
I0513 17:39:12.878408  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-5: (1.965144ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47160]
E0513 17:39:12.878646  108115 factory.go:686] pod is already present in the activeQ
I0513 17:39:12.878922  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-5: (1.987083ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47152]
I0513 17:39:12.879098  108115 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 17:39:12.879567  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-6
I0513 17:39:12.879584  108115 scheduler.go:452] Attempting to schedule pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-6
I0513 17:39:12.879699  108115 factory.go:649] Unable to schedule preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-6: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 17:39:12.879735  108115 factory.go:720] Updating pod condition for preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-6 to (PodScheduled==False, Reason=Unschedulable)
I0513 17:39:12.882878  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events: (2.222165ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47162]
I0513 17:39:12.883712  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-6: (3.462004ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47158]
I0513 17:39:12.884323  108115 wrap.go:47] PUT /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-6/status: (4.056187ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47152]
I0513 17:39:12.885613  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods: (6.80169ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47160]
I0513 17:39:12.887116  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-6: (2.247032ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47158]
I0513 17:39:12.887578  108115 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 17:39:12.888334  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods: (2.197534ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47160]
I0513 17:39:12.889138  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-7
I0513 17:39:12.889152  108115 scheduler.go:452] Attempting to schedule pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-7
I0513 17:39:12.889305  108115 factory.go:649] Unable to schedule preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-7: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 17:39:12.889352  108115 factory.go:720] Updating pod condition for preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-7 to (PodScheduled==False, Reason=Unschedulable)
I0513 17:39:12.892767  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-7: (1.060866ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47164]
I0513 17:39:12.893035  108115 wrap.go:47] PUT /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-7/status: (2.216502ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47162]
E0513 17:39:12.893800  108115 factory.go:686] pod is already present in the activeQ
I0513 17:39:12.894820  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-7: (1.068091ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47162]
I0513 17:39:12.895052  108115 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 17:39:12.895618  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events: (5.439469ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47166]
I0513 17:39:12.895974  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-8
I0513 17:39:12.896000  108115 scheduler.go:452] Attempting to schedule pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-8
I0513 17:39:12.896119  108115 factory.go:649] Unable to schedule preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-8: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 17:39:12.896172  108115 factory.go:720] Updating pod condition for preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-8 to (PodScheduled==False, Reason=Unschedulable)
I0513 17:39:12.898829  108115 wrap.go:47] PUT /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-8/status: (2.197197ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47162]
I0513 17:39:12.899093  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-8: (2.662398ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47164]
I0513 17:39:12.899709  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events: (2.779107ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47168]
E0513 17:39:12.899917  108115 factory.go:686] pod is already present in the activeQ
I0513 17:39:12.900756  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-8: (1.075104ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47162]
I0513 17:39:12.901023  108115 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 17:39:12.901189  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-1
I0513 17:39:12.901205  108115 scheduler.go:452] Attempting to schedule pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-1
I0513 17:39:12.901344  108115 factory.go:649] Unable to schedule preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-1: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 17:39:12.901382  108115 factory.go:720] Updating pod condition for preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-1 to (PodScheduled==False, Reason=Unschedulable)
I0513 17:39:12.903890  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-1: (2.328222ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47168]
I0513 17:39:12.904085  108115 wrap.go:47] PATCH /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events/ppod-1.159e4eca84fc5803: (1.954904ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47170]
I0513 17:39:12.904143  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-1: (2.279222ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47164]
I0513 17:39:12.904329  108115 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
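Note the PATCH requests against existing Event objects (e.g. `events/ppod-1.159e4eca84fc5803`): when the same FailedScheduling event repeats for a pod, the event recorder updates the existing Event (bumping its count) rather than POSTing a new one. A sketch of wiring up such a recorder with client-go; the component name is an illustrative assumption:

```go
package example

import (
	v1 "k8s.io/api/core/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/kubernetes/scheme"
	typedcorev1 "k8s.io/client-go/kubernetes/typed/core/v1"
	"k8s.io/client-go/tools/record"
)

// newRecorder returns an EventRecorder that writes Events to the apiserver.
// Repeated identical events are correlated, which shows up in the log as
// PATCHes to the existing Event instead of new POSTs.
func newRecorder(cs kubernetes.Interface) record.EventRecorder {
	broadcaster := record.NewBroadcaster()
	broadcaster.StartRecordingToSink(&typedcorev1.EventSinkImpl{Interface: cs.CoreV1().Events("")})
	return broadcaster.NewRecorder(scheme.Scheme, v1.EventSource{Component: "scheduler-example"})
}
```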
I0513 17:39:12.904835  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-9
I0513 17:39:12.904890  108115 scheduler.go:452] Attempting to schedule pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-9
I0513 17:39:12.905018  108115 factory.go:649] Unable to schedule preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-9: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 17:39:12.905104  108115 factory.go:720] Updating pod condition for preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-9 to (PodScheduled==False, Reason=Unschedulable)
I0513 17:39:12.907435  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-9: (2.062056ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47168]
I0513 17:39:12.907530  108115 wrap.go:47] PUT /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-9/status: (1.820973ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47170]
I0513 17:39:12.909284  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-9: (1.257815ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47170]
I0513 17:39:12.909546  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events: (3.615362ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47172]
I0513 17:39:12.909546  108115 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 17:39:12.909810  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-2
I0513 17:39:12.909826  108115 scheduler.go:452] Attempting to schedule pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-2
I0513 17:39:12.909922  108115 factory.go:649] Unable to schedule preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-2: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 17:39:12.909957  108115 factory.go:720] Updating pod condition for preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-2 to (PodScheduled==False, Reason=Unschedulable)
I0513 17:39:12.912813  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-2: (2.070359ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47168]
I0513 17:39:12.912965  108115 wrap.go:47] PATCH /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events/ppod-2.159e4eca85519e04: (2.226769ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47174]
I0513 17:39:12.913205  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-2: (2.204409ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47170]
I0513 17:39:12.913487  108115 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 17:39:12.914034  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-10
I0513 17:39:12.914067  108115 scheduler.go:452] Attempting to schedule pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-10
I0513 17:39:12.914165  108115 factory.go:649] Unable to schedule preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-10: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 17:39:12.914208  108115 factory.go:720] Updating pod condition for preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-10 to (PodScheduled==False, Reason=Unschedulable)
I0513 17:39:12.915532  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-10: (1.072897ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47170]
I0513 17:39:12.916867  108115 wrap.go:47] PUT /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-10/status: (2.208938ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47174]
I0513 17:39:12.918305  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events: (2.415456ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47176]
I0513 17:39:12.919236  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-10: (1.321042ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47174]
I0513 17:39:12.919838  108115 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 17:39:12.920145  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-11
I0513 17:39:12.920171  108115 scheduler.go:452] Attempting to schedule pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-11
I0513 17:39:12.920418  108115 factory.go:649] Unable to schedule preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-11: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 17:39:12.920499  108115 factory.go:720] Updating pod condition for preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-11 to (PodScheduled==False, Reason=Unschedulable)
I0513 17:39:12.923240  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods: (22.499264ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47158]
I0513 17:39:12.925523  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events: (4.160287ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47178]
I0513 17:39:12.927950  108115 wrap.go:47] PUT /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-11/status: (5.772228ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47176]
I0513 17:39:12.928217  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-11: (6.316191ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47170]
I0513 17:39:12.930146  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods: (6.302924ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47158]
I0513 17:39:12.931041  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-11: (2.245268ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47170]
I0513 17:39:12.932295  108115 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 17:39:12.932457  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-12
I0513 17:39:12.932489  108115 scheduler.go:452] Attempting to schedule pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-12
I0513 17:39:12.932592  108115 factory.go:649] Unable to schedule preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-12: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 17:39:12.932643  108115 factory.go:720] Updating pod condition for preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-12 to (PodScheduled==False, Reason=Unschedulable)
I0513 17:39:12.936392  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events: (1.859341ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47180]
I0513 17:39:12.938218  108115 wrap.go:47] PUT /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-12/status: (4.625891ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47170]
I0513 17:39:12.939365  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods: (7.810164ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47158]
I0513 17:39:12.939981  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-12: (1.143887ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47170]
I0513 17:39:12.940263  108115 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 17:39:12.940486  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-12: (7.216104ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47178]
I0513 17:39:12.941657  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-13
I0513 17:39:12.941710  108115 scheduler.go:452] Attempting to schedule pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-13
I0513 17:39:12.941858  108115 factory.go:649] Unable to schedule preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-13: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 17:39:12.941920  108115 factory.go:720] Updating pod condition for preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-13 to (PodScheduled==False, Reason=Unschedulable)
I0513 17:39:12.941999  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods: (2.029841ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47158]
E0513 17:39:12.941710  108115 factory.go:686] pod is already present in the activeQ
I0513 17:39:12.944554  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-13: (1.446409ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47158]
I0513 17:39:12.946325  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events: (3.479632ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47180]
I0513 17:39:12.946754  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods: (2.702293ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47182]
I0513 17:39:12.948907  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods: (1.693971ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47180]
I0513 17:39:12.951108  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods: (1.820287ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47180]
I0513 17:39:12.952292  108115 wrap.go:47] PUT /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-13/status: (10.10841ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47170]
I0513 17:39:12.955057  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods: (2.89204ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47180]
I0513 17:39:12.955225  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-13: (2.360272ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47170]
I0513 17:39:12.956008  108115 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 17:39:12.956310  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-14
I0513 17:39:12.956368  108115 scheduler.go:452] Attempting to schedule pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-14
I0513 17:39:12.956543  108115 factory.go:649] Unable to schedule preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-14: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 17:39:12.956639  108115 factory.go:720] Updating pod condition for preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-14 to (PodScheduled==False, Reason=Unschedulable)
I0513 17:39:12.958658  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events: (1.71089ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47158]
I0513 17:39:12.961360  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-14: (1.929271ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47158]
I0513 17:39:12.961780  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods: (5.721112ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47180]
I0513 17:39:12.962154  108115 wrap.go:47] PUT /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-14/status: (4.849991ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47184]
E0513 17:39:12.963689  108115 factory.go:686] pod is already present in the activeQ
I0513 17:39:12.966285  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-14: (2.352239ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47184]
I0513 17:39:12.966664  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods: (2.862364ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47180]
I0513 17:39:12.967281  108115 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 17:39:12.967531  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-15
I0513 17:39:12.967572  108115 scheduler.go:452] Attempting to schedule pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-15
I0513 17:39:12.967697  108115 factory.go:649] Unable to schedule preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-15: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 17:39:12.967772  108115 factory.go:720] Updating pod condition for preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-15 to (PodScheduled==False, Reason=Unschedulable)
I0513 17:39:12.972020  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events: (3.239117ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47188]
I0513 17:39:12.973632  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-15: (3.664655ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47186]
I0513 17:39:12.974224  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods: (3.310603ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47158]
I0513 17:39:12.974491  108115 wrap.go:47] PUT /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-15/status: (4.24395ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47184]
I0513 17:39:12.979423  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-15: (2.822273ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47186]
I0513 17:39:12.979866  108115 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 17:39:12.980110  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-5
I0513 17:39:12.980127  108115 scheduler.go:452] Attempting to schedule pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-5
I0513 17:39:12.980218  108115 factory.go:649] Unable to schedule preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-5: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 17:39:12.980265  108115 factory.go:720] Updating pod condition for preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-5 to (PodScheduled==False, Reason=Unschedulable)
I0513 17:39:12.980357  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods: (4.355739ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47158]
I0513 17:39:12.982360  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-5: (1.779219ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47188]
I0513 17:39:12.987531  108115 wrap.go:47] PATCH /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events/ppod-5.159e4eca870debe9: (5.646984ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47158]
I0513 17:39:12.987833  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-5: (6.773123ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47186]
I0513 17:39:12.988123  108115 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 17:39:12.988343  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-16
I0513 17:39:12.988356  108115 scheduler.go:452] Attempting to schedule pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-16
I0513 17:39:12.988439  108115 factory.go:649] Unable to schedule preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-16: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 17:39:12.988490  108115 factory.go:720] Updating pod condition for preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-16 to (PodScheduled==False, Reason=Unschedulable)
I0513 17:39:12.989955  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods: (8.459934ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47190]
I0513 17:39:12.992211  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events: (3.022734ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47192]
I0513 17:39:12.992241  108115 wrap.go:47] PUT /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-16/status: (3.518692ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47186]
I0513 17:39:12.992413  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-16: (3.266906ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47188]
I0513 17:39:12.993541  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods: (2.998313ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47190]
E0513 17:39:12.994600  108115 factory.go:686] pod is already present in the activeQ
I0513 17:39:12.996210  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-16: (1.505095ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47186]
I0513 17:39:12.997072  108115 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 17:39:12.997360  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-17
I0513 17:39:12.997413  108115 scheduler.go:452] Attempting to schedule pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-17
I0513 17:39:12.997609  108115 factory.go:649] Unable to schedule preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-17: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 17:39:12.997691  108115 factory.go:720] Updating pod condition for preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-17 to (PodScheduled==False, Reason=Unschedulable)
I0513 17:39:13.001332  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events: (2.888543ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47194]
I0513 17:39:13.003419  108115 wrap.go:47] PUT /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-17/status: (4.521854ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47186]
I0513 17:39:13.003792  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods: (7.484796ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47190]
I0513 17:39:13.004147  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-17: (5.572923ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47192]
E0513 17:39:13.005188  108115 factory.go:686] pod is already present in the activeQ
I0513 17:39:13.007639  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-17: (1.722934ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47194]
I0513 17:39:13.007970  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods: (2.398873ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47192]
I0513 17:39:13.009492  108115 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 17:39:13.009715  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-18
I0513 17:39:13.009733  108115 scheduler.go:452] Attempting to schedule pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-18
I0513 17:39:13.009857  108115 factory.go:649] Unable to schedule preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-18: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 17:39:13.009899  108115 factory.go:720] Updating pod condition for preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-18 to (PodScheduled==False, Reason=Unschedulable)
I0513 17:39:13.010181  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods: (1.581677ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47192]
I0513 17:39:13.013277  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-18: (1.548128ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47196]
I0513 17:39:13.013537  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events: (2.284358ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47192]
I0513 17:39:13.013774  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods: (2.615685ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47198]
I0513 17:39:13.013845  108115 wrap.go:47] PUT /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-18/status: (3.27589ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47194]
E0513 17:39:13.014681  108115 factory.go:686] pod is already present in the activeQ
I0513 17:39:13.017755  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods: (2.781805ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47196]
I0513 17:39:13.017891  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-18: (3.03513ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47192]
I0513 17:39:13.018653  108115 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 17:39:13.019214  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-7
I0513 17:39:13.019236  108115 scheduler.go:452] Attempting to schedule pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-7
I0513 17:39:13.019421  108115 factory.go:649] Unable to schedule preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-7: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 17:39:13.019551  108115 factory.go:720] Updating pod condition for preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-7 to (PodScheduled==False, Reason=Unschedulable)
I0513 17:39:13.020084  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods: (1.734235ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47196]
I0513 17:39:13.022239  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods: (1.819951ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47196]
I0513 17:39:13.023828  108115 wrap.go:47] PATCH /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events/ppod-7.159e4eca8807bf0e: (2.115048ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47200]
I0513 17:39:13.026850  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-7: (3.906976ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47196]
I0513 17:39:13.027070  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods: (3.650998ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47202]
I0513 17:39:13.027325  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-7: (5.794324ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47192]
I0513 17:39:13.027598  108115 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 17:39:13.027781  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-8
I0513 17:39:13.027825  108115 scheduler.go:452] Attempting to schedule pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-8
I0513 17:39:13.027966  108115 factory.go:649] Unable to schedule preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-8: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 17:39:13.028040  108115 factory.go:720] Updating pod condition for preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-8 to (PodScheduled==False, Reason=Unschedulable)
I0513 17:39:13.029523  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-8: (1.269239ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47202]
I0513 17:39:13.029874  108115 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 17:39:13.030252  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-19
I0513 17:39:13.030281  108115 scheduler.go:452] Attempting to schedule pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-19
I0513 17:39:13.030386  108115 factory.go:649] Unable to schedule preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-19: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 17:39:13.030423  108115 factory.go:720] Updating pod condition for preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-19 to (PodScheduled==False, Reason=Unschedulable)
I0513 17:39:13.030636  108115 wrap.go:47] PATCH /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events/ppod-8.159e4eca886fa5b1: (1.8086ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47206]
I0513 17:39:13.032950  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-8: (3.955275ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47200]
I0513 17:39:13.033204  108115 wrap.go:47] PUT /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-19/status: (2.37921ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47202]
I0513 17:39:13.032950  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-19: (1.892051ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47206]
E0513 17:39:13.033750  108115 factory.go:686] pod is already present in the activeQ
I0513 17:39:13.035285  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events: (3.778815ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47210]
I0513 17:39:13.035654  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-19: (1.874337ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47200]
I0513 17:39:13.036499  108115 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 17:39:13.036853  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods: (1.691922ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47206]
I0513 17:39:13.037056  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-20
I0513 17:39:13.037070  108115 scheduler.go:452] Attempting to schedule pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-20
I0513 17:39:13.037162  108115 factory.go:649] Unable to schedule preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-20: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 17:39:13.037197  108115 factory.go:720] Updating pod condition for preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-20 to (PodScheduled==False, Reason=Unschedulable)
I0513 17:39:13.039400  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events: (1.489332ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47212]
I0513 17:39:13.039875  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-20: (2.284347ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47206]
I0513 17:39:13.040201  108115 wrap.go:47] PUT /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-20/status: (2.576008ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47200]
I0513 17:39:13.040295  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods: (2.21221ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47208]
I0513 17:39:13.041584  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-20: (980.2µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47206]
I0513 17:39:13.042356  108115 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 17:39:13.042541  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-12
I0513 17:39:13.042557  108115 scheduler.go:452] Attempting to schedule pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-12
I0513 17:39:13.042662  108115 factory.go:649] Unable to schedule preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-12: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 17:39:13.042682  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods: (1.985185ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47212]
I0513 17:39:13.042696  108115 factory.go:720] Updating pod condition for preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-12 to (PodScheduled==False, Reason=Unschedulable)
I0513 17:39:13.045683  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-12: (2.380198ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47212]
I0513 17:39:13.045755  108115 wrap.go:47] PATCH /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events/ppod-12.159e4eca8a9c491b: (2.359569ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47216]
I0513 17:39:13.046227  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods: (2.935906ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47214]
I0513 17:39:13.046431  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-12: (3.424615ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47206]
I0513 17:39:13.046665  108115 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 17:39:13.046860  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-21
I0513 17:39:13.046875  108115 scheduler.go:452] Attempting to schedule pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-21
I0513 17:39:13.046960  108115 factory.go:649] Unable to schedule preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-21: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 17:39:13.046998  108115 factory.go:720] Updating pod condition for preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-21 to (PodScheduled==False, Reason=Unschedulable)
I0513 17:39:13.048483  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-21: (933.528µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47218]
I0513 17:39:13.048882  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events: (1.331529ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47220]
I0513 17:39:13.049632  108115 wrap.go:47] PUT /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-21/status: (2.405656ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47216]
I0513 17:39:13.049881  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods: (3.264555ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47212]
I0513 17:39:13.051666  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-21: (1.668715ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47220]
I0513 17:39:13.051830  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods: (1.596022ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47212]
I0513 17:39:13.051882  108115 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 17:39:13.052334  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-22
I0513 17:39:13.052352  108115 scheduler.go:452] Attempting to schedule pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-22
I0513 17:39:13.052445  108115 factory.go:649] Unable to schedule preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-22: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 17:39:13.052498  108115 factory.go:720] Updating pod condition for preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-22 to (PodScheduled==False, Reason=Unschedulable)
I0513 17:39:13.054999  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events: (1.5988ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47224]
I0513 17:39:13.056227  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods: (3.864994ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47220]
I0513 17:39:13.058958  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods: (2.381804ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47220]
I0513 17:39:13.059327  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-22: (6.231864ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47222]
I0513 17:39:13.059669  108115 wrap.go:47] PUT /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-22/status: (6.563356ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47218]
I0513 17:39:13.062661  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-22: (1.60642ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47218]
I0513 17:39:13.062701  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods: (2.674548ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47222]
I0513 17:39:13.063039  108115 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 17:39:13.063195  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-23
I0513 17:39:13.063213  108115 scheduler.go:452] Attempting to schedule pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-23
I0513 17:39:13.063292  108115 factory.go:649] Unable to schedule preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-23: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 17:39:13.063333  108115 factory.go:720] Updating pod condition for preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-23 to (PodScheduled==False, Reason=Unschedulable)
I0513 17:39:13.064909  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-23: (1.136524ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47224]
I0513 17:39:13.067414  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events: (2.319124ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47226]
I0513 17:39:13.069768  108115 wrap.go:47] PUT /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-23/status: (5.752955ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47222]
I0513 17:39:13.071683  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-23: (1.208446ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47226]
I0513 17:39:13.072006  108115 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 17:39:13.072226  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-24
I0513 17:39:13.072253  108115 scheduler.go:452] Attempting to schedule pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-24
I0513 17:39:13.072364  108115 factory.go:649] Unable to schedule preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-24: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 17:39:13.072413  108115 factory.go:720] Updating pod condition for preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-24 to (PodScheduled==False, Reason=Unschedulable)
I0513 17:39:13.075785  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events: (2.555898ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47228]
I0513 17:39:13.077157  108115 wrap.go:47] PUT /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-24/status: (4.198099ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47226]
I0513 17:39:13.078017  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-24: (5.293107ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47224]
I0513 17:39:13.079223  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-24: (1.66524ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47226]
I0513 17:39:13.080315  108115 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 17:39:13.080604  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-25
I0513 17:39:13.080647  108115 scheduler.go:452] Attempting to schedule pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-25
I0513 17:39:13.080770  108115 factory.go:649] Unable to schedule preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-25: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 17:39:13.080837  108115 factory.go:720] Updating pod condition for preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-25 to (PodScheduled==False, Reason=Unschedulable)
I0513 17:39:13.083566  108115 wrap.go:47] PUT /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-25/status: (2.397864ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47224]
I0513 17:39:13.085837  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-25: (1.736801ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47224]
I0513 17:39:13.086164  108115 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 17:39:13.086356  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-26
I0513 17:39:13.086371  108115 scheduler.go:452] Attempting to schedule pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-26
I0513 17:39:13.086487  108115 factory.go:649] Unable to schedule preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-26: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 17:39:13.086594  108115 factory.go:720] Updating pod condition for preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-26 to (PodScheduled==False, Reason=Unschedulable)
I0513 17:39:13.088055  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events: (3.316616ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47228]
I0513 17:39:13.090427  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events: (1.617536ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47228]
I0513 17:39:13.091237  108115 wrap.go:47] PUT /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-26/status: (4.266905ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47224]
I0513 17:39:13.091418  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-26: (4.124246ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47230]
I0513 17:39:13.092056  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-25: (1.814759ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47232]
E0513 17:39:13.092191  108115 factory.go:686] pod is already present in the activeQ
E0513 17:39:13.092382  108115 factory.go:686] pod is already present in the activeQ
I0513 17:39:13.093335  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-26: (1.047051ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47224]
I0513 17:39:13.093634  108115 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 17:39:13.093833  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-27
I0513 17:39:13.093937  108115 scheduler.go:452] Attempting to schedule pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-27
I0513 17:39:13.094073  108115 factory.go:649] Unable to schedule preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-27: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 17:39:13.094134  108115 factory.go:720] Updating pod condition for preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-27 to (PodScheduled==False, Reason=Unschedulable)
I0513 17:39:13.095707  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-27: (1.292489ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47228]
I0513 17:39:13.096554  108115 wrap.go:47] PUT /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-27/status: (2.184493ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47232]
I0513 17:39:13.097479  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events: (1.403332ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47228]
I0513 17:39:13.098002  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-27: (1.158171ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47232]
I0513 17:39:13.101661  108115 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 17:39:13.101892  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-14
I0513 17:39:13.101931  108115 scheduler.go:452] Attempting to schedule pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-14
I0513 17:39:13.102064  108115 factory.go:649] Unable to schedule preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-14: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 17:39:13.102136  108115 factory.go:720] Updating pod condition for preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-14 to (PodScheduled==False, Reason=Unschedulable)
I0513 17:39:13.104538  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-14: (1.468698ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47234]
I0513 17:39:13.104979  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-14: (1.929806ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47228]
I0513 17:39:13.105400  108115 wrap.go:47] PATCH /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events/ppod-14.159e4eca8c0a65cd: (2.242695ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47236]
I0513 17:39:13.105705  108115 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 17:39:13.108499  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-28
I0513 17:39:13.108539  108115 scheduler.go:452] Attempting to schedule pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-28
I0513 17:39:13.108670  108115 factory.go:649] Unable to schedule preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-28: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 17:39:13.108713  108115 factory.go:720] Updating pod condition for preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-28 to (PodScheduled==False, Reason=Unschedulable)
I0513 17:39:13.111287  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events: (1.618619ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47238]
I0513 17:39:13.112027  108115 wrap.go:47] PUT /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-28/status: (2.46775ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47228]
I0513 17:39:13.112292  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-28: (2.618112ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47234]
E0513 17:39:13.112625  108115 factory.go:686] pod is already present in the activeQ
I0513 17:39:13.113631  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-28: (1.100188ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47228]
I0513 17:39:13.113907  108115 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 17:39:13.114081  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-29
I0513 17:39:13.114101  108115 scheduler.go:452] Attempting to schedule pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-29
I0513 17:39:13.114233  108115 factory.go:649] Unable to schedule preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-29: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 17:39:13.114278  108115 factory.go:720] Updating pod condition for preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-29 to (PodScheduled==False, Reason=Unschedulable)
I0513 17:39:13.117089  108115 wrap.go:47] PUT /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-29/status: (2.166244ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47234]
I0513 17:39:13.117219  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events: (2.009853ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47240]
I0513 17:39:13.117410  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-29: (2.536795ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47238]
E0513 17:39:13.118438  108115 factory.go:686] pod is already present in the activeQ
I0513 17:39:13.119585  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-29: (1.007123ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47240]
I0513 17:39:13.119969  108115 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 17:39:13.120143  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-30
I0513 17:39:13.120180  108115 scheduler.go:452] Attempting to schedule pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-30
I0513 17:39:13.120272  108115 factory.go:649] Unable to schedule preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-30: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 17:39:13.120331  108115 factory.go:720] Updating pod condition for preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-30 to (PodScheduled==False, Reason=Unschedulable)
I0513 17:39:13.122817  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-30: (1.971023ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47234]
I0513 17:39:13.122856  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events: (1.437469ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47242]
I0513 17:39:13.123069  108115 wrap.go:47] PUT /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-30/status: (2.492267ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47240]
I0513 17:39:13.124426  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-30: (1.003734ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47234]
I0513 17:39:13.124681  108115 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 17:39:13.124859  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-31
I0513 17:39:13.124874  108115 scheduler.go:452] Attempting to schedule pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-31
I0513 17:39:13.124975  108115 factory.go:649] Unable to schedule preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-31: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 17:39:13.125012  108115 factory.go:720] Updating pod condition for preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-31 to (PodScheduled==False, Reason=Unschedulable)
I0513 17:39:13.127739  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events: (2.024109ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47244]
I0513 17:39:13.128061  108115 wrap.go:47] PUT /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-31/status: (2.846303ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47234]
I0513 17:39:13.128428  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-31: (2.526475ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47242]
E0513 17:39:13.128755  108115 factory.go:686] pod is already present in the activeQ
I0513 17:39:13.129815  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-31: (1.240802ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47234]
I0513 17:39:13.130092  108115 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 17:39:13.130575  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-16
I0513 17:39:13.130591  108115 scheduler.go:452] Attempting to schedule pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-16
I0513 17:39:13.130715  108115 factory.go:649] Unable to schedule preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-16: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 17:39:13.130759  108115 factory.go:720] Updating pod condition for preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-16 to (PodScheduled==False, Reason=Unschedulable)
I0513 17:39:13.132705  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-16: (1.437627ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47242]
I0513 17:39:13.132758  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-16: (1.734906ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47244]
I0513 17:39:13.133105  108115 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 17:39:13.133846  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-32
I0513 17:39:13.133854  108115 wrap.go:47] PATCH /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events/ppod-16.159e4eca8df077de: (2.184369ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47246]
I0513 17:39:13.133862  108115 scheduler.go:452] Attempting to schedule pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-32
I0513 17:39:13.133960  108115 factory.go:649] Unable to schedule preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-32: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 17:39:13.133993  108115 factory.go:720] Updating pod condition for preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-32 to (PodScheduled==False, Reason=Unschedulable)
I0513 17:39:13.136582  108115 wrap.go:47] PUT /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-32/status: (2.196721ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47242]
I0513 17:39:13.136798  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events: (2.014417ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47248]
I0513 17:39:13.136824  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-32: (2.560016ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47244]
I0513 17:39:13.138557  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-32: (1.060765ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47248]
I0513 17:39:13.138772  108115 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 17:39:13.138903  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-17
I0513 17:39:13.138922  108115 scheduler.go:452] Attempting to schedule pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-17
I0513 17:39:13.139041  108115 factory.go:649] Unable to schedule preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-17: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 17:39:13.139087  108115 factory.go:720] Updating pod condition for preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-17 to (PodScheduled==False, Reason=Unschedulable)
I0513 17:39:13.141189  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-17: (1.328604ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47248]
I0513 17:39:13.141404  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-17: (1.560686ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47244]
I0513 17:39:13.141428  108115 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 17:39:13.141663  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-33
I0513 17:39:13.141679  108115 scheduler.go:452] Attempting to schedule pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-33
I0513 17:39:13.141780  108115 factory.go:649] Unable to schedule preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-33: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 17:39:13.141842  108115 factory.go:720] Updating pod condition for preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-33 to (PodScheduled==False, Reason=Unschedulable)
I0513 17:39:13.143288  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-33: (991.728µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47248]
I0513 17:39:13.143770  108115 wrap.go:47] PUT /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-33/status: (1.384276ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47244]
I0513 17:39:13.146062  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-33: (1.694191ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47244]
I0513 17:39:13.146392  108115 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 17:39:13.146608  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-34
I0513 17:39:13.146622  108115 scheduler.go:452] Attempting to schedule pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-34
I0513 17:39:13.146722  108115 factory.go:649] Unable to schedule preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-34: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 17:39:13.146756  108115 factory.go:720] Updating pod condition for preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-34 to (PodScheduled==False, Reason=Unschedulable)
I0513 17:39:13.148106  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-34: (1.056814ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47248]
I0513 17:39:13.148374  108115 wrap.go:47] PATCH /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events/ppod-17.159e4eca8e7ccf0d: (7.434818ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47250]
I0513 17:39:13.148647  108115 wrap.go:47] PUT /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-34/status: (1.678195ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47244]
I0513 17:39:13.150076  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-34: (1.162066ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47244]
I0513 17:39:13.150335  108115 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 17:39:13.150675  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events: (1.884832ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47250]
I0513 17:39:13.150864  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-35
I0513 17:39:13.150876  108115 scheduler.go:452] Attempting to schedule pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-35
I0513 17:39:13.150977  108115 factory.go:649] Unable to schedule preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-35: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 17:39:13.151011  108115 factory.go:720] Updating pod condition for preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-35 to (PodScheduled==False, Reason=Unschedulable)
I0513 17:39:13.153454  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events: (2.261717ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47244]
I0513 17:39:13.153539  108115 wrap.go:47] PUT /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-35/status: (1.979495ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47252]
I0513 17:39:13.153625  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-35: (2.436664ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47248]
I0513 17:39:13.156020  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events: (1.953371ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47244]
I0513 17:39:13.156789  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-35: (2.856733ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47248]
I0513 17:39:13.157015  108115 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 17:39:13.157365  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-18
I0513 17:39:13.157380  108115 scheduler.go:452] Attempting to schedule pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-18
I0513 17:39:13.157646  108115 factory.go:649] Unable to schedule preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-18: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 17:39:13.157738  108115 factory.go:720] Updating pod condition for preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-18 to (PodScheduled==False, Reason=Unschedulable)
I0513 17:39:13.160178  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-18: (2.028655ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47252]
I0513 17:39:13.161172  108115 wrap.go:47] PATCH /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events/ppod-18.159e4eca8f371cb8: (1.927626ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47254]
I0513 17:39:13.161905  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-18: (1.451355ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47244]
I0513 17:39:13.162433  108115 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 17:39:13.162695  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-36
I0513 17:39:13.162708  108115 scheduler.go:452] Attempting to schedule pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-36
I0513 17:39:13.162794  108115 factory.go:649] Unable to schedule preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-36: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 17:39:13.162831  108115 factory.go:720] Updating pod condition for preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-36 to (PodScheduled==False, Reason=Unschedulable)
I0513 17:39:13.166242  108115 wrap.go:47] PUT /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-36/status: (2.813812ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47244]
I0513 17:39:13.166588  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-36: (3.165155ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47254]
I0513 17:39:13.167708  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events: (3.882822ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47256]
I0513 17:39:13.168202  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-36: (977.216µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47244]
I0513 17:39:13.168409  108115 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 17:39:13.168859  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-37
I0513 17:39:13.168875  108115 scheduler.go:452] Attempting to schedule pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-37
I0513 17:39:13.168885  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/preemptor-pod: (2.660803ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47258]
I0513 17:39:13.168974  108115 factory.go:649] Unable to schedule preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-37: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 17:39:13.169009  108115 factory.go:720] Updating pod condition for preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-37 to (PodScheduled==False, Reason=Unschedulable)
I0513 17:39:13.171189  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-37: (1.557431ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47254]
I0513 17:39:13.171275  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events: (1.610802ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47260]
I0513 17:39:13.172454  108115 wrap.go:47] PUT /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-37/status: (3.222981ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47256]
I0513 17:39:13.174074  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-37: (1.220679ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47260]
I0513 17:39:13.174276  108115 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 17:39:13.174436  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-38
I0513 17:39:13.174453  108115 scheduler.go:452] Attempting to schedule pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-38
I0513 17:39:13.174621  108115 factory.go:649] Unable to schedule preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-38: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 17:39:13.174667  108115 factory.go:720] Updating pod condition for preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-38 to (PodScheduled==False, Reason=Unschedulable)
I0513 17:39:13.176720  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events: (1.435831ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47262]
I0513 17:39:13.177897  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-38: (2.287561ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47254]
I0513 17:39:13.178102  108115 wrap.go:47] PUT /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-38/status: (3.21132ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47260]
I0513 17:39:13.179671  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-38: (1.023888ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47254]
I0513 17:39:13.179882  108115 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 17:39:13.180040  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-39
I0513 17:39:13.180055  108115 scheduler.go:452] Attempting to schedule pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-39
I0513 17:39:13.180146  108115 factory.go:649] Unable to schedule preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-39: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 17:39:13.180187  108115 factory.go:720] Updating pod condition for preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-39 to (PodScheduled==False, Reason=Unschedulable)
I0513 17:39:13.182819  108115 wrap.go:47] PUT /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-39/status: (1.740911ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47254]
I0513 17:39:13.183258  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events: (2.71534ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47262]
I0513 17:39:13.184343  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-39: (991.591µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47254]
I0513 17:39:13.185036  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-39: (1.131725ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47262]
I0513 17:39:13.185126  108115 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
E0513 17:39:13.185241  108115 factory.go:686] pod is already present in the activeQ
I0513 17:39:13.185333  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-40
I0513 17:39:13.185356  108115 scheduler.go:452] Attempting to schedule pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-40
I0513 17:39:13.185490  108115 factory.go:649] Unable to schedule preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-40: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 17:39:13.185552  108115 factory.go:720] Updating pod condition for preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-40 to (PodScheduled==False, Reason=Unschedulable)
I0513 17:39:13.188006  108115 wrap.go:47] PUT /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-40/status: (1.806945ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47262]
I0513 17:39:13.188312  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-40: (2.408976ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47254]
I0513 17:39:13.191294  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-40: (1.420287ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47262]
I0513 17:39:13.191410  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events: (2.197608ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47254]
I0513 17:39:13.191561  108115 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 17:39:13.191884  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-19
I0513 17:39:13.192066  108115 scheduler.go:452] Attempting to schedule pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-19
I0513 17:39:13.192238  108115 factory.go:649] Unable to schedule preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-19: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 17:39:13.192313  108115 factory.go:720] Updating pod condition for preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-19 to (PodScheduled==False, Reason=Unschedulable)
I0513 17:39:13.194332  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-19: (1.154461ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47262]
I0513 17:39:13.194640  108115 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 17:39:13.194822  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-41
I0513 17:39:13.194855  108115 scheduler.go:452] Attempting to schedule pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-41
I0513 17:39:13.196097  108115 wrap.go:47] PATCH /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events/ppod-19.159e4eca90704aec: (2.880077ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47264]
I0513 17:39:13.196111  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-19: (3.254463ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47254]
I0513 17:39:13.196574  108115 factory.go:649] Unable to schedule preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-41: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 17:39:13.196649  108115 factory.go:720] Updating pod condition for preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-41 to (PodScheduled==False, Reason=Unschedulable)
I0513 17:39:13.198160  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-41: (1.107523ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47254]
I0513 17:39:13.199755  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events: (2.146517ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47266]
I0513 17:39:13.200374  108115 wrap.go:47] PUT /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-41/status: (2.442744ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47262]
I0513 17:39:13.201991  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-41: (1.108263ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47266]
I0513 17:39:13.202282  108115 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 17:39:13.202498  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-42
I0513 17:39:13.202565  108115 scheduler.go:452] Attempting to schedule pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-42
I0513 17:39:13.202693  108115 factory.go:649] Unable to schedule preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-42: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 17:39:13.202759  108115 factory.go:720] Updating pod condition for preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-42 to (PodScheduled==False, Reason=Unschedulable)
I0513 17:39:13.205231  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-42: (2.215736ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47266]
I0513 17:39:13.205491  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events: (1.776423ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47268]
I0513 17:39:13.205885  108115 wrap.go:47] PUT /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-42/status: (1.685133ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47254]
I0513 17:39:13.207181  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-42: (1.004861ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47254]
I0513 17:39:13.207392  108115 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 17:39:13.207776  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-43
I0513 17:39:13.207791  108115 scheduler.go:452] Attempting to schedule pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-43
I0513 17:39:13.207887  108115 factory.go:649] Unable to schedule preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-43: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 17:39:13.207924  108115 factory.go:720] Updating pod condition for preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-43 to (PodScheduled==False, Reason=Unschedulable)
E0513 17:39:13.210876  108115 factory.go:686] pod is already present in the activeQ
I0513 17:39:13.211878  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events: (2.482808ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47270]
I0513 17:39:13.214344  108115 wrap.go:47] PUT /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-43/status: (6.039626ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47254]
I0513 17:39:13.925363  108115 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0513 17:39:13.927106  108115 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0513 17:39:13.927433  108115 trace.go:81] Trace[675438442]: "Get /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-43" (started: 2019-05-13 17:39:13.211210322 +0000 UTC m=+64.210910822) (total time: 716.006313ms):
Trace[675438442]: [10.165µs] [10.165µs] About to Get from storage
Trace[675438442]: [709.649317ms] [709.639152ms] About to write a response
Trace[675438442]: [716.001903ms] [6.352586ms] Transformed response object
Trace[675438442]: [716.006313ms] [4.41µs] END
I0513 17:39:13.928878  108115 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0513 17:39:13.928964  108115 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0513 17:39:13.929554  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-43: (718.468419ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47268]
I0513 17:39:13.931199  108115 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0513 17:39:13.933023  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-43: (5.445346ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47266]
I0513 17:39:13.934367  108115 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 17:39:13.939299  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/preemptor-pod
I0513 17:39:13.939393  108115 scheduler.go:452] Attempting to schedule pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/preemptor-pod
I0513 17:39:13.939607  108115 scheduler_binder.go:256] AssumePodVolumes for pod "preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/preemptor-pod", node "node1"
I0513 17:39:13.939629  108115 scheduler_binder.go:266] AssumePodVolumes for pod "preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/preemptor-pod", node "node1": all PVCs bound and nothing to do
I0513 17:39:13.939662  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/preemptor-pod: (10.329601ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47270]
I0513 17:39:13.939687  108115 factory.go:711] Attempting to bind preemptor-pod to node1
I0513 17:39:13.940565  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-0
I0513 17:39:13.940671  108115 scheduler.go:452] Attempting to schedule pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-0
I0513 17:39:13.940914  108115 factory.go:649] Unable to schedule preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-0: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 17:39:13.946994  108115 factory.go:720] Updating pod condition for preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-0 to (PodScheduled==False, Reason=Unschedulable)
I0513 17:39:13.946459  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/preemptor-pod/binding: (6.410121ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47266]
I0513 17:39:13.949690  108115 scheduler.go:570] pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/preemptor-pod is bound successfully on node node1, 1 nodes evaluated, 1 nodes were found feasible
I0513 17:39:13.956353  108115 wrap.go:47] PATCH /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events/ppod-0.159e4eca84b1ee62: (6.055539ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47274]
I0513 17:39:13.956821  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-0: (7.016029ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47266]
I0513 17:39:13.957143  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-0: (9.779997ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47268]
I0513 17:39:13.957425  108115 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 17:39:13.957804  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-3
I0513 17:39:13.957819  108115 scheduler.go:452] Attempting to schedule pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-3
I0513 17:39:13.957935  108115 factory.go:649] Unable to schedule preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-3: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 17:39:13.958006  108115 factory.go:720] Updating pod condition for preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-3 to (PodScheduled==False, Reason=Unschedulable)
I0513 17:39:13.958750  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events: (1.928303ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47274]
I0513 17:39:13.959629  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-3: (1.231813ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47276]
I0513 17:39:13.959973  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-3: (1.672487ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47266]
I0513 17:39:13.960212  108115 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 17:39:13.960492  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-4
I0513 17:39:13.960554  108115 scheduler.go:452] Attempting to schedule pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-4
I0513 17:39:13.960688  108115 factory.go:649] Unable to schedule preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-4: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 17:39:13.960763  108115 factory.go:720] Updating pod condition for preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-4 to (PodScheduled==False, Reason=Unschedulable)
I0513 17:39:13.962098  108115 wrap.go:47] PATCH /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events/ppod-3.159e4eca85ada95f: (2.24847ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47274]
I0513 17:39:13.962787  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-4: (1.316154ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47276]
I0513 17:39:13.962829  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-4: (1.210764ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47266]
I0513 17:39:13.962994  108115 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 17:39:13.963189  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-6
I0513 17:39:13.963203  108115 scheduler.go:452] Attempting to schedule pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-6
I0513 17:39:13.963301  108115 factory.go:649] Unable to schedule preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-6: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 17:39:13.963335  108115 factory.go:720] Updating pod condition for preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-6 to (PodScheduled==False, Reason=Unschedulable)
I0513 17:39:13.965906  108115 wrap.go:47] PATCH /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events/ppod-4.159e4eca85f98d3d: (2.875712ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47274]
I0513 17:39:13.966224  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-6: (2.393638ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47266]
I0513 17:39:13.966478  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-6: (2.860862ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47276]
I0513 17:39:13.966841  108115 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 17:39:13.967669  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-1
I0513 17:39:13.967682  108115 scheduler.go:452] Attempting to schedule pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-1
I0513 17:39:13.967767  108115 factory.go:649] Unable to schedule preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-1: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 17:39:13.967800  108115 factory.go:720] Updating pod condition for preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-1 to (PodScheduled==False, Reason=Unschedulable)
I0513 17:39:13.969719  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-1: (1.159808ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47276]
I0513 17:39:13.969971  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-1: (1.73217ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47278]
I0513 17:39:13.970024  108115 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 17:39:13.970212  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-9
I0513 17:39:13.970226  108115 scheduler.go:452] Attempting to schedule pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-9
I0513 17:39:13.970317  108115 factory.go:649] Unable to schedule preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-9: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 17:39:13.970347  108115 factory.go:720] Updating pod condition for preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-9 to (PodScheduled==False, Reason=Unschedulable)
I0513 17:39:13.971300  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/preemptor-pod: (1.666142ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47280]
I0513 17:39:13.971615  108115 preemption_test.go:583] Check unschedulable pods still exist and were never scheduled...
I0513 17:39:13.975160  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-9: (3.422382ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47276]
I0513 17:39:13.975407  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-0: (3.42576ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47280]
I0513 17:39:13.975642  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-9: (4.818905ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47278]
I0513 17:39:13.975895  108115 wrap.go:47] PATCH /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events/ppod-6.159e4eca8775048c: (8.508315ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47266]
I0513 17:39:13.976630  108115 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 17:39:13.978543  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-2
I0513 17:39:13.978562  108115 scheduler.go:452] Attempting to schedule pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-2
I0513 17:39:13.978659  108115 factory.go:649] Unable to schedule preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-2: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 17:39:13.978694  108115 factory.go:720] Updating pod condition for preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-2 to (PodScheduled==False, Reason=Unschedulable)
I0513 17:39:13.978899  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-1: (1.86655ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47280]
I0513 17:39:13.979171  108115 wrap.go:47] PATCH /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events/ppod-1.159e4eca84fc5803: (1.861262ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47266]
I0513 17:39:13.980585  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-2: (1.171539ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47278]
I0513 17:39:13.981395  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-2: (1.669649ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47280]
I0513 17:39:13.981695  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-2: (2.26769ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47276]
I0513 17:39:13.981889  108115 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 17:39:13.982326  108115 wrap.go:47] PATCH /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events/ppod-9.159e4eca88f8123b: (1.482503ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47266]
I0513 17:39:13.982576  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-10
I0513 17:39:13.982634  108115 scheduler.go:452] Attempting to schedule pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-10
I0513 17:39:13.982596  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-3: (815.837µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47280]
I0513 17:39:13.983235  108115 factory.go:649] Unable to schedule preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-10: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 17:39:13.983298  108115 factory.go:720] Updating pod condition for preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-10 to (PodScheduled==False, Reason=Unschedulable)
I0513 17:39:13.990996  108115 wrap.go:47] PATCH /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events/ppod-2.159e4eca85519e04: (2.212553ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47278]
I0513 17:39:13.991113  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-4: (1.753389ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47276]
I0513 17:39:13.991170  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-10: (2.322873ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47284]
I0513 17:39:13.991526  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-10: (1.660575ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47282]
I0513 17:39:13.991761  108115 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 17:39:13.991908  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-11
I0513 17:39:13.991922  108115 scheduler.go:452] Attempting to schedule pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-11
I0513 17:39:13.991996  108115 factory.go:649] Unable to schedule preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-11: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 17:39:13.992027  108115 factory.go:720] Updating pod condition for preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-11 to (PodScheduled==False, Reason=Unschedulable)
I0513 17:39:13.992745  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-5: (1.296725ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47276]
I0513 17:39:13.996180  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-11: (3.417715ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47286]
I0513 17:39:13.996445  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-6: (3.086132ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47276]
I0513 17:39:13.996694  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-11: (4.223223ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47282]
I0513 17:39:13.997370  108115 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 17:39:13.997556  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-44
I0513 17:39:13.997571  108115 scheduler.go:452] Attempting to schedule pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-44
I0513 17:39:13.997678  108115 factory.go:649] Unable to schedule preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-44: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 17:39:13.997724  108115 factory.go:720] Updating pod condition for preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-44 to (PodScheduled==False, Reason=Unschedulable)
I0513 17:39:14.007130  108115 wrap.go:47] PATCH /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events/ppod-10.159e4eca898305d0: (15.442168ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47278]
I0513 17:39:14.010926  108115 wrap.go:47] PUT /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-44/status: (12.555556ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47282]
I0513 17:39:14.011147  108115 wrap.go:47] PATCH /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events/ppod-11.159e4eca89e30a24: (2.06976ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47278]
I0513 17:39:14.011276  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-44: (2.543911ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47288]
I0513 17:39:14.011553  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-7: (3.844054ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47276]
E0513 17:39:14.011671  108115 factory.go:686] pod is already present in the activeQ
I0513 17:39:14.013483  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-8: (1.316292ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47276]
I0513 17:39:14.013595  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events: (1.638111ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47288]
I0513 17:39:14.017054  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-9: (1.186732ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47286]
I0513 17:39:14.017320  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-44: (1.45626ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47288]
I0513 17:39:14.017565  108115 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 17:39:14.017725  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-45
I0513 17:39:14.017745  108115 scheduler.go:452] Attempting to schedule pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-45
I0513 17:39:14.017858  108115 factory.go:649] Unable to schedule preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-45: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 17:39:14.017897  108115 factory.go:720] Updating pod condition for preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-45 to (PodScheduled==False, Reason=Unschedulable)
I0513 17:39:14.021823  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-10: (2.116903ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47294]
I0513 17:39:14.021939  108115 wrap.go:47] PUT /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-45/status: (2.394134ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47286]
I0513 17:39:14.022089  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-45: (2.843442ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47288]
I0513 17:39:14.022422  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events: (3.699928ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47292]
E0513 17:39:14.022441  108115 factory.go:686] pod is already present in the activeQ
I0513 17:39:14.023481  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-11: (926.738µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47294]
I0513 17:39:14.023746  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-45: (1.49023ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47286]
I0513 17:39:14.023981  108115 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 17:39:14.024188  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-46
I0513 17:39:14.024202  108115 scheduler.go:452] Attempting to schedule pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-46
I0513 17:39:14.024311  108115 factory.go:649] Unable to schedule preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-46: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 17:39:14.024344  108115 factory.go:720] Updating pod condition for preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-46 to (PodScheduled==False, Reason=Unschedulable)
I0513 17:39:14.025556  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-12: (1.442763ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47292]
I0513 17:39:14.027910  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events: (2.941272ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47298]
I0513 17:39:14.027938  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-13: (2.079798ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47292]
I0513 17:39:14.028151  108115 wrap.go:47] PUT /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-46/status: (2.687681ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47296]
I0513 17:39:14.028365  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-46: (3.340086ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47288]
E0513 17:39:14.028611  108115 factory.go:686] pod is already present in the activeQ
I0513 17:39:14.029292  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-14: (1.026654ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47298]
I0513 17:39:14.029558  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-46: (876.47µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47296]
I0513 17:39:14.029774  108115 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 17:39:14.030062  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-47
I0513 17:39:14.030090  108115 scheduler.go:452] Attempting to schedule pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-47
I0513 17:39:14.030183  108115 factory.go:649] Unable to schedule preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-47: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 17:39:14.030216  108115 factory.go:720] Updating pod condition for preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-47 to (PodScheduled==False, Reason=Unschedulable)
I0513 17:39:14.030378  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-15: (782.18µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47298]
I0513 17:39:14.032708  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-16: (1.149207ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47302]
I0513 17:39:14.033426  108115 wrap.go:47] PUT /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-47/status: (2.654833ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47296]
I0513 17:39:14.033456  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-47: (2.953613ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47292]
E0513 17:39:14.033842  108115 factory.go:686] pod is already present in the activeQ
I0513 17:39:14.034663  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-17: (1.668436ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47302]
I0513 17:39:14.034841  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-47: (1.10012ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47296]
I0513 17:39:14.035023  108115 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 17:39:14.035154  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-48
I0513 17:39:14.035164  108115 scheduler.go:452] Attempting to schedule pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-48
I0513 17:39:14.035229  108115 factory.go:649] Unable to schedule preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-48: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 17:39:14.035279  108115 factory.go:720] Updating pod condition for preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-48 to (PodScheduled==False, Reason=Unschedulable)
I0513 17:39:14.039373  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-48: (3.733866ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47300]
E0513 17:39:14.039678  108115 factory.go:686] pod is already present in the activeQ
I0513 17:39:14.039900  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-18: (4.905649ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47292]
I0513 17:39:14.041126  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-19: (954.315µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47292]
I0513 17:39:14.041984  108115 wrap.go:47] PUT /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-48/status: (6.099625ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47296]
I0513 17:39:14.044633  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-48: (2.285721ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47296]
I0513 17:39:14.044868  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-20: (3.450582ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47292]
I0513 17:39:14.045233  108115 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 17:39:14.045752  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-49
I0513 17:39:14.045774  108115 scheduler.go:452] Attempting to schedule pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-49
I0513 17:39:14.045883  108115 factory.go:649] Unable to schedule preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-49: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 17:39:14.045925  108115 factory.go:720] Updating pod condition for preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-49 to (PodScheduled==False, Reason=Unschedulable)
I0513 17:39:14.048420  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-49: (2.193589ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47300]
I0513 17:39:14.048963  108115 wrap.go:47] PUT /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-49/status: (2.467656ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47296]
I0513 17:39:14.049165  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-21: (3.796727ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47292]
E0513 17:39:14.049903  108115 factory.go:686] pod is already present in the activeQ
I0513 17:39:14.051119  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events: (20.112346ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47298]
I0513 17:39:14.051287  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-22: (1.24029ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47292]
I0513 17:39:14.051942  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-49: (1.980406ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47296]
I0513 17:39:14.052124  108115 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 17:39:14.052329  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-25
I0513 17:39:14.052346  108115 scheduler.go:452] Attempting to schedule pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-25
I0513 17:39:14.052448  108115 factory.go:649] Unable to schedule preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-25: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 17:39:14.052493  108115 factory.go:720] Updating pod condition for preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-25 to (PodScheduled==False, Reason=Unschedulable)
I0513 17:39:14.054166  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-25: (1.25703ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47300]
I0513 17:39:14.054242  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-23: (1.645415ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47296]
I0513 17:39:14.054420  108115 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 17:39:14.054691  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-26
I0513 17:39:14.054707  108115 scheduler.go:452] Attempting to schedule pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-26
I0513 17:39:14.054785  108115 factory.go:649] Unable to schedule preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-26: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 17:39:14.054818  108115 factory.go:720] Updating pod condition for preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-26 to (PodScheduled==False, Reason=Unschedulable)
I0513 17:39:14.055794  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-24: (1.235732ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47296]
I0513 17:39:14.055907  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-25: (2.957241ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47304]
I0513 17:39:14.056897  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-25: (808.466µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47304]
I0513 17:39:14.058090  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-26: (2.590137ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47306]
I0513 17:39:14.059030  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-26: (3.570612ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47300]
I0513 17:39:14.059401  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-26: (1.435416ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47304]
I0513 17:39:14.059494  108115 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 17:39:14.060028  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events: (8.23624ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47298]
I0513 17:39:14.061010  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-27: (814.585µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47306]
I0513 17:39:14.062110  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events: (1.663755ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47298]
I0513 17:39:14.062913  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-28: (994.497µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47306]
I0513 17:39:14.063405  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-28
I0513 17:39:14.063453  108115 scheduler.go:452] Attempting to schedule pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-28
I0513 17:39:14.063596  108115 factory.go:649] Unable to schedule preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-28: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 17:39:14.063769  108115 factory.go:720] Updating pod condition for preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-28 to (PodScheduled==False, Reason=Unschedulable)
I0513 17:39:14.064491  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-29: (986.665µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47306]
I0513 17:39:14.066986  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-28: (1.938441ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47306]
I0513 17:39:14.067023  108115 wrap.go:47] PATCH /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events/ppod-25.159e4eca93718154: (3.796242ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47298]
I0513 17:39:14.067319  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-30: (2.184754ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47308]
I0513 17:39:14.067330  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-28: (3.212116ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47296]
I0513 17:39:14.069613  108115 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 17:39:14.071308  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-29
I0513 17:39:14.071360  108115 scheduler.go:452] Attempting to schedule pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-29
I0513 17:39:14.071767  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-31: (1.890104ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47296]
I0513 17:39:14.074014  108115 factory.go:649] Unable to schedule preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-29: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 17:39:14.074071  108115 factory.go:720] Updating pod condition for preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-29 to (PodScheduled==False, Reason=Unschedulable)
I0513 17:39:14.074180  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-32: (2.092876ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47296]
I0513 17:39:14.076417  108115 wrap.go:47] PATCH /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events/ppod-26.159e4eca93c95eee: (6.830307ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47306]
I0513 17:39:14.076997  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-33: (1.470059ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47312]
I0513 17:39:14.077282  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-29: (2.413922ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47296]
I0513 17:39:14.077593  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-29: (3.29393ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47310]
I0513 17:39:14.077821  108115 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 17:39:14.078430  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-31
I0513 17:39:14.078487  108115 scheduler.go:452] Attempting to schedule pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-31
I0513 17:39:14.078620  108115 factory.go:649] Unable to schedule preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-31: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 17:39:14.078695  108115 factory.go:720] Updating pod condition for preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-31 to (PodScheduled==False, Reason=Unschedulable)
I0513 17:39:14.081758  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-34: (2.167143ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47312]
I0513 17:39:14.083090  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-35: (1.021946ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47312]
I0513 17:39:14.083134  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-31: (3.259981ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47310]
I0513 17:39:14.083988  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-31: (3.849429ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47296]
I0513 17:39:14.084245  108115 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 17:39:14.084316  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-36: (913.871µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47310]
I0513 17:39:14.084775  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-39
I0513 17:39:14.084790  108115 scheduler.go:452] Attempting to schedule pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-39
I0513 17:39:14.084901  108115 factory.go:649] Unable to schedule preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-39: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 17:39:14.084935  108115 factory.go:720] Updating pod condition for preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-39 to (PodScheduled==False, Reason=Unschedulable)
I0513 17:39:14.088340  108115 wrap.go:47] PATCH /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events/ppod-28.159e4eca951aed89: (2.645597ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47312]
I0513 17:39:14.090592  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-39: (5.17264ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47314]
I0513 17:39:14.090802  108115 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 17:39:14.090964  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-42
I0513 17:39:14.090982  108115 scheduler.go:452] Attempting to schedule pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-42
I0513 17:39:14.091052  108115 factory.go:649] Unable to schedule preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-42: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 17:39:14.091091  108115 factory.go:720] Updating pod condition for preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-42 to (PodScheduled==False, Reason=Unschedulable)
I0513 17:39:14.092978  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-39: (3.841836ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47312]
I0513 17:39:14.093530  108115 wrap.go:47] PATCH /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events/ppod-29.159e4eca956fd645: (3.953184ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47320]
I0513 17:39:14.096993  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-42: (3.21788ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47314]
I0513 17:39:14.097216  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-42: (3.11137ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47322]
I0513 17:39:14.097459  108115 wrap.go:47] PATCH /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events/ppod-31.159e4eca9613a595: (2.509635ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47320]
I0513 17:39:14.098110  108115 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 17:39:14.098337  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-43
I0513 17:39:14.098349  108115 scheduler.go:452] Attempting to schedule pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-43
I0513 17:39:14.098419  108115 factory.go:649] Unable to schedule preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-43: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 17:39:14.098458  108115 factory.go:720] Updating pod condition for preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-43 to (PodScheduled==False, Reason=Unschedulable)
I0513 17:39:14.099698  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-37: (13.311685ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47296]
I0513 17:39:14.100619  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-43: (1.403695ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47312]
I0513 17:39:14.100874  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-43: (1.891994ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47324]
I0513 17:39:14.100960  108115 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 17:39:14.101146  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-44
I0513 17:39:14.101158  108115 scheduler.go:452] Attempting to schedule pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-44
I0513 17:39:14.101236  108115 factory.go:649] Unable to schedule preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-44: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 17:39:14.101278  108115 factory.go:720] Updating pod condition for preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-44 to (PodScheduled==False, Reason=Unschedulable)
I0513 17:39:14.102001  108115 wrap.go:47] PATCH /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events/ppod-39.159e4eca995d92b3: (3.800931ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47320]
I0513 17:39:14.109804  108115 wrap.go:47] PATCH /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events/ppod-42.159e4eca9ab5f3ff: (3.10399ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47316]
I0513 17:39:14.112779  108115 wrap.go:47] PATCH /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events/ppod-43.159e4eca9b04c0e3: (2.353184ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47316]
I0513 17:39:14.116722  108115 wrap.go:47] PATCH /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events/ppod-44.159e4ecaca182cd1: (3.402952ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47316]
I0513 17:39:14.117707  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-44: (16.287259ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47324]
I0513 17:39:14.118028  108115 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 17:39:14.118363  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-38: (18.201888ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47296]
I0513 17:39:14.119830  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-44: (18.263484ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47312]
I0513 17:39:14.120057  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-45
I0513 17:39:14.120069  108115 scheduler.go:452] Attempting to schedule pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-45
I0513 17:39:14.120156  108115 factory.go:649] Unable to schedule preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-45: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 17:39:14.120189  108115 factory.go:720] Updating pod condition for preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-45 to (PodScheduled==False, Reason=Unschedulable)
I0513 17:39:14.124332  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-45: (2.574116ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47320]
I0513 17:39:14.124497  108115 wrap.go:47] PATCH /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events/ppod-45.159e4ecacb4bf68b: (3.469246ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47326]
I0513 17:39:14.124649  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-39: (5.466462ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47316]
I0513 17:39:14.124887  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-45: (2.855166ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47312]
I0513 17:39:14.125147  108115 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 17:39:14.125285  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-46
I0513 17:39:14.125308  108115 scheduler.go:452] Attempting to schedule pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-46
I0513 17:39:14.125396  108115 factory.go:649] Unable to schedule preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-46: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 17:39:14.125431  108115 factory.go:720] Updating pod condition for preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-46 to (PodScheduled==False, Reason=Unschedulable)
I0513 17:39:14.127546  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-40: (2.435336ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47320]
I0513 17:39:14.129897  108115 wrap.go:47] PATCH /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events/ppod-46.159e4ecacbae5ea3: (3.567852ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47330]
I0513 17:39:14.129926  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-46: (2.263516ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47326]
I0513 17:39:14.130123  108115 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 17:39:14.130254  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-47
I0513 17:39:14.130270  108115 scheduler.go:452] Attempting to schedule pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-47
I0513 17:39:14.130356  108115 factory.go:649] Unable to schedule preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-47: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 17:39:14.130388  108115 factory.go:720] Updating pod condition for preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-47 to (PodScheduled==False, Reason=Unschedulable)
I0513 17:39:14.130544  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-46: (3.494086ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47328]
I0513 17:39:14.132014  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-47: (1.023344ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47326]
I0513 17:39:14.132227  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-47: (1.463752ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47330]
I0513 17:39:14.132429  108115 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 17:39:14.132870  108115 wrap.go:47] PATCH /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events/ppod-47.159e4ecacc07f7e1: (1.939716ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47328]
I0513 17:39:14.133018  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-48
I0513 17:39:14.133030  108115 scheduler.go:452] Attempting to schedule pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-48
I0513 17:39:14.133121  108115 factory.go:649] Unable to schedule preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-48: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 17:39:14.133151  108115 factory.go:720] Updating pod condition for preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-48 to (PodScheduled==False, Reason=Unschedulable)
I0513 17:39:14.133655  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-41: (5.44679ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47320]
I0513 17:39:14.135582  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-48: (1.570827ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47330]
I0513 17:39:14.135613  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-48: (1.41319ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47326]
I0513 17:39:14.135856  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-42: (1.389649ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47320]
I0513 17:39:14.136081  108115 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 17:39:14.136311  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-49
I0513 17:39:14.136324  108115 scheduler.go:452] Attempting to schedule pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-49
I0513 17:39:14.136395  108115 factory.go:649] Unable to schedule preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-49: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 17:39:14.136426  108115 factory.go:720] Updating pod condition for preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-49 to (PodScheduled==False, Reason=Unschedulable)
I0513 17:39:14.137596  108115 wrap.go:47] PATCH /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events/ppod-48.159e4ecacc553701: (2.854813ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47328]
I0513 17:39:14.137854  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-43: (1.661932ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47330]
I0513 17:39:14.139453  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-49: (1.588ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47326]
I0513 17:39:14.139615  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-49: (2.044423ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47332]
I0513 17:39:14.139780  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-44: (1.584159ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47328]
I0513 17:39:14.139960  108115 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 17:39:14.141442  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-45: (1.269857ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47326]
I0513 17:39:14.141903  108115 wrap.go:47] PATCH /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events/ppod-49.159e4ecaccf7a771: (3.207173ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47330]
I0513 17:39:14.143161  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-46: (908.32µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47326]
I0513 17:39:14.144548  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-47: (990.379µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47326]
I0513 17:39:14.146049  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-48: (1.185116ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47326]
I0513 17:39:14.147415  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-49: (964.733µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47326]
I0513 17:39:14.147825  108115 preemption_test.go:598] Cleaning up all pods...
I0513 17:39:14.152328  108115 wrap.go:47] DELETE /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-0: (4.200077ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47326]
I0513 17:39:14.156280  108115 wrap.go:47] DELETE /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-1: (3.651849ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47326]
I0513 17:39:14.160096  108115 wrap.go:47] DELETE /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-2: (3.523957ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47326]
I0513 17:39:14.164955  108115 wrap.go:47] DELETE /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-3: (4.50008ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47326]
I0513 17:39:14.169644  108115 wrap.go:47] DELETE /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-4: (4.422346ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47326]
I0513 17:39:14.173687  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-0
I0513 17:39:14.173768  108115 scheduler.go:448] Skip schedule deleting pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-0
I0513 17:39:14.173958  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-1
I0513 17:39:14.173997  108115 scheduler.go:448] Skip schedule deleting pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-1
I0513 17:39:14.174590  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-4
I0513 17:39:14.174620  108115 scheduler.go:448] Skip schedule deleting pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-4
I0513 17:39:14.177065  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events: (2.87742ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47332]
I0513 17:39:14.177691  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-5
I0513 17:39:14.177776  108115 scheduler.go:448] Skip schedule deleting pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-5
I0513 17:39:14.183846  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events: (5.070941ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47332]
I0513 17:39:14.186396  108115 wrap.go:47] DELETE /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-5: (16.424038ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47326]
I0513 17:39:14.189663  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-6
I0513 17:39:14.189695  108115 scheduler.go:448] Skip schedule deleting pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-6
I0513 17:39:14.191570  108115 wrap.go:47] DELETE /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-6: (4.56787ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47326]
I0513 17:39:14.195182  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-7
I0513 17:39:14.195233  108115 scheduler.go:448] Skip schedule deleting pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-7
I0513 17:39:14.197004  108115 wrap.go:47] DELETE /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-7: (5.091952ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47326]
I0513 17:39:14.198786  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events: (13.435826ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47332]
I0513 17:39:14.200002  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-8
I0513 17:39:14.200039  108115 scheduler.go:448] Skip schedule deleting pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-8
I0513 17:39:14.201884  108115 wrap.go:47] DELETE /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-8: (4.388632ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47326]
I0513 17:39:14.202449  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events: (3.19779ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47332]
I0513 17:39:14.205193  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events: (1.80352ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47332]
I0513 17:39:14.205522  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-9
I0513 17:39:14.205554  108115 scheduler.go:448] Skip schedule deleting pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-9
I0513 17:39:14.207301  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events: (1.483706ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47332]
I0513 17:39:14.209236  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events: (1.394693ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47332]
I0513 17:39:14.209719  108115 wrap.go:47] DELETE /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-9: (6.510854ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47326]
I0513 17:39:14.211819  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events: (1.445839ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47332]
I0513 17:39:14.215224  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-10
I0513 17:39:14.215252  108115 scheduler.go:448] Skip schedule deleting pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-10
I0513 17:39:14.218809  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events: (2.765248ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47332]
I0513 17:39:14.219686  108115 wrap.go:47] DELETE /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-10: (7.297729ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47326]
I0513 17:39:14.224034  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-11
I0513 17:39:14.224165  108115 scheduler.go:448] Skip schedule deleting pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-11
I0513 17:39:14.226528  108115 wrap.go:47] DELETE /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-11: (5.209462ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47326]
I0513 17:39:14.228590  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events: (2.910261ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47332]
I0513 17:39:14.231831  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-12
I0513 17:39:14.233023  108115 scheduler.go:448] Skip schedule deleting pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-12
I0513 17:39:14.234067  108115 wrap.go:47] DELETE /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-12: (4.835898ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47326]
I0513 17:39:14.237298  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events: (3.867694ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47332]
I0513 17:39:14.239425  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-13
I0513 17:39:14.239961  108115 scheduler.go:448] Skip schedule deleting pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-13
I0513 17:39:14.242144  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events: (1.855386ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47332]
I0513 17:39:14.244566  108115 wrap.go:47] DELETE /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-13: (9.399456ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47326]
I0513 17:39:14.247656  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-14
I0513 17:39:14.247731  108115 scheduler.go:448] Skip schedule deleting pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-14
I0513 17:39:14.250588  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events: (1.669925ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47332]
I0513 17:39:14.252996  108115 wrap.go:47] DELETE /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-14: (7.803616ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47326]
I0513 17:39:14.257444  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-15
I0513 17:39:14.257573  108115 scheduler.go:448] Skip schedule deleting pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-15
I0513 17:39:14.259913  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events: (1.977245ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47332]
I0513 17:39:14.263551  108115 wrap.go:47] DELETE /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-15: (9.590531ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47326]
I0513 17:39:14.269222  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-16
I0513 17:39:14.269256  108115 scheduler.go:448] Skip schedule deleting pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-16
I0513 17:39:14.272548  108115 wrap.go:47] DELETE /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-16: (8.135555ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47326]
I0513 17:39:14.275456  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events: (4.481387ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47332]
I0513 17:39:14.277988  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-17
I0513 17:39:14.278019  108115 scheduler.go:448] Skip schedule deleting pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-17
I0513 17:39:14.280447  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events: (1.897079ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47332]
I0513 17:39:14.281974  108115 wrap.go:47] DELETE /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-17: (8.996013ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47326]
I0513 17:39:14.284971  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-18
I0513 17:39:14.285004  108115 scheduler.go:448] Skip schedule deleting pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-18
I0513 17:39:14.289257  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events: (4.039208ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47332]
I0513 17:39:14.290360  108115 wrap.go:47] DELETE /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-18: (8.020958ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47326]
I0513 17:39:14.293540  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-19
I0513 17:39:14.293575  108115 scheduler.go:448] Skip schedule deleting pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-19
I0513 17:39:14.295936  108115 wrap.go:47] DELETE /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-19: (4.976157ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47326]
I0513 17:39:14.299445  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-20
I0513 17:39:14.299497  108115 scheduler.go:448] Skip schedule deleting pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-20
I0513 17:39:14.301826  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events: (8.046585ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47332]
I0513 17:39:14.302362  108115 wrap.go:47] DELETE /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-20: (5.948216ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47326]
I0513 17:39:14.306161  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events: (3.263481ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47332]
I0513 17:39:14.308021  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-21
I0513 17:39:14.308050  108115 scheduler.go:448] Skip schedule deleting pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-21
I0513 17:39:14.311011  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events: (2.770274ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47332]
I0513 17:39:14.312523  108115 wrap.go:47] DELETE /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-21: (9.7405ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47326]
I0513 17:39:14.315714  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-22
I0513 17:39:14.315749  108115 scheduler.go:448] Skip schedule deleting pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-22
I0513 17:39:14.317069  108115 wrap.go:47] DELETE /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-22: (4.259915ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47326]
I0513 17:39:14.317819  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events: (1.698531ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47332]
I0513 17:39:14.321112  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-23
I0513 17:39:14.321153  108115 scheduler.go:448] Skip schedule deleting pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-23
I0513 17:39:14.322790  108115 wrap.go:47] DELETE /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-23: (4.835483ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47326]
I0513 17:39:14.323591  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events: (2.159887ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47332]
I0513 17:39:14.327394  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-24
I0513 17:39:14.327422  108115 scheduler.go:448] Skip schedule deleting pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-24
I0513 17:39:14.330482  108115 wrap.go:47] DELETE /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-24: (7.114442ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47326]
I0513 17:39:14.330557  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events: (2.881696ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47332]
I0513 17:39:14.333742  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-25
I0513 17:39:14.333783  108115 scheduler.go:448] Skip schedule deleting pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-25
I0513 17:39:14.334815  108115 wrap.go:47] DELETE /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-25: (3.982975ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47332]
I0513 17:39:14.337609  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events: (3.392579ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47326]
I0513 17:39:14.338122  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-26
I0513 17:39:14.338159  108115 scheduler.go:448] Skip schedule deleting pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-26
I0513 17:39:14.339495  108115 wrap.go:47] DELETE /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-26: (4.404231ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47332]
I0513 17:39:14.340775  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events: (1.700073ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47326]
I0513 17:39:14.343225  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-27
I0513 17:39:14.343325  108115 scheduler.go:448] Skip schedule deleting pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-27
I0513 17:39:14.344444  108115 wrap.go:47] DELETE /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-27: (4.66178ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47332]
I0513 17:39:14.345073  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events: (1.280575ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47326]
I0513 17:39:14.347032  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-28
I0513 17:39:14.347067  108115 scheduler.go:448] Skip schedule deleting pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-28
I0513 17:39:14.348313  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events: (1.075365ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47326]
I0513 17:39:14.349189  108115 wrap.go:47] DELETE /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-28: (4.412132ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47332]
I0513 17:39:14.352531  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-29
I0513 17:39:14.352570  108115 scheduler.go:448] Skip schedule deleting pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-29
I0513 17:39:14.352805  108115 wrap.go:47] DELETE /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-29: (3.261231ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47332]
I0513 17:39:14.354234  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events: (1.372037ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47326]
I0513 17:39:14.360187  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-30
I0513 17:39:14.360233  108115 scheduler.go:448] Skip schedule deleting pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-30
I0513 17:39:14.361416  108115 wrap.go:47] DELETE /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-30: (8.315779ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47332]
I0513 17:39:14.362689  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events: (2.048606ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47326]
I0513 17:39:14.364843  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-31
I0513 17:39:14.364882  108115 scheduler.go:448] Skip schedule deleting pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-31
I0513 17:39:14.366566  108115 wrap.go:47] DELETE /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-31: (4.744208ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47332]
I0513 17:39:14.367380  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events: (1.726484ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47326]
I0513 17:39:14.370450  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-32
I0513 17:39:14.370518  108115 scheduler.go:448] Skip schedule deleting pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-32
I0513 17:39:14.371492  108115 wrap.go:47] DELETE /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-32: (4.53611ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47332]
I0513 17:39:14.372551  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events: (1.785965ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47326]
I0513 17:39:14.376085  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-33
I0513 17:39:14.376267  108115 scheduler.go:448] Skip schedule deleting pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-33
I0513 17:39:14.378414  108115 wrap.go:47] DELETE /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-33: (5.025778ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47332]
I0513 17:39:14.379235  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events: (1.459577ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47326]
I0513 17:39:14.392661  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-34
I0513 17:39:14.392720  108115 scheduler.go:448] Skip schedule deleting pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-34
I0513 17:39:14.395367  108115 wrap.go:47] DELETE /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-34: (16.546513ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47332]
I0513 17:39:14.396521  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events: (3.494908ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47326]
I0513 17:39:14.399630  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-35
I0513 17:39:14.399673  108115 scheduler.go:448] Skip schedule deleting pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-35
I0513 17:39:14.401155  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events: (1.252054ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47326]
I0513 17:39:14.401274  108115 wrap.go:47] DELETE /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-35: (4.236424ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47332]
I0513 17:39:14.404268  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-36
I0513 17:39:14.404351  108115 scheduler.go:448] Skip schedule deleting pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-36
I0513 17:39:14.406740  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events: (2.086428ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47326]
I0513 17:39:14.407455  108115 wrap.go:47] DELETE /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-36: (5.847801ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47332]
I0513 17:39:14.410180  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-37
I0513 17:39:14.410222  108115 scheduler.go:448] Skip schedule deleting pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-37
I0513 17:39:14.411853  108115 wrap.go:47] DELETE /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-37: (4.026492ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47332]
I0513 17:39:14.412269  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events: (1.807599ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47326]
I0513 17:39:14.415119  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-38
I0513 17:39:14.415158  108115 scheduler.go:448] Skip schedule deleting pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-38
I0513 17:39:14.416669  108115 wrap.go:47] DELETE /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-38: (4.060736ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47326]
I0513 17:39:14.417118  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events: (1.737361ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47332]
I0513 17:39:14.425355  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-39
I0513 17:39:14.425403  108115 scheduler.go:448] Skip schedule deleting pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-39
I0513 17:39:14.428251  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events: (1.476929ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47332]
I0513 17:39:14.429120  108115 wrap.go:47] DELETE /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-39: (12.173297ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47326]
I0513 17:39:14.434105  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-40
I0513 17:39:14.434148  108115 scheduler.go:448] Skip schedule deleting pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-40
I0513 17:39:14.435247  108115 wrap.go:47] DELETE /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-40: (5.774836ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47326]
I0513 17:39:14.436142  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events: (1.765117ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47332]
I0513 17:39:14.438118  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-41
I0513 17:39:14.438152  108115 scheduler.go:448] Skip schedule deleting pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-41
I0513 17:39:14.439888  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events: (1.544175ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47332]
I0513 17:39:14.441596  108115 wrap.go:47] DELETE /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-41: (6.005259ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47326]
I0513 17:39:14.445097  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-42
I0513 17:39:14.445524  108115 scheduler.go:448] Skip schedule deleting pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-42
I0513 17:39:14.447418  108115 wrap.go:47] DELETE /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-42: (5.1694ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47326]
I0513 17:39:14.450045  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events: (4.174375ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47332]
I0513 17:39:14.451388  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-43
I0513 17:39:14.451416  108115 scheduler.go:448] Skip schedule deleting pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-43
I0513 17:39:14.453191  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events: (1.555718ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47332]
I0513 17:39:14.455045  108115 wrap.go:47] DELETE /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-43: (6.500097ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47326]
I0513 17:39:14.457395  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-44
I0513 17:39:14.457459  108115 scheduler.go:448] Skip schedule deleting pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-44
I0513 17:39:14.459657  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events: (1.917823ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47332]
I0513 17:39:14.460838  108115 wrap.go:47] DELETE /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-44: (5.55867ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47326]
I0513 17:39:14.464157  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-45
I0513 17:39:14.464186  108115 scheduler.go:448] Skip schedule deleting pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-45
I0513 17:39:14.466032  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events: (1.645855ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47332]
I0513 17:39:14.467729  108115 wrap.go:47] DELETE /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-45: (6.649684ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47326]
I0513 17:39:14.470971  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-46
I0513 17:39:14.471050  108115 scheduler.go:448] Skip schedule deleting pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-46
I0513 17:39:14.474852  108115 wrap.go:47] DELETE /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-46: (6.788568ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47326]
I0513 17:39:14.477756  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-47
I0513 17:39:14.477835  108115 scheduler.go:448] Skip schedule deleting pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-47
I0513 17:39:14.479039  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events: (7.652152ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47332]
I0513 17:39:14.479728  108115 wrap.go:47] DELETE /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-47: (4.321991ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47326]
I0513 17:39:14.482785  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-48
I0513 17:39:14.482826  108115 scheduler.go:448] Skip schedule deleting pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-48
I0513 17:39:14.483660  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events: (1.656621ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47332]
I0513 17:39:14.485200  108115 wrap.go:47] DELETE /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-48: (4.920043ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47326]
I0513 17:39:14.485897  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events: (1.772671ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47332]
I0513 17:39:14.489269  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-49
I0513 17:39:14.489313  108115 scheduler.go:448] Skip schedule deleting pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-49
I0513 17:39:14.491449  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events: (1.467548ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47332]
I0513 17:39:14.491999  108115 wrap.go:47] DELETE /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-49: (5.867627ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47326]
I0513 17:39:14.493238  108115 wrap.go:47] DELETE /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/rpod-0: (857.473µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47326]
I0513 17:39:14.527033  108115 wrap.go:47] DELETE /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/rpod-1: (27.392215ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47326]
I0513 17:39:14.532671  108115 wrap.go:47] DELETE /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/preemptor-pod: (4.745058ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47326]
I0513 17:39:14.536295  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-0: (2.141045ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47326]
I0513 17:39:14.539117  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-1: (1.116686ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47326]
I0513 17:39:14.541707  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-2: (917.045µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47326]
I0513 17:39:14.544016  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-3: (909.904µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47326]
I0513 17:39:14.547731  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-4: (835.863µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47326]
I0513 17:39:14.549995  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-5: (815.527µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47326]
I0513 17:39:14.552142  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-6: (741.569µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47326]
I0513 17:39:14.554741  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-7: (781.054µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47326]
I0513 17:39:14.557118  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-8: (753.703µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47326]
I0513 17:39:14.559546  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-9: (756.117µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47326]
I0513 17:39:14.562082  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-10: (895.601µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47326]
I0513 17:39:14.564462  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-11: (831.805µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47326]
I0513 17:39:14.567104  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-12: (1.073335ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47326]
I0513 17:39:14.571296  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-13: (1.969489ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47326]
I0513 17:39:14.573636  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-14: (796.928µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47326]
I0513 17:39:14.575967  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-15: (883.069µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47326]
I0513 17:39:14.578275  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-16: (846.609µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47326]
I0513 17:39:14.580543  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-17: (838.509µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47326]
I0513 17:39:14.583079  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-18: (955.519µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47326]
I0513 17:39:14.585401  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-19: (784.153µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47326]
I0513 17:39:14.587697  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-20: (766.466µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47326]
I0513 17:39:14.590005  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-21: (775.434µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47326]
I0513 17:39:14.592345  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-22: (812.506µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47326]
I0513 17:39:14.594613  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-23: (804.265µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47326]
I0513 17:39:14.596881  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-24: (816.284µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47326]
I0513 17:39:14.599081  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-25: (774.627µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47326]
I0513 17:39:14.601930  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-26: (958.018µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47326]
I0513 17:39:14.607837  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-27: (1.207203ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47326]
I0513 17:39:14.610346  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-28: (955.351µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47326]
I0513 17:39:14.612918  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-29: (869.9µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47326]
I0513 17:39:14.615302  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-30: (885.249µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47326]
I0513 17:39:14.618317  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-31: (964.306µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47326]
I0513 17:39:14.621211  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-32: (1.162376ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47326]
I0513 17:39:14.623756  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-33: (1.007216ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47326]
I0513 17:39:14.626102  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-34: (843.22µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47326]
I0513 17:39:14.628411  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-35: (714.541µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47326]
I0513 17:39:14.631764  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-36: (1.829385ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47326]
I0513 17:39:14.634022  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-37: (733.16µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47326]
I0513 17:39:14.636521  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-38: (876.026µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47326]
I0513 17:39:14.639045  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-39: (977.126µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47326]
I0513 17:39:14.641327  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-40: (789.56µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47326]
I0513 17:39:14.643447  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-41: (752.707µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47326]
I0513 17:39:14.646055  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-42: (895.758µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47326]
I0513 17:39:14.648657  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-43: (1.079527ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47326]
I0513 17:39:14.651219  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-44: (1.039768ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47326]
I0513 17:39:14.653863  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-45: (913.044µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47326]
I0513 17:39:14.656595  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-46: (1.001195ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47326]
I0513 17:39:14.658859  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-47: (774.297µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47326]
I0513 17:39:14.661410  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-48: (1.002022ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47326]
I0513 17:39:14.663985  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-49: (1.044628ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47326]
I0513 17:39:14.666431  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/rpod-0: (837.306µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47326]
I0513 17:39:14.668861  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/rpod-1: (984.267µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47326]
I0513 17:39:14.671865  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/preemptor-pod: (834.598µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47326]
I0513 17:39:14.674967  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods: (2.398983ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47326]
I0513 17:39:14.676675  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/rpod-0
I0513 17:39:14.676699  108115 scheduler.go:452] Attempting to schedule pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/rpod-0
I0513 17:39:14.676805  108115 scheduler_binder.go:256] AssumePodVolumes for pod "preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/rpod-0", node "node1"
I0513 17:39:14.676825  108115 scheduler_binder.go:266] AssumePodVolumes for pod "preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/rpod-0", node "node1": all PVCs bound and nothing to do
I0513 17:39:14.676908  108115 factory.go:711] Attempting to bind rpod-0 to node1
I0513 17:39:14.677826  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods: (2.446565ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47326]
I0513 17:39:14.678166  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/rpod-1
I0513 17:39:14.678186  108115 scheduler.go:452] Attempting to schedule pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/rpod-1
I0513 17:39:14.678299  108115 scheduler_binder.go:256] AssumePodVolumes for pod "preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/rpod-1", node "node1"
I0513 17:39:14.678320  108115 scheduler_binder.go:266] AssumePodVolumes for pod "preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/rpod-1", node "node1": all PVCs bound and nothing to do
I0513 17:39:14.678357  108115 factory.go:711] Attempting to bind rpod-1 to node1
I0513 17:39:14.680064  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/rpod-0/binding: (1.995766ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47332]
I0513 17:39:14.680064  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/rpod-1/binding: (1.182138ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47326]
I0513 17:39:14.680302  108115 scheduler.go:570] pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/rpod-1 is bound successfully on node node1, 1 nodes evaluated, 1 nodes were found feasible
I0513 17:39:14.680443  108115 scheduler.go:570] pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/rpod-0 is bound successfully on node node1, 1 nodes evaluated, 1 nodes were found feasible
I0513 17:39:14.682996  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events: (2.424057ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47332]
I0513 17:39:14.684864  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events: (1.416245ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47332]
I0513 17:39:14.787316  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/rpod-0: (1.574772ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47332]
I0513 17:39:14.894290  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/rpod-1: (5.694465ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47332]
I0513 17:39:14.894649  108115 preemption_test.go:561] Creating the preemptor pod...
I0513 17:39:14.898481  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods: (3.340292ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47332]
I0513 17:39:14.898729  108115 preemption_test.go:567] Creating additional pods...
I0513 17:39:14.898823  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/preemptor-pod
I0513 17:39:14.898837  108115 scheduler.go:452] Attempting to schedule pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/preemptor-pod
I0513 17:39:14.898941  108115 factory.go:649] Unable to schedule preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 17:39:14.898977  108115 factory.go:720] Updating pod condition for preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0513 17:39:14.902341  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/preemptor-pod: (2.46845ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47524]
I0513 17:39:14.902393  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods: (3.455666ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47332]
I0513 17:39:14.902689  108115 wrap.go:47] PUT /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/preemptor-pod/status: (2.941073ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47326]
I0513 17:39:14.904273  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events: (4.471524ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47526]
I0513 17:39:14.904721  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/preemptor-pod: (1.643612ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47332]
I0513 17:39:14.904942  108115 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
E0513 17:39:14.905034  108115 utils.go:79] pod.Status.StartTime is nil for pod rpod-1. Should not reach here.
E0513 17:39:14.905042  108115 utils.go:79] pod.Status.StartTime is nil for pod rpod-0. Should not reach here.
I0513 17:39:14.908708  108115 wrap.go:47] PUT /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/preemptor-pod/status: (3.409328ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47332]
I0513 17:39:14.909014  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods: (5.914083ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47326]
I0513 17:39:14.911545  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods: (1.615171ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47326]
I0513 17:39:14.913921  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods: (1.937313ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47326]
I0513 17:39:14.917139  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods: (2.733039ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47326]
I0513 17:39:14.919100  108115 wrap.go:47] DELETE /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/rpod-0: (9.16002ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47332]
I0513 17:39:14.919709  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-0
I0513 17:39:14.919729  108115 scheduler.go:452] Attempting to schedule pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-0
I0513 17:39:14.919851  108115 factory.go:649] Unable to schedule preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-0: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 17:39:14.919889  108115 factory.go:720] Updating pod condition for preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-0 to (PodScheduled==False, Reason=Unschedulable)
I0513 17:39:14.920688  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods: (2.639361ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47326]
I0513 17:39:14.922367  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events: (2.44623ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47332]
I0513 17:39:14.922489  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-0: (1.818404ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47530]
I0513 17:39:14.923433  108115 wrap.go:47] PUT /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-0/status: (2.65527ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47526]
I0513 17:39:14.924397  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods: (3.097863ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47326]
I0513 17:39:14.926891  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events: (4.011519ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47332]
I0513 17:39:14.927937  108115 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0513 17:39:14.928068  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-0: (2.358559ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47526]
I0513 17:39:14.928097  108115 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0513 17:39:14.928401  108115 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 17:39:14.928572  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-1
I0513 17:39:14.928599  108115 scheduler.go:452] Attempting to schedule pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-1
I0513 17:39:14.928681  108115 factory.go:649] Unable to schedule preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-1: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 17:39:14.928718  108115 factory.go:720] Updating pod condition for preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-1 to (PodScheduled==False, Reason=Unschedulable)
I0513 17:39:14.929040  108115 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0513 17:39:14.929063  108115 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0513 17:39:14.930877  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events: (1.839947ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47530]
I0513 17:39:14.931339  108115 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0513 17:39:14.931792  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods: (5.766026ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47326]
I0513 17:39:14.932273  108115 wrap.go:47] PUT /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-1/status: (3.29067ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47332]
I0513 17:39:14.932575  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-1: (2.312039ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47532]
E0513 17:39:14.933111  108115 factory.go:686] pod is already present in the activeQ
I0513 17:39:14.934084  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-1: (1.240161ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47332]
I0513 17:39:14.934253  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods: (1.53848ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47326]
I0513 17:39:14.934332  108115 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 17:39:14.934813  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-2
I0513 17:39:14.934862  108115 scheduler.go:452] Attempting to schedule pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-2
I0513 17:39:14.934999  108115 factory.go:649] Unable to schedule preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-2: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 17:39:14.935074  108115 factory.go:720] Updating pod condition for preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-2 to (PodScheduled==False, Reason=Unschedulable)
I0513 17:39:14.937244  108115 wrap.go:47] PUT /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-2/status: (1.820953ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47530]
I0513 17:39:14.937670  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-2: (1.984335ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47534]
I0513 17:39:14.937984  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods: (3.210322ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47532]
E0513 17:39:14.939031  108115 factory.go:686] pod is already present in the activeQ
I0513 17:39:14.941258  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods: (1.667751ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47532]
I0513 17:39:14.941546  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-2: (2.247246ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47530]
I0513 17:39:14.942221  108115 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 17:39:14.944117  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods: (1.369447ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47530]
I0513 17:39:14.945695  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-3
I0513 17:39:14.945714  108115 scheduler.go:452] Attempting to schedule pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-3
I0513 17:39:14.945833  108115 factory.go:649] Unable to schedule preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-3: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 17:39:14.945886  108115 factory.go:720] Updating pod condition for preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-3 to (PodScheduled==False, Reason=Unschedulable)
I0513 17:39:14.946362  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods: (1.93959ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47530]
I0513 17:39:14.952011  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-3: (5.489133ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47538]
I0513 17:39:14.952343  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods: (5.280318ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47530]
I0513 17:39:14.952638  108115 wrap.go:47] PUT /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-3/status: (6.00451ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47534]
E0513 17:39:14.953606  108115 factory.go:686] pod is already present in the activeQ
I0513 17:39:14.956642  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-3: (2.978586ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47530]
I0513 17:39:14.956839  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods: (2.295042ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47538]
I0513 17:39:14.957165  108115 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 17:39:14.957322  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-4
I0513 17:39:14.957339  108115 scheduler.go:452] Attempting to schedule pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-4
I0513 17:39:14.957433  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events: (21.011424ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47536]
I0513 17:39:14.957431  108115 factory.go:649] Unable to schedule preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-4: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 17:39:14.957482  108115 factory.go:720] Updating pod condition for preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-4 to (PodScheduled==False, Reason=Unschedulable)
I0513 17:39:14.959600  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods: (2.25818ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47538]
I0513 17:39:14.959624  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-4: (1.941551ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47536]
I0513 17:39:14.960036  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events: (1.972905ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47550]
I0513 17:39:14.960074  108115 wrap.go:47] PUT /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-4/status: (2.350687ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47534]
I0513 17:39:14.962268  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods: (1.680891ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47538]
I0513 17:39:14.962687  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events: (2.252359ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47536]
I0513 17:39:14.962738  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-4: (1.990256ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47534]
I0513 17:39:14.962963  108115 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 17:39:14.963267  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-5
I0513 17:39:14.963280  108115 scheduler.go:452] Attempting to schedule pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-5
I0513 17:39:14.963376  108115 factory.go:649] Unable to schedule preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-5: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 17:39:14.963428  108115 factory.go:720] Updating pod condition for preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-5 to (PodScheduled==False, Reason=Unschedulable)
I0513 17:39:14.964747  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-5: (1.028207ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47550]
I0513 17:39:14.965385  108115 wrap.go:47] PUT /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-5/status: (1.568254ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47534]
I0513 17:39:14.968775  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-5: (2.92578ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47534]
I0513 17:39:14.968867  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods: (2.429822ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47550]
I0513 17:39:14.969124  108115 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 17:39:14.969791  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-6
I0513 17:39:14.969806  108115 scheduler.go:452] Attempting to schedule pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-6
I0513 17:39:14.969939  108115 factory.go:649] Unable to schedule preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-6: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 17:39:14.969981  108115 factory.go:720] Updating pod condition for preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-6 to (PodScheduled==False, Reason=Unschedulable)
I0513 17:39:14.971298  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-6: (890.299µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47554]
I0513 17:39:14.972268  108115 wrap.go:47] PUT /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-6/status: (1.813392ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47550]
I0513 17:39:14.973802  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-6: (1.06236ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47550]
I0513 17:39:14.974007  108115 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 17:39:14.974283  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-7
I0513 17:39:14.974310  108115 scheduler.go:452] Attempting to schedule pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-7
I0513 17:39:14.974412  108115 factory.go:649] Unable to schedule preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-7: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 17:39:14.974446  108115 factory.go:720] Updating pod condition for preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-7 to (PodScheduled==False, Reason=Unschedulable)
I0513 17:39:14.978140  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-7: (3.368759ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47554]
I0513 17:39:14.978550  108115 wrap.go:47] PUT /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-7/status: (3.872726ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47550]
I0513 17:39:14.978900  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods: (9.489665ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47534]
I0513 17:39:14.981357  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events: (16.640906ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47552]
I0513 17:39:14.983167  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods: (3.38774ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47550]
I0513 17:39:14.983271  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-7: (3.625446ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47534]
I0513 17:39:14.983536  108115 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 17:39:14.983820  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-1
I0513 17:39:14.983839  108115 scheduler.go:452] Attempting to schedule pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-1
I0513 17:39:14.983937  108115 factory.go:649] Unable to schedule preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-1: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 17:39:14.983980  108115 factory.go:720] Updating pod condition for preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-1 to (PodScheduled==False, Reason=Unschedulable)
I0513 17:39:14.985899  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods: (2.099773ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47534]
I0513 17:39:14.986832  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-1: (2.254276ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47556]
I0513 17:39:14.987952  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods: (1.634969ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47552]
I0513 17:39:14.987959  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events: (1.674335ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47534]
I0513 17:39:14.987972  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-1: (3.412702ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47554]
I0513 17:39:14.988171  108115 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 17:39:14.988496  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-8
I0513 17:39:14.988527  108115 scheduler.go:452] Attempting to schedule pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-8
I0513 17:39:14.988637  108115 factory.go:649] Unable to schedule preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-8: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 17:39:14.988671  108115 factory.go:720] Updating pod condition for preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-8 to (PodScheduled==False, Reason=Unschedulable)
I0513 17:39:14.990017  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events: (1.683272ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47556]
I0513 17:39:14.990620  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods: (2.288899ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47552]
I0513 17:39:14.990973  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-8: (1.63607ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47560]
I0513 17:39:14.997017  108115 wrap.go:47] PUT /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-8/status: (7.379968ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47558]
I0513 17:39:14.997433  108115 wrap.go:47] PATCH /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events/ppod-1.159e4ecb019607cd: (6.963874ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47556]
I0513 17:39:14.998478  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods: (7.487941ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47552]
I0513 17:39:15.001317  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-8: (2.366713ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47558]
I0513 17:39:15.001711  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events: (1.98517ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47560]
I0513 17:39:15.002710  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods: (2.989332ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47552]
I0513 17:39:15.004257  108115 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 17:39:15.004414  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-9
I0513 17:39:15.004428  108115 scheduler.go:452] Attempting to schedule pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-9
I0513 17:39:15.004563  108115 factory.go:649] Unable to schedule preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-9: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 17:39:15.004602  108115 factory.go:720] Updating pod condition for preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-9 to (PodScheduled==False, Reason=Unschedulable)
I0513 17:39:15.004607  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods: (1.394175ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47558]
I0513 17:39:15.008793  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events: (3.320144ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47562]
I0513 17:39:15.009873  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-9: (4.816155ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47558]
I0513 17:39:15.011010  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods: (2.010272ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47564]
I0513 17:39:15.012214  108115 wrap.go:47] PUT /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-9/status: (7.023422ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47560]
I0513 17:39:15.013496  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-9: (869.077µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47560]
I0513 17:39:15.013611  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods: (2.103113ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47558]
I0513 17:39:15.013913  108115 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 17:39:15.014734  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-2
I0513 17:39:15.014753  108115 scheduler.go:452] Attempting to schedule pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-2
I0513 17:39:15.014859  108115 factory.go:649] Unable to schedule preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-2: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 17:39:15.014897  108115 factory.go:720] Updating pod condition for preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-2 to (PodScheduled==False, Reason=Unschedulable)
I0513 17:39:15.018230  108115 wrap.go:47] PATCH /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events/ppod-2.159e4ecb01f6f3e1: (2.427063ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47568]
I0513 17:39:15.018822  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods: (4.657462ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47560]
I0513 17:39:15.019534  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-2: (2.78614ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47562]
I0513 17:39:15.019835  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-2: (3.378135ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47566]
I0513 17:39:15.020207  108115 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 17:39:15.020405  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-10
I0513 17:39:15.020419  108115 scheduler.go:452] Attempting to schedule pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-10
I0513 17:39:15.020527  108115 factory.go:649] Unable to schedule preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-10: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 17:39:15.020563  108115 factory.go:720] Updating pod condition for preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-10 to (PodScheduled==False, Reason=Unschedulable)
I0513 17:39:15.023544  108115 wrap.go:47] PUT /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-10/status: (1.594612ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47566]
I0513 17:39:15.023821  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-10: (2.135196ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47568]
I0513 17:39:15.023903  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events: (2.71798ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47570]
I0513 17:39:15.025544  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods: (1.902556ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47560]
I0513 17:39:15.026604  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-10: (2.20143ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47568]
I0513 17:39:15.026891  108115 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 17:39:15.027120  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-11
I0513 17:39:15.027140  108115 scheduler.go:452] Attempting to schedule pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-11
I0513 17:39:15.027222  108115 factory.go:649] Unable to schedule preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-11: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 17:39:15.027295  108115 factory.go:720] Updating pod condition for preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-11 to (PodScheduled==False, Reason=Unschedulable)
I0513 17:39:15.030383  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events: (2.36239ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47574]
I0513 17:39:15.030895  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-11: (2.921461ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47572]
I0513 17:39:15.031126  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods: (4.069821ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47560]
I0513 17:39:15.031193  108115 wrap.go:47] PUT /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-11/status: (3.601125ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47570]
E0513 17:39:15.031781  108115 factory.go:686] pod is already present in the activeQ
I0513 17:39:15.033151  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-11: (996.951µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47572]
I0513 17:39:15.033762  108115 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 17:39:15.033950  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-12
I0513 17:39:15.033972  108115 scheduler.go:452] Attempting to schedule pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-12
I0513 17:39:15.038388  108115 factory.go:649] Unable to schedule preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-12: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 17:39:15.038462  108115 factory.go:720] Updating pod condition for preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-12 to (PodScheduled==False, Reason=Unschedulable)
I0513 17:39:15.034080  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods: (1.942938ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47574]
I0513 17:39:15.041377  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-12: (1.797184ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47580]
I0513 17:39:15.041930  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events: (1.989263ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47582]
I0513 17:39:15.041949  108115 wrap.go:47] PUT /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-12/status: (2.363273ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47572]
I0513 17:39:15.043180  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods: (3.328324ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47574]
I0513 17:39:15.043675  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-12: (1.42719ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47572]
E0513 17:39:15.043758  108115 factory.go:686] pod is already present in the activeQ
I0513 17:39:15.043887  108115 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 17:39:15.044034  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-13
I0513 17:39:15.044044  108115 scheduler.go:452] Attempting to schedule pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-13
I0513 17:39:15.044117  108115 factory.go:649] Unable to schedule preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-13: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 17:39:15.044149  108115 factory.go:720] Updating pod condition for preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-13 to (PodScheduled==False, Reason=Unschedulable)
I0513 17:39:15.045773  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods: (2.202279ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47580]
I0513 17:39:15.045792  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-13: (1.454546ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47572]
I0513 17:39:15.045963  108115 wrap.go:47] PUT /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-13/status: (1.581711ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47582]
I0513 17:39:15.047327  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events: (2.628848ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47584]
I0513 17:39:15.047958  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-13: (1.670385ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47582]
I0513 17:39:15.048238  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods: (2.112047ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47580]
I0513 17:39:15.048259  108115 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 17:39:15.048450  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-3
I0513 17:39:15.048478  108115 scheduler.go:452] Attempting to schedule pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-3
I0513 17:39:15.048575  108115 factory.go:649] Unable to schedule preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-3: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 17:39:15.048622  108115 factory.go:720] Updating pod condition for preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-3 to (PodScheduled==False, Reason=Unschedulable)
I0513 17:39:15.051057  108115 wrap.go:47] PATCH /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events/ppod-3.159e4ecb029bef13: (1.267511ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47580]
I0513 17:39:15.051192  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-3: (1.825032ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47572]
I0513 17:39:15.051384  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-3: (1.963157ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47584]
I0513 17:39:15.051668  108115 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 17:39:15.052008  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-14
I0513 17:39:15.052028  108115 scheduler.go:452] Attempting to schedule pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-14
I0513 17:39:15.052145  108115 factory.go:649] Unable to schedule preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-14: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 17:39:15.052184  108115 factory.go:720] Updating pod condition for preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-14 to (PodScheduled==False, Reason=Unschedulable)
I0513 17:39:15.052265  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods: (1.777739ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47588]
I0513 17:39:15.055520  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods: (2.594821ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47590]
I0513 17:39:15.055763  108115 wrap.go:47] PUT /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-14/status: (3.368944ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47572]
I0513 17:39:15.055817  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-14: (3.172236ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47580]
E0513 17:39:15.057022  108115 factory.go:686] pod is already present in the activeQ
I0513 17:39:15.058483  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events: (5.833997ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47588]
I0513 17:39:15.059914  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-14: (3.240395ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47572]
I0513 17:39:15.060203  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods: (2.410853ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47580]
I0513 17:39:15.060713  108115 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 17:39:15.061650  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-15
I0513 17:39:15.061663  108115 scheduler.go:452] Attempting to schedule pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-15
I0513 17:39:15.061765  108115 factory.go:649] Unable to schedule preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-15: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 17:39:15.061837  108115 factory.go:720] Updating pod condition for preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-15 to (PodScheduled==False, Reason=Unschedulable)
I0513 17:39:15.063521  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-15: (929.365µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47586]
I0513 17:39:15.063793  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods: (2.732173ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47580]
I0513 17:39:15.064226  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events: (1.730115ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47594]
I0513 17:39:15.066219  108115 wrap.go:47] PUT /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-15/status: (4.167407ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47572]
I0513 17:39:15.067091  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods: (2.474406ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47580]
I0513 17:39:15.067531  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-15: (828.395µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47594]
I0513 17:39:15.067760  108115 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 17:39:15.068014  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-16
I0513 17:39:15.068029  108115 scheduler.go:452] Attempting to schedule pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-16
I0513 17:39:15.068112  108115 factory.go:649] Unable to schedule preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-16: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 17:39:15.068149  108115 factory.go:720] Updating pod condition for preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-16 to (PodScheduled==False, Reason=Unschedulable)
I0513 17:39:15.070274  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-16: (964.807µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47598]
I0513 17:39:15.070496  108115 wrap.go:47] PUT /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-16/status: (2.105109ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47586]
I0513 17:39:15.070697  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods: (2.644342ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47580]
I0513 17:39:15.071205  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events: (2.165821ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47596]
I0513 17:39:15.071934  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-16: (937.616µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47586]
I0513 17:39:15.072346  108115 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 17:39:15.072532  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-17
I0513 17:39:15.072558  108115 scheduler.go:452] Attempting to schedule pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-17
I0513 17:39:15.072647  108115 factory.go:649] Unable to schedule preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-17: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 17:39:15.072685  108115 factory.go:720] Updating pod condition for preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-17 to (PodScheduled==False, Reason=Unschedulable)
I0513 17:39:15.072941  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods: (1.834128ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47598]
I0513 17:39:15.073955  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-17: (909.927µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47586]
I0513 17:39:15.074356  108115 wrap.go:47] PUT /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-17/status: (1.316468ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47596]
I0513 17:39:15.074987  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods: (1.515999ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47598]
I0513 17:39:15.075563  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events: (1.740778ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47600]
I0513 17:39:15.076612  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods: (1.205642ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47596]
I0513 17:39:15.077082  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-17: (897.135µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47600]
I0513 17:39:15.077338  108115 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 17:39:15.077477  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-18
I0513 17:39:15.077491  108115 scheduler.go:452] Attempting to schedule pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-18
I0513 17:39:15.077576  108115 factory.go:649] Unable to schedule preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-18: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 17:39:15.077636  108115 factory.go:720] Updating pod condition for preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-18 to (PodScheduled==False, Reason=Unschedulable)
I0513 17:39:15.078360  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods: (1.370303ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47596]
I0513 17:39:15.078930  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-18: (1.010445ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47600]
I0513 17:39:15.079841  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events: (1.284945ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47602]
I0513 17:39:15.080144  108115 wrap.go:47] PUT /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-18/status: (1.926967ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47586]
I0513 17:39:15.081021  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods: (2.167853ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47596]
I0513 17:39:15.082357  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-18: (1.696381ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47602]
I0513 17:39:15.082611  108115 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 17:39:15.082809  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-19
I0513 17:39:15.082826  108115 scheduler.go:452] Attempting to schedule pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-19
I0513 17:39:15.082922  108115 factory.go:649] Unable to schedule preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-19: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 17:39:15.082966  108115 factory.go:720] Updating pod condition for preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-19 to (PodScheduled==False, Reason=Unschedulable)
I0513 17:39:15.083672  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods: (1.571308ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47596]
I0513 17:39:15.084935  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events: (1.469095ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47604]
I0513 17:39:15.085200  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-19: (2.017922ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47602]
I0513 17:39:15.085442  108115 wrap.go:47] PUT /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-19/status: (2.245418ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47600]
I0513 17:39:15.086679  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-19: (813.543µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47602]
I0513 17:39:15.086709  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods: (2.217616ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47596]
I0513 17:39:15.086969  108115 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 17:39:15.087100  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-20
I0513 17:39:15.087115  108115 scheduler.go:452] Attempting to schedule pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-20
I0513 17:39:15.087187  108115 factory.go:649] Unable to schedule preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-20: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 17:39:15.087226  108115 factory.go:720] Updating pod condition for preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-20 to (PodScheduled==False, Reason=Unschedulable)
I0513 17:39:15.088341  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods: (1.24251ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47602]
I0513 17:39:15.089723  108115 wrap.go:47] PUT /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-20/status: (2.291547ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47604]
I0513 17:39:15.089765  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events: (1.95661ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47608]
I0513 17:39:15.091128  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods: (1.542264ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47602]
I0513 17:39:15.091606  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-20: (1.490885ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47604]
I0513 17:39:15.091949  108115 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 17:39:15.092128  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-21
I0513 17:39:15.092144  108115 scheduler.go:452] Attempting to schedule pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-21
I0513 17:39:15.092216  108115 factory.go:649] Unable to schedule preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-21: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 17:39:15.092260  108115 factory.go:720] Updating pod condition for preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-21 to (PodScheduled==False, Reason=Unschedulable)
I0513 17:39:15.092704  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-20: (1.009267ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47606]
E0513 17:39:15.092954  108115 factory.go:686] pod is already present in the activeQ
I0513 17:39:15.093719  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events: (1.056314ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47608]
I0513 17:39:15.094728  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-21: (1.413769ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47606]
I0513 17:39:15.094773  108115 wrap.go:47] PUT /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-21/status: (2.232162ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47602]
E0513 17:39:15.095585  108115 factory.go:686] pod is already present in the activeQ
I0513 17:39:15.096773  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-21: (1.040133ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47602]
I0513 17:39:15.097118  108115 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 17:39:15.097363  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-22
I0513 17:39:15.097378  108115 scheduler.go:452] Attempting to schedule pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-22
I0513 17:39:15.097485  108115 factory.go:649] Unable to schedule preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-22: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 17:39:15.097542  108115 factory.go:720] Updating pod condition for preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-22 to (PodScheduled==False, Reason=Unschedulable)
I0513 17:39:15.102475  108115 wrap.go:47] PUT /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-22/status: (4.484254ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47606]
I0513 17:39:15.102627  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-22: (4.820265ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47608]
I0513 17:39:15.108347  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-22: (5.442803ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47606]
I0513 17:39:15.108721  108115 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 17:39:15.108928  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-23
I0513 17:39:15.108951  108115 scheduler.go:452] Attempting to schedule pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-23
I0513 17:39:15.109098  108115 factory.go:649] Unable to schedule preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-23: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 17:39:15.109139  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events: (10.901713ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47610]
I0513 17:39:15.109158  108115 factory.go:720] Updating pod condition for preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-23 to (PodScheduled==False, Reason=Unschedulable)
I0513 17:39:15.110684  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-23: (1.222398ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47608]
I0513 17:39:15.111574  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events: (1.641988ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47612]
I0513 17:39:15.112405  108115 wrap.go:47] PUT /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-23/status: (2.981727ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47606]
I0513 17:39:15.113923  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-23: (1.106387ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47612]
I0513 17:39:15.114212  108115 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 17:39:15.114390  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-24
I0513 17:39:15.114408  108115 scheduler.go:452] Attempting to schedule pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-24
I0513 17:39:15.114523  108115 factory.go:649] Unable to schedule preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-24: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 17:39:15.114566  108115 factory.go:720] Updating pod condition for preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-24 to (PodScheduled==False, Reason=Unschedulable)
I0513 17:39:15.116490  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events: (1.309695ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47614]
I0513 17:39:15.116909  108115 wrap.go:47] PUT /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-24/status: (2.11186ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47608]
I0513 17:39:15.116958  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-24: (2.124551ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47612]
I0513 17:39:15.118399  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-24: (1.03835ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47608]
I0513 17:39:15.118651  108115 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 17:39:15.118825  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-25
I0513 17:39:15.118842  108115 scheduler.go:452] Attempting to schedule pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-25
I0513 17:39:15.118941  108115 factory.go:649] Unable to schedule preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-25: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 17:39:15.118986  108115 factory.go:720] Updating pod condition for preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-25 to (PodScheduled==False, Reason=Unschedulable)
I0513 17:39:15.121060  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events: (1.678379ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47614]
I0513 17:39:15.121578  108115 wrap.go:47] PUT /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-25/status: (2.182417ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47608]
I0513 17:39:15.122369  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-25: (1.603718ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47616]
E0513 17:39:15.122600  108115 factory.go:686] pod is already present in the activeQ
I0513 17:39:15.122837  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-25: (888.399µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47608]
I0513 17:39:15.123085  108115 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 17:39:15.123239  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-26
I0513 17:39:15.123255  108115 scheduler.go:452] Attempting to schedule pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-26
I0513 17:39:15.123335  108115 factory.go:649] Unable to schedule preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-26: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 17:39:15.123373  108115 factory.go:720] Updating pod condition for preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-26 to (PodScheduled==False, Reason=Unschedulable)
I0513 17:39:15.125599  108115 wrap.go:47] PUT /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-26/status: (1.99234ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47616]
I0513 17:39:15.125876  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events: (1.788213ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47618]
I0513 17:39:15.127003  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-26: (3.136098ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47614]
E0513 17:39:15.127231  108115 factory.go:686] pod is already present in the activeQ
I0513 17:39:15.128235  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-26: (1.795026ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47616]
I0513 17:39:15.128577  108115 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 17:39:15.128706  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-27
I0513 17:39:15.128730  108115 scheduler.go:452] Attempting to schedule pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-27
I0513 17:39:15.128809  108115 factory.go:649] Unable to schedule preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-27: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 17:39:15.128855  108115 factory.go:720] Updating pod condition for preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-27 to (PodScheduled==False, Reason=Unschedulable)
I0513 17:39:15.131402  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-27: (1.844617ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47618]
I0513 17:39:15.131813  108115 wrap.go:47] PUT /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-27/status: (1.989894ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47614]
E0513 17:39:15.131957  108115 factory.go:686] pod is already present in the activeQ
I0513 17:39:15.133344  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-27: (1.189954ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47614]
I0513 17:39:15.133637  108115 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 17:39:15.133679  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events: (1.624349ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47618]
I0513 17:39:15.133836  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-28
I0513 17:39:15.133855  108115 scheduler.go:452] Attempting to schedule pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-28
I0513 17:39:15.133946  108115 factory.go:649] Unable to schedule preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-28: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 17:39:15.133999  108115 factory.go:720] Updating pod condition for preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-28 to (PodScheduled==False, Reason=Unschedulable)
I0513 17:39:15.135941  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events: (1.287153ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47622]
I0513 17:39:15.136056  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-28: (1.814149ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47614]
I0513 17:39:15.136709  108115 wrap.go:47] PUT /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-28/status: (2.465373ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47620]
I0513 17:39:15.138494  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-28: (1.280813ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47614]
I0513 17:39:15.138882  108115 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 17:39:15.139093  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-29
I0513 17:39:15.139133  108115 scheduler.go:452] Attempting to schedule pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-29
I0513 17:39:15.139256  108115 factory.go:649] Unable to schedule preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-29: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 17:39:15.139359  108115 factory.go:720] Updating pod condition for preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-29 to (PodScheduled==False, Reason=Unschedulable)
I0513 17:39:15.140795  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-29: (1.092419ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47614]
I0513 17:39:15.141570  108115 wrap.go:47] PUT /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-29/status: (1.605667ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47622]
I0513 17:39:15.142236  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events: (1.273399ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47624]
I0513 17:39:15.142932  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-29: (916.484µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47622]
I0513 17:39:15.143213  108115 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 17:39:15.143379  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-30
I0513 17:39:15.143401  108115 scheduler.go:452] Attempting to schedule pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-30
I0513 17:39:15.143603  108115 factory.go:649] Unable to schedule preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-30: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 17:39:15.143705  108115 factory.go:720] Updating pod condition for preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-30 to (PodScheduled==False, Reason=Unschedulable)
I0513 17:39:15.145893  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-30: (1.967701ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47614]
I0513 17:39:15.145931  108115 wrap.go:47] PUT /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-30/status: (1.94019ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47624]
I0513 17:39:15.145922  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events: (1.631104ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47626]
E0513 17:39:15.146302  108115 factory.go:686] pod is already present in the activeQ
I0513 17:39:15.147791  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-30: (1.186413ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47624]
I0513 17:39:15.147990  108115 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 17:39:15.148169  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-11
I0513 17:39:15.148186  108115 scheduler.go:452] Attempting to schedule pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-11
I0513 17:39:15.148255  108115 factory.go:649] Unable to schedule preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-11: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 17:39:15.148298  108115 factory.go:720] Updating pod condition for preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-11 to (PodScheduled==False, Reason=Unschedulable)
I0513 17:39:15.149456  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-11: (1.008932ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47624]
I0513 17:39:15.149790  108115 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 17:39:15.149959  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-31
I0513 17:39:15.149980  108115 scheduler.go:452] Attempting to schedule pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-31
I0513 17:39:15.150146  108115 factory.go:649] Unable to schedule preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-31: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 17:39:15.150212  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-11: (1.604165ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47614]
I0513 17:39:15.150212  108115 factory.go:720] Updating pod condition for preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-31 to (PodScheduled==False, Reason=Unschedulable)
I0513 17:39:15.151959  108115 wrap.go:47] PATCH /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events/ppod-11.159e4ecb077626e6: (2.200791ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47624]
I0513 17:39:15.152089  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-31: (1.479272ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47614]
I0513 17:39:15.152097  108115 wrap.go:47] PUT /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-31/status: (1.349009ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47628]
I0513 17:39:15.153979  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events: (1.617526ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47614]
I0513 17:39:15.154556  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-31: (1.994969ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47628]
I0513 17:39:15.154857  108115 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 17:39:15.155018  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-12
I0513 17:39:15.155033  108115 scheduler.go:452] Attempting to schedule pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-12
I0513 17:39:15.155103  108115 factory.go:649] Unable to schedule preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-12: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 17:39:15.155145  108115 factory.go:720] Updating pod condition for preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-12 to (PodScheduled==False, Reason=Unschedulable)
I0513 17:39:15.156572  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-12: (1.261499ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47614]
I0513 17:39:15.156927  108115 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 17:39:15.157133  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-32
I0513 17:39:15.157170  108115 scheduler.go:452] Attempting to schedule pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-32
I0513 17:39:15.157258  108115 factory.go:649] Unable to schedule preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-32: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 17:39:15.157298  108115 factory.go:720] Updating pod condition for preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-32 to (PodScheduled==False, Reason=Unschedulable)
I0513 17:39:15.157941  108115 wrap.go:47] PATCH /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events/ppod-12.159e4ecb08204563: (2.003887ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47630]
I0513 17:39:15.158913  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-12: (1.712289ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47624]
I0513 17:39:15.159403  108115 wrap.go:47] PUT /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-32/status: (1.897928ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47614]
I0513 17:39:15.160256  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events: (1.403195ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47630]
I0513 17:39:15.160543  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-32: (2.81825ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47632]
I0513 17:39:15.160755  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-32: (885.009µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47614]
E0513 17:39:15.160770  108115 factory.go:686] pod is already present in the activeQ
I0513 17:39:15.161064  108115 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 17:39:15.161221  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-33
I0513 17:39:15.161235  108115 scheduler.go:452] Attempting to schedule pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-33
I0513 17:39:15.161325  108115 factory.go:649] Unable to schedule preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-33: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 17:39:15.161367  108115 factory.go:720] Updating pod condition for preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-33 to (PodScheduled==False, Reason=Unschedulable)
I0513 17:39:15.163304  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-33: (1.349272ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47624]
I0513 17:39:15.164031  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events: (1.634505ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47634]
I0513 17:39:15.164032  108115 wrap.go:47] PUT /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-33/status: (2.422453ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47630]
E0513 17:39:15.164775  108115 factory.go:686] pod is already present in the activeQ
I0513 17:39:15.166135  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-33: (1.044824ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47630]
I0513 17:39:15.166538  108115 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 17:39:15.166792  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-34
I0513 17:39:15.166813  108115 scheduler.go:452] Attempting to schedule pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-34
I0513 17:39:15.166905  108115 factory.go:649] Unable to schedule preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-34: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 17:39:15.166945  108115 factory.go:720] Updating pod condition for preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-34 to (PodScheduled==False, Reason=Unschedulable)
I0513 17:39:15.169133  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events: (1.632215ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47636]
I0513 17:39:15.170699  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-34: (1.232399ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47634]
I0513 17:39:15.170925  108115 wrap.go:47] PUT /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-34/status: (2.880574ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47624]
E0513 17:39:15.171005  108115 factory.go:686] pod is already present in the activeQ
I0513 17:39:15.172357  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-34: (1.009074ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47634]
I0513 17:39:15.172598  108115 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 17:39:15.172751  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-35
I0513 17:39:15.172766  108115 scheduler.go:452] Attempting to schedule pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-35
I0513 17:39:15.172859  108115 factory.go:649] Unable to schedule preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-35: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 17:39:15.172903  108115 factory.go:720] Updating pod condition for preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-35 to (PodScheduled==False, Reason=Unschedulable)
I0513 17:39:15.174125  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-35: (1.009043ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47636]
I0513 17:39:15.175556  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events: (1.877853ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47638]
I0513 17:39:15.176543  108115 wrap.go:47] PUT /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-35/status: (3.415745ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47634]
I0513 17:39:15.177858  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-35: (855.362µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47638]
I0513 17:39:15.178079  108115 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 17:39:15.178328  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-14
I0513 17:39:15.178345  108115 scheduler.go:452] Attempting to schedule pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-14
I0513 17:39:15.178447  108115 factory.go:649] Unable to schedule preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-14: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 17:39:15.178497  108115 factory.go:720] Updating pod condition for preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-14 to (PodScheduled==False, Reason=Unschedulable)
I0513 17:39:15.179548  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-14: (850.977µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47636]
I0513 17:39:15.179579  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-14: (917.302µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47638]
I0513 17:39:15.179884  108115 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 17:39:15.180064  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-36
I0513 17:39:15.180082  108115 scheduler.go:452] Attempting to schedule pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-36
I0513 17:39:15.180174  108115 factory.go:649] Unable to schedule preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-36: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 17:39:15.180220  108115 factory.go:720] Updating pod condition for preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-36 to (PodScheduled==False, Reason=Unschedulable)
I0513 17:39:15.180621  108115 wrap.go:47] PATCH /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events/ppod-14.159e4ecb08f1f409: (1.431412ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47640]
I0513 17:39:15.181531  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-36: (1.034042ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47636]
I0513 17:39:15.182456  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events: (1.196304ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47640]
I0513 17:39:15.182953  108115 wrap.go:47] PUT /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-36/status: (2.537545ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47638]
I0513 17:39:15.184629  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-36: (1.17828ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47640]
I0513 17:39:15.184900  108115 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 17:39:15.185040  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-37
I0513 17:39:15.185057  108115 scheduler.go:452] Attempting to schedule pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-37
I0513 17:39:15.185144  108115 factory.go:649] Unable to schedule preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-37: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 17:39:15.185186  108115 factory.go:720] Updating pod condition for preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-37 to (PodScheduled==False, Reason=Unschedulable)
I0513 17:39:15.186965  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-37: (856.779µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47636]
I0513 17:39:15.187406  108115 wrap.go:47] PUT /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-37/status: (1.990894ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47640]
I0513 17:39:15.187648  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events: (1.701208ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47642]
I0513 17:39:15.188923  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-37: (1.170737ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47640]
I0513 17:39:15.189225  108115 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 17:39:15.189490  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-38
I0513 17:39:15.189527  108115 scheduler.go:452] Attempting to schedule pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-38
I0513 17:39:15.189640  108115 factory.go:649] Unable to schedule preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-38: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 17:39:15.189698  108115 factory.go:720] Updating pod condition for preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-38 to (PodScheduled==False, Reason=Unschedulable)
I0513 17:39:15.190868  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-38: (939.375µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47636]
I0513 17:39:15.191764  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events: (1.406404ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47644]
I0513 17:39:15.192299  108115 wrap.go:47] PUT /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-38/status: (2.367384ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47642]
I0513 17:39:15.193093  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/preemptor-pod: (1.278273ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47636]
I0513 17:39:15.193331  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-38: (740.694µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47642]
I0513 17:39:15.193580  108115 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 17:39:15.193711  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-39
I0513 17:39:15.193728  108115 scheduler.go:452] Attempting to schedule pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-39
I0513 17:39:15.193806  108115 factory.go:649] Unable to schedule preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-39: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 17:39:15.193849  108115 factory.go:720] Updating pod condition for preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-39 to (PodScheduled==False, Reason=Unschedulable)
I0513 17:39:15.195071  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-39: (963.7µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47644]
I0513 17:39:15.196153  108115 wrap.go:47] PUT /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-39/status: (1.586132ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47636]
I0513 17:39:15.196413  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events: (1.966884ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47646]
I0513 17:39:15.197770  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-39: (909.106µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47636]
I0513 17:39:15.198012  108115 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 17:39:15.198166  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-40
I0513 17:39:15.198180  108115 scheduler.go:452] Attempting to schedule pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-40
I0513 17:39:15.198279  108115 factory.go:649] Unable to schedule preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-40: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 17:39:15.198343  108115 factory.go:720] Updating pod condition for preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-40 to (PodScheduled==False, Reason=Unschedulable)
I0513 17:39:15.200049  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-40: (1.430363ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47644]
I0513 17:39:15.200352  108115 wrap.go:47] PUT /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-40/status: (1.722119ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47636]
I0513 17:39:15.200498  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events: (1.615523ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47648]
I0513 17:39:15.201944  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-40: (985.596µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47636]
I0513 17:39:15.202179  108115 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 17:39:15.202370  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-41
I0513 17:39:15.202391  108115 scheduler.go:452] Attempting to schedule pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-41
I0513 17:39:15.202566  108115 factory.go:649] Unable to schedule preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-41: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 17:39:15.202651  108115 factory.go:720] Updating pod condition for preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-41 to (PodScheduled==False, Reason=Unschedulable)
I0513 17:39:15.204403  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-41: (1.289866ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47644]
I0513 17:39:15.205011  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events: (1.815079ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47650]
I0513 17:39:15.205287  108115 wrap.go:47] PUT /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-41/status: (2.13663ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47648]
E0513 17:39:15.205936  108115 factory.go:686] pod is already present in the activeQ
I0513 17:39:15.206694  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-41: (897.344µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47648]
I0513 17:39:15.206924  108115 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 17:39:15.207096  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-42
I0513 17:39:15.207167  108115 scheduler.go:452] Attempting to schedule pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-42
I0513 17:39:15.207277  108115 factory.go:649] Unable to schedule preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-42: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 17:39:15.207321  108115 factory.go:720] Updating pod condition for preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-42 to (PodScheduled==False, Reason=Unschedulable)
I0513 17:39:15.209225  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-42: (859.932µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47650]
I0513 17:39:15.209483  108115 wrap.go:47] PUT /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-42/status: (1.881954ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47644]
I0513 17:39:15.210437  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events: (1.418172ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47652]
I0513 17:39:15.212143  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-42: (947.692µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47652]
I0513 17:39:15.212397  108115 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 17:39:15.212581  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-43
I0513 17:39:15.212597  108115 scheduler.go:452] Attempting to schedule pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-43
I0513 17:39:15.212703  108115 factory.go:649] Unable to schedule preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-43: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 17:39:15.212744  108115 factory.go:720] Updating pod condition for preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-43 to (PodScheduled==False, Reason=Unschedulable)
I0513 17:39:15.214179  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-43: (1.204174ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47650]
I0513 17:39:15.214805  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events: (1.635969ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47654]
I0513 17:39:15.215683  108115 wrap.go:47] PUT /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-43/status: (2.701071ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47652]
I0513 17:39:15.217034  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-43: (963.56µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47654]
I0513 17:39:15.217238  108115 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 17:39:15.217397  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-44
I0513 17:39:15.217408  108115 scheduler.go:452] Attempting to schedule pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-44
I0513 17:39:15.217582  108115 factory.go:649] Unable to schedule preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-44: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 17:39:15.217627  108115 factory.go:720] Updating pod condition for preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-44 to (PodScheduled==False, Reason=Unschedulable)
I0513 17:39:15.218707  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-44: (852.459µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47654]
I0513 17:39:15.220036  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events: (1.795862ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47656]
I0513 17:39:15.221729  108115 wrap.go:47] PUT /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-44/status: (3.838438ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47650]
I0513 17:39:15.223627  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-44: (995.482µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47656]
I0513 17:39:15.223996  108115 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 17:39:15.224212  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-45
I0513 17:39:15.224247  108115 scheduler.go:452] Attempting to schedule pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-45
I0513 17:39:15.224357  108115 factory.go:649] Unable to schedule preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-45: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 17:39:15.224420  108115 factory.go:720] Updating pod condition for preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-45 to (PodScheduled==False, Reason=Unschedulable)
I0513 17:39:15.226559  108115 wrap.go:47] PUT /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-45/status: (1.87027ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47656]
I0513 17:39:15.229920  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events: (2.121198ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47658]
I0513 17:39:15.230477  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-45: (1.837942ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47656]
I0513 17:39:15.230577  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-45: (3.560642ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47654]
I0513 17:39:15.230748  108115 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
E0513 17:39:15.230944  108115 factory.go:686] pod is already present in the activeQ
I0513 17:39:15.231073  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-46
I0513 17:39:15.231089  108115 scheduler.go:452] Attempting to schedule pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-46
I0513 17:39:15.231204  108115 factory.go:649] Unable to schedule preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-46: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 17:39:15.231253  108115 factory.go:720] Updating pod condition for preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-46 to (PodScheduled==False, Reason=Unschedulable)
I0513 17:39:15.232673  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-46: (1.137347ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47658]
I0513 17:39:15.234179  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events: (2.119814ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47660]
I0513 17:39:15.234249  108115 wrap.go:47] PUT /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-46/status: (2.696556ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47656]
I0513 17:39:15.235596  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-46: (870.66µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47660]
I0513 17:39:15.235838  108115 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 17:39:15.236027  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-47
I0513 17:39:15.236043  108115 scheduler.go:452] Attempting to schedule pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-47
I0513 17:39:15.236131  108115 factory.go:649] Unable to schedule preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-47: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 17:39:15.236176  108115 factory.go:720] Updating pod condition for preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-47 to (PodScheduled==False, Reason=Unschedulable)
I0513 17:39:15.239041  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events: (2.156247ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47662]
I0513 17:39:15.239567  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-47: (2.762462ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47658]
E0513 17:39:15.239788  108115 factory.go:686] pod is already present in the activeQ
I0513 17:39:15.240276  108115 wrap.go:47] PUT /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-47/status: (3.391982ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47660]
I0513 17:39:15.241863  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-47: (1.10901ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47658]
I0513 17:39:15.242095  108115 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 17:39:15.242230  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-48
I0513 17:39:15.242244  108115 scheduler.go:452] Attempting to schedule pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-48
I0513 17:39:15.242352  108115 factory.go:649] Unable to schedule preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-48: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 17:39:15.242411  108115 factory.go:720] Updating pod condition for preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-48 to (PodScheduled==False, Reason=Unschedulable)
I0513 17:39:15.244724  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events: (1.568906ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47664]
I0513 17:39:15.244858  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-48: (1.807414ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47662]
I0513 17:39:15.245152  108115 wrap.go:47] PUT /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-48/status: (2.083524ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47658]
I0513 17:39:15.247331  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-48: (1.543941ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47664]
I0513 17:39:15.247656  108115 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 17:39:15.247849  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-20
I0513 17:39:15.247865  108115 scheduler.go:452] Attempting to schedule pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-20
I0513 17:39:15.248015  108115 factory.go:649] Unable to schedule preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-20: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 17:39:15.248063  108115 factory.go:720] Updating pod condition for preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-20 to (PodScheduled==False, Reason=Unschedulable)
I0513 17:39:15.249820  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-20: (1.463368ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47664]
I0513 17:39:15.249845  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-20: (991.582µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47662]
I0513 17:39:15.250374  108115 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 17:39:15.250608  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-49
I0513 17:39:15.250656  108115 scheduler.go:452] Attempting to schedule pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-49
I0513 17:39:15.250714  108115 wrap.go:47] PATCH /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events/ppod-20.159e4ecb0b08a049: (1.914587ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47666]
I0513 17:39:15.250904  108115 factory.go:649] Unable to schedule preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-49: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 17:39:15.251002  108115 factory.go:720] Updating pod condition for preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-49 to (PodScheduled==False, Reason=Unschedulable)
I0513 17:39:15.252608  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events: (1.369112ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47664]
I0513 17:39:15.252778  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-49: (1.623747ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47662]
I0513 17:39:15.253809  108115 wrap.go:47] PUT /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-49/status: (2.251218ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47668]
I0513 17:39:15.255290  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-49: (1.018057ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47662]
I0513 17:39:15.255582  108115 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 17:39:15.255803  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-21
I0513 17:39:15.255819  108115 scheduler.go:452] Attempting to schedule pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-21
I0513 17:39:15.255944  108115 factory.go:649] Unable to schedule preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-21: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 17:39:15.255990  108115 factory.go:720] Updating pod condition for preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-21 to (PodScheduled==False, Reason=Unschedulable)
I0513 17:39:15.257206  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-21: (996.884µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47662]
I0513 17:39:15.257349  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-21: (1.138924ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47664]
I0513 17:39:15.257609  108115 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 17:39:15.257849  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-25
I0513 17:39:15.257868  108115 scheduler.go:452] Attempting to schedule pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-25
I0513 17:39:15.257952  108115 factory.go:649] Unable to schedule preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-25: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 17:39:15.257981  108115 factory.go:720] Updating pod condition for preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-25 to (PodScheduled==False, Reason=Unschedulable)
I0513 17:39:15.259530  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-25: (1.403266ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47664]
I0513 17:39:15.259695  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-25: (1.507964ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47662]
I0513 17:39:15.259846  108115 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 17:39:15.259992  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-26
I0513 17:39:15.260009  108115 scheduler.go:452] Attempting to schedule pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-26
I0513 17:39:15.260108  108115 factory.go:649] Unable to schedule preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-26: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 17:39:15.260176  108115 factory.go:720] Updating pod condition for preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-26 to (PodScheduled==False, Reason=Unschedulable)
I0513 17:39:15.261093  108115 wrap.go:47] PATCH /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events/ppod-21.159e4ecb0b555e9e: (4.112437ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47670]
I0513 17:39:15.261489  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-26: (937.925µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47664]
I0513 17:39:15.261584  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-26: (1.159738ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47662]
I0513 17:39:15.262025  108115 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 17:39:15.262224  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-27
I0513 17:39:15.262239  108115 scheduler.go:452] Attempting to schedule pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-27
I0513 17:39:15.262322  108115 factory.go:649] Unable to schedule preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-27: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 17:39:15.262359  108115 factory.go:720] Updating pod condition for preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-27 to (PodScheduled==False, Reason=Unschedulable)
I0513 17:39:15.264035  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-27: (1.487904ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47662]
I0513 17:39:15.264039  108115 wrap.go:47] PATCH /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events/ppod-25.159e4ecb0ced3f95: (2.180984ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47670]
I0513 17:39:15.264223  108115 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 17:39:15.265048  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-30
I0513 17:39:15.265575  108115 scheduler.go:452] Attempting to schedule pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-30
I0513 17:39:15.265777  108115 factory.go:649] Unable to schedule preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-30: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 17:39:15.265814  108115 factory.go:720] Updating pod condition for preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-30 to (PodScheduled==False, Reason=Unschedulable)
I0513 17:39:15.265430  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-27: (1.797119ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47664]
I0513 17:39:15.267382  108115 wrap.go:47] PATCH /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events/ppod-26.159e4ecb0d303af9: (2.419171ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47670]
I0513 17:39:15.267710  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-30: (1.675387ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47662]
I0513 17:39:15.267967  108115 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 17:39:15.268160  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-30: (1.353522ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47664]
I0513 17:39:15.268204  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-32
I0513 17:39:15.268421  108115 scheduler.go:452] Attempting to schedule pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-32
I0513 17:39:15.268546  108115 factory.go:649] Unable to schedule preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-32: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 17:39:15.268586  108115 factory.go:720] Updating pod condition for preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-32 to (PodScheduled==False, Reason=Unschedulable)
I0513 17:39:15.270572  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-32: (1.02419ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47670]
I0513 17:39:15.272048  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-32: (2.775806ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47662]
I0513 17:39:15.272523  108115 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 17:39:15.272744  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-33
I0513 17:39:15.272787  108115 scheduler.go:452] Attempting to schedule pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-33
I0513 17:39:15.272900  108115 factory.go:649] Unable to schedule preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-33: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 17:39:15.272967  108115 factory.go:720] Updating pod condition for preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-33 to (PodScheduled==False, Reason=Unschedulable)
I0513 17:39:15.274779  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-33: (1.650855ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47670]
I0513 17:39:15.275005  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-33: (1.468673ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47662]
I0513 17:39:15.275319  108115 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 17:39:15.275903  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-34
I0513 17:39:15.275958  108115 scheduler.go:452] Attempting to schedule pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-34
I0513 17:39:15.276084  108115 factory.go:649] Unable to schedule preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-34: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 17:39:15.276117  108115 factory.go:720] Updating pod condition for preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-34 to (PodScheduled==False, Reason=Unschedulable)
I0513 17:39:15.277497  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-34: (1.170615ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47662]
I0513 17:39:15.277809  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-34: (1.554247ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47670]
I0513 17:39:15.278186  108115 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 17:39:15.278306  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-41
I0513 17:39:15.278324  108115 scheduler.go:452] Attempting to schedule pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-41
I0513 17:39:15.278420  108115 factory.go:649] Unable to schedule preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-41: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 17:39:15.278455  108115 factory.go:720] Updating pod condition for preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-41 to (PodScheduled==False, Reason=Unschedulable)
I0513 17:39:15.280208  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-41: (1.589006ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47670]
I0513 17:39:15.281094  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-41: (2.405039ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47662]
I0513 17:39:15.281636  108115 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 17:39:15.281743  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-45
I0513 17:39:15.281758  108115 scheduler.go:452] Attempting to schedule pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-45
I0513 17:39:15.281837  108115 factory.go:649] Unable to schedule preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-45: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 17:39:15.281874  108115 factory.go:720] Updating pod condition for preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-45 to (PodScheduled==False, Reason=Unschedulable)
I0513 17:39:15.283229  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-45: (1.195879ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47670]
I0513 17:39:15.283521  108115 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 17:39:15.283637  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-47
I0513 17:39:15.283649  108115 scheduler.go:452] Attempting to schedule pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-47
I0513 17:39:15.283615  108115 wrap.go:47] PATCH /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events/ppod-27.159e4ecb0d83caed: (13.232165ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47674]
I0513 17:39:15.283739  108115 factory.go:649] Unable to schedule preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-47: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 17:39:15.283781  108115 factory.go:720] Updating pod condition for preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-47 to (PodScheduled==False, Reason=Unschedulable)
I0513 17:39:15.283720  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-45: (1.088949ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47662]
I0513 17:39:15.284868  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-47: (967.024µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47670]
I0513 17:39:15.285093  108115 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 17:39:15.285961  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-47: (1.599565ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47674]
I0513 17:39:15.286971  108115 wrap.go:47] PATCH /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events/ppod-30.159e4ecb0e665ff3: (2.149668ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47662]
I0513 17:39:15.289169  108115 wrap.go:47] PATCH /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events/ppod-32.159e4ecb0f35cf7e: (1.547558ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47674]
I0513 17:39:15.291108  108115 wrap.go:47] PATCH /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events/ppod-33.159e4ecb0f73f925: (1.358773ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47674]
I0513 17:39:15.293139  108115 wrap.go:47] PATCH /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events/ppod-34.159e4ecb0fc915ba: (1.381604ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47674]
I0513 17:39:15.294670  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/preemptor-pod: (1.015224ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47674]
I0513 17:39:15.295052  108115 wrap.go:47] PATCH /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events/ppod-41.159e4ecb11e9d23d: (1.317258ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47670]
I0513 17:39:15.297095  108115 wrap.go:47] PATCH /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events/ppod-45.159e4ecb1335fe17: (1.448083ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47670]
I0513 17:39:15.303951  108115 wrap.go:47] PATCH /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events/ppod-47.159e4ecb13e967fa: (6.240002ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47670]
I0513 17:39:15.396015  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/preemptor-pod: (2.14619ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47670]
I0513 17:39:15.495264  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/preemptor-pod: (1.480384ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47670]
I0513 17:39:15.595483  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/preemptor-pod: (1.656946ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47670]
I0513 17:39:15.695348  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/preemptor-pod: (1.560957ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47670]
I0513 17:39:15.795395  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/preemptor-pod: (1.591755ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47670]
I0513 17:39:15.895533  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/preemptor-pod: (1.775513ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47670]
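
The repeated GET /pods/preemptor-pod requests above are the test client polling the preemptor's status while the low-priority ppods stay unschedulable. A minimal sketch of that kind of poll with client-go follows; the helper name, intervals, and the recent client-go signatures are assumptions for illustration, not code taken from the test itself.

package main

import (
	"context"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForPodScheduled polls until the named pod has been bound to a node.
// Illustrative helper only; not part of the integration test.
func waitForPodScheduled(cs kubernetes.Interface, ns, name string) error {
	return wait.PollImmediate(100*time.Millisecond, 30*time.Second, func() (bool, error) {
		pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		// A pod counts as scheduled once the scheduler has set spec.nodeName.
		return pod.Spec.NodeName != "", nil
	})
}
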
I0513 17:39:15.927309  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/preemptor-pod
I0513 17:39:15.927348  108115 scheduler.go:452] Attempting to schedule pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/preemptor-pod
I0513 17:39:15.927565  108115 scheduler_binder.go:256] AssumePodVolumes for pod "preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/preemptor-pod", node "node1"
I0513 17:39:15.927585  108115 scheduler_binder.go:266] AssumePodVolumes for pod "preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/preemptor-pod", node "node1": all PVCs bound and nothing to do
I0513 17:39:15.927651  108115 factory.go:711] Attempting to bind preemptor-pod to node1
I0513 17:39:15.928047  108115 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0513 17:39:15.928267  108115 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0513 17:39:15.928401  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-0
I0513 17:39:15.928416  108115 scheduler.go:452] Attempting to schedule pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-0
I0513 17:39:15.928567  108115 factory.go:649] Unable to schedule preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-0: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 17:39:15.928630  108115 factory.go:720] Updating pod condition for preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-0 to (PodScheduled==False, Reason=Unschedulable)
I0513 17:39:15.929213  108115 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0513 17:39:15.929256  108115 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0513 17:39:15.930538  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/preemptor-pod/binding: (2.498949ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47670]
I0513 17:39:15.930573  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-0: (993.536µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47674]
I0513 17:39:15.930694  108115 scheduler.go:570] pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/preemptor-pod is bound successfully on node node1, 1 nodes evaluated, 1 nodes were found feasible
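
The POST to the pods/preemptor-pod/binding subresource a few lines above is how the pod actually gets placed on node1. The same subresource can also be exercised directly through client-go's Bind helper; the sketch below is illustrative only, assuming a recent client-go signature rather than the factory.go code path used here.

package main

import (
	"context"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// bindPodToNode POSTs a Binding object to the pod's binding subresource,
// mirroring what the scheduler does when it places a pod on a node.
// Illustrative only; names and values are assumptions.
func bindPodToNode(cs kubernetes.Interface, ns, pod, node string) error {
	binding := &v1.Binding{
		ObjectMeta: metav1.ObjectMeta{Namespace: ns, Name: pod},
		Target:     v1.ObjectReference{Kind: "Node", Name: node},
	}
	return cs.CoreV1().Pods(ns).Bind(context.TODO(), binding, metav1.CreateOptions{})
}
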
I0513 17:39:15.930853  108115 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 17:39:15.931498  108115 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0513 17:39:15.932065  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-0: (1.705537ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47678]
I0513 17:39:15.932361  108115 wrap.go:47] PATCH /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events/ppod-0.159e4ecb010f3c70: (2.753707ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47680]
I0513 17:39:15.934657  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events: (1.352952ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47674]
I0513 17:39:16.001027  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/preemptor-pod: (1.94647ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47674]
I0513 17:39:16.001484  108115 preemption_test.go:583] Check unschedulable pods still exist and were never scheduled...
I0513 17:39:16.003155  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-0: (1.273438ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47674]
I0513 17:39:16.004661  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-1: (1.119932ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47674]
I0513 17:39:16.006145  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-2: (1.13557ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47674]
I0513 17:39:16.007569  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-3: (945.525µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47674]
I0513 17:39:16.008906  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-4: (1.024511ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47674]
I0513 17:39:16.010377  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-5: (938.076µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47674]
I0513 17:39:16.011799  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-6: (998.136µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47674]
I0513 17:39:16.013094  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-7: (856.632µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47674]
I0513 17:39:16.014405  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-8: (1.001848ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47674]
I0513 17:39:16.015726  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-9: (974.068µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47674]
I0513 17:39:16.016939  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-10: (910.594µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47674]
I0513 17:39:16.018247  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-11: (961.093µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47674]
I0513 17:39:16.019553  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-12: (914.168µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47674]
I0513 17:39:16.020707  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-13: (846.154µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47674]
I0513 17:39:16.022158  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-14: (1.061994ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47674]
I0513 17:39:16.023319  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-15: (884.517µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47674]
I0513 17:39:16.024579  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-16: (927.599µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47674]
I0513 17:39:16.025741  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-17: (858.613µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47674]
I0513 17:39:16.027299  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-18: (1.039104ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47674]
I0513 17:39:16.028625  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-19: (1.017329ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47674]
I0513 17:39:16.030153  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-20: (964.032µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47674]
I0513 17:39:16.031926  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-21: (1.260086ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47674]
I0513 17:39:16.033482  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-22: (1.118363ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47674]
I0513 17:39:16.035332  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-23: (1.264194ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47674]
I0513 17:39:16.036921  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-24: (1.080451ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47674]
I0513 17:39:16.038768  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-25: (1.268557ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47674]
I0513 17:39:16.042820  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-26: (3.574381ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47674]
I0513 17:39:16.044655  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-27: (1.305834ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47674]
I0513 17:39:16.048072  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-28: (1.0585ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47674]
I0513 17:39:16.049594  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-29: (1.123847ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47674]
I0513 17:39:16.052851  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-30: (2.375079ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47674]
I0513 17:39:16.054333  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-31: (1.134849ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47674]
I0513 17:39:16.058352  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-32: (2.396771ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47674]
I0513 17:39:16.062382  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-33: (1.828261ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47674]
I0513 17:39:16.065563  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-34: (1.858432ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47674]
I0513 17:39:16.066989  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-35: (1.011524ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47674]
I0513 17:39:16.068561  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-36: (1.02377ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47674]
I0513 17:39:16.070165  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-37: (1.22042ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47674]
I0513 17:39:16.072380  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-38: (1.734057ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47674]
I0513 17:39:16.074037  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-39: (1.24154ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47674]
I0513 17:39:16.076027  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-40: (1.479638ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47674]
I0513 17:39:16.077264  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-41: (895.753µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47674]
I0513 17:39:16.078981  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-42: (1.175039ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47674]
I0513 17:39:16.080270  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-43: (926.056µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47674]
I0513 17:39:16.083328  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-44: (2.460029ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47674]
I0513 17:39:16.084849  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-45: (933.195µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47674]
I0513 17:39:16.086173  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-46: (964.308µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47674]
I0513 17:39:16.088227  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-47: (1.531895ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47674]
I0513 17:39:16.089812  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-48: (1.107549ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47674]
I0513 17:39:16.092661  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-49: (2.504098ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47674]
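
The block of GET requests above is the "check unschedulable pods" step re-reading every ppod and asserting it is still pending. A hedged sketch of what such a check can look like with client-go follows; the helper name and error messages are illustrative, not the test's actual implementation.

package main

import (
	"context"
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// verifyStillUnscheduled re-reads a pod and confirms it was never bound:
// spec.nodeName must be empty and PodScheduled must be False, matching the
// Unschedulable conditions the scheduler wrote earlier in this log.
// Illustrative helper; not the integration test's actual code.
func verifyStillUnscheduled(cs kubernetes.Interface, ns, name string) error {
	pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
	if err != nil {
		return err
	}
	if pod.Spec.NodeName != "" {
		return fmt.Errorf("pod %s/%s was unexpectedly scheduled to %s", ns, name, pod.Spec.NodeName)
	}
	for _, cond := range pod.Status.Conditions {
		if cond.Type == v1.PodScheduled && cond.Status != v1.ConditionFalse {
			return fmt.Errorf("pod %s/%s has PodScheduled=%s, want False", ns, name, cond.Status)
		}
	}
	return nil
}
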
I0513 17:39:16.092883  108115 preemption_test.go:598] Cleaning up all pods...
I0513 17:39:16.097796  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-0
I0513 17:39:16.097829  108115 scheduler.go:448] Skip schedule deleting pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-0
I0513 17:39:16.100268  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events: (2.19638ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47670]
I0513 17:39:16.101305  108115 wrap.go:47] DELETE /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-0: (8.238123ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47674]
I0513 17:39:16.108218  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-1
I0513 17:39:16.108264  108115 scheduler.go:448] Skip schedule deleting pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-1
I0513 17:39:16.110898  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events: (2.323385ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47670]
I0513 17:39:16.113082  108115 wrap.go:47] DELETE /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-1: (8.137027ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47674]
I0513 17:39:16.116734  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-2
I0513 17:39:16.116774  108115 scheduler.go:448] Skip schedule deleting pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-2
I0513 17:39:16.119404  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events: (2.107823ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47670]
I0513 17:39:16.119908  108115 wrap.go:47] DELETE /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-2: (6.501543ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47674]
I0513 17:39:16.124739  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-3
I0513 17:39:16.124786  108115 scheduler.go:448] Skip schedule deleting pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-3
I0513 17:39:16.127265  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events: (2.235639ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47670]
I0513 17:39:16.129200  108115 wrap.go:47] DELETE /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-3: (8.847077ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47674]
I0513 17:39:16.134062  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-4
I0513 17:39:16.134568  108115 scheduler.go:448] Skip schedule deleting pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-4
I0513 17:39:16.136536  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events: (1.577071ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47670]
I0513 17:39:16.137088  108115 wrap.go:47] DELETE /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-4: (7.346873ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47674]
I0513 17:39:16.141020  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-5
I0513 17:39:16.141050  108115 scheduler.go:448] Skip schedule deleting pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-5
I0513 17:39:16.142729  108115 wrap.go:47] DELETE /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-5: (5.326537ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47674]
I0513 17:39:16.144478  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events: (3.042552ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47670]
I0513 17:39:16.147004  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-6
I0513 17:39:16.147044  108115 scheduler.go:448] Skip schedule deleting pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-6
I0513 17:39:16.148180  108115 wrap.go:47] DELETE /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-6: (4.972701ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47674]
I0513 17:39:16.148740  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events: (1.334654ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47670]
I0513 17:39:16.153945  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-7
I0513 17:39:16.153982  108115 scheduler.go:448] Skip schedule deleting pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-7
I0513 17:39:16.157831  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events: (2.94539ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47670]
I0513 17:39:16.158149  108115 wrap.go:47] DELETE /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-7: (9.648137ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47674]
I0513 17:39:16.161039  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-8
I0513 17:39:16.161079  108115 scheduler.go:448] Skip schedule deleting pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-8
I0513 17:39:16.164026  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events: (2.619561ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47670]
I0513 17:39:16.164715  108115 wrap.go:47] DELETE /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-8: (6.312307ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47674]
I0513 17:39:16.168241  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-9
I0513 17:39:16.168292  108115 scheduler.go:448] Skip schedule deleting pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-9
I0513 17:39:16.174700  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events: (6.153201ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47670]
I0513 17:39:16.177874  108115 wrap.go:47] DELETE /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-9: (12.662324ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47674]
I0513 17:39:16.185496  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-10
I0513 17:39:16.186283  108115 scheduler.go:448] Skip schedule deleting pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-10
I0513 17:39:16.187955  108115 wrap.go:47] DELETE /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-10: (9.785892ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47674]
I0513 17:39:16.191603  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-11
I0513 17:39:16.191668  108115 scheduler.go:448] Skip schedule deleting pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-11
I0513 17:39:16.192426  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events: (5.750012ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47670]
I0513 17:39:16.193598  108115 wrap.go:47] DELETE /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-11: (5.311343ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47674]
I0513 17:39:16.196015  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events: (2.785578ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47670]
I0513 17:39:16.197443  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-12
I0513 17:39:16.197518  108115 scheduler.go:448] Skip schedule deleting pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-12
I0513 17:39:16.198719  108115 wrap.go:47] DELETE /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-12: (4.827097ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47674]
I0513 17:39:16.200559  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events: (2.338732ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47670]
I0513 17:39:16.201892  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-13
I0513 17:39:16.201931  108115 scheduler.go:448] Skip schedule deleting pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-13
I0513 17:39:16.203732  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events: (1.605065ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47670]
I0513 17:39:16.203977  108115 wrap.go:47] DELETE /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-13: (4.966281ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47674]
I0513 17:39:16.210387  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-14
I0513 17:39:16.210433  108115 scheduler.go:448] Skip schedule deleting pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-14
I0513 17:39:16.210655  108115 wrap.go:47] DELETE /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-14: (6.379729ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47674]
I0513 17:39:16.214228  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events: (3.455665ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47670]
I0513 17:39:16.215374  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-15
I0513 17:39:16.215412  108115 scheduler.go:448] Skip schedule deleting pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-15
I0513 17:39:16.216951  108115 wrap.go:47] DELETE /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-15: (5.900915ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47674]
I0513 17:39:16.220401  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events: (4.527981ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47670]
I0513 17:39:16.220420  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-16
I0513 17:39:16.220694  108115 scheduler.go:448] Skip schedule deleting pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-16
I0513 17:39:16.222576  108115 wrap.go:47] DELETE /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-16: (5.329098ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47674]
I0513 17:39:16.224133  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events: (2.808693ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47670]
I0513 17:39:16.226139  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-17
I0513 17:39:16.226274  108115 scheduler.go:448] Skip schedule deleting pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-17
I0513 17:39:16.228376  108115 wrap.go:47] DELETE /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-17: (4.935649ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47674]
I0513 17:39:16.230081  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events: (3.180804ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47670]
I0513 17:39:16.233398  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-18
I0513 17:39:16.234285  108115 scheduler.go:448] Skip schedule deleting pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-18
I0513 17:39:16.235199  108115 wrap.go:47] DELETE /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-18: (5.591665ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47674]
I0513 17:39:16.236802  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events: (2.709061ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47670]
I0513 17:39:16.239496  108115 wrap.go:47] DELETE /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-19: (3.610287ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47674]
I0513 17:39:16.244037  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-19
I0513 17:39:16.244138  108115 scheduler.go:448] Skip schedule deleting pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-19
I0513 17:39:16.245182  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-20
I0513 17:39:16.245601  108115 scheduler.go:448] Skip schedule deleting pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-20
I0513 17:39:16.245917  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events: (1.419408ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47670]
I0513 17:39:16.250024  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events: (3.064689ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47670]
I0513 17:39:16.253762  108115 wrap.go:47] DELETE /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-20: (13.469889ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47674]
I0513 17:39:16.257706  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-21
I0513 17:39:16.257756  108115 scheduler.go:448] Skip schedule deleting pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-21
I0513 17:39:16.260655  108115 wrap.go:47] DELETE /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-21: (6.104414ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47674]
I0513 17:39:16.260894  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events: (2.82344ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47670]
I0513 17:39:16.265962  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-22
I0513 17:39:16.266014  108115 scheduler.go:448] Skip schedule deleting pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-22
I0513 17:39:16.272458  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events: (6.164994ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47670]
I0513 17:39:16.274212  108115 wrap.go:47] DELETE /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-22: (13.14986ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47674]
I0513 17:39:16.277381  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-23
I0513 17:39:16.277549  108115 scheduler.go:448] Skip schedule deleting pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-23
I0513 17:39:16.280630  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events: (2.507603ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47670]
I0513 17:39:16.282048  108115 wrap.go:47] DELETE /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-23: (7.535032ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47674]
I0513 17:39:16.286761  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-24
I0513 17:39:16.286799  108115 scheduler.go:448] Skip schedule deleting pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-24
I0513 17:39:16.288340  108115 wrap.go:47] DELETE /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-24: (4.799253ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47674]
I0513 17:39:16.289653  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events: (2.537623ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47670]
I0513 17:39:16.292938  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-25
I0513 17:39:16.292979  108115 scheduler.go:448] Skip schedule deleting pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-25
I0513 17:39:16.295540  108115 wrap.go:47] DELETE /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-25: (6.645206ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47674]
I0513 17:39:16.298906  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events: (5.47428ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47670]
I0513 17:39:16.302926  108115 wrap.go:47] DELETE /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-26: (6.902141ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47674]
I0513 17:39:16.303393  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-26
I0513 17:39:16.303449  108115 scheduler.go:448] Skip schedule deleting pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-26
I0513 17:39:16.307272  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events: (2.719894ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47670]
I0513 17:39:16.308126  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-27
I0513 17:39:16.308188  108115 scheduler.go:448] Skip schedule deleting pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-27
I0513 17:39:16.309883  108115 wrap.go:47] DELETE /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-27: (5.644388ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47674]
I0513 17:39:16.310824  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events: (2.313551ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47670]
I0513 17:39:16.315288  108115 wrap.go:47] DELETE /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-28: (4.910821ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47674]
I0513 17:39:16.320144  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-29
I0513 17:39:16.320194  108115 scheduler.go:448] Skip schedule deleting pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-29
I0513 17:39:16.321967  108115 wrap.go:47] DELETE /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-29: (6.247899ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47674]
I0513 17:39:16.325006  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events: (3.514849ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47670]
I0513 17:39:16.326782  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-30
I0513 17:39:16.326819  108115 scheduler.go:448] Skip schedule deleting pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-30
I0513 17:39:16.327763  108115 wrap.go:47] DELETE /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-30: (4.160061ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47674]
I0513 17:39:16.329430  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events: (2.130969ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47670]
I0513 17:39:16.330898  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-31
I0513 17:39:16.330935  108115 scheduler.go:448] Skip schedule deleting pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-31
I0513 17:39:16.333209  108115 wrap.go:47] DELETE /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-31: (5.136551ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47674]
I0513 17:39:16.334090  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events: (2.795352ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47670]
I0513 17:39:16.358676  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-32
I0513 17:39:16.358726  108115 scheduler.go:448] Skip schedule deleting pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-32
I0513 17:39:16.360460  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events: (1.4287ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47670]
I0513 17:39:16.360666  108115 wrap.go:47] DELETE /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-32: (27.098096ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47674]
I0513 17:39:16.364128  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-33
I0513 17:39:16.364159  108115 scheduler.go:448] Skip schedule deleting pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-33
I0513 17:39:16.365926  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events: (1.526957ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47670]
I0513 17:39:16.367337  108115 wrap.go:47] DELETE /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-33: (5.693482ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47674]
I0513 17:39:16.370204  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-34
I0513 17:39:16.370241  108115 scheduler.go:448] Skip schedule deleting pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-34
I0513 17:39:16.372641  108115 wrap.go:47] DELETE /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-34: (4.788454ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47674]
I0513 17:39:16.374638  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events: (4.1973ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47670]
I0513 17:39:16.379182  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-35
I0513 17:39:16.379222  108115 scheduler.go:448] Skip schedule deleting pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-35
I0513 17:39:16.380326  108115 wrap.go:47] DELETE /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-35: (7.342104ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47674]
I0513 17:39:16.381210  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events: (1.32621ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47670]
I0513 17:39:16.390274  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-36
I0513 17:39:16.390312  108115 scheduler.go:448] Skip schedule deleting pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-36
I0513 17:39:16.393051  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events: (2.475278ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47670]
I0513 17:39:16.394259  108115 wrap.go:47] DELETE /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-36: (12.583951ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47674]
I0513 17:39:16.397442  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-37
I0513 17:39:16.397488  108115 scheduler.go:448] Skip schedule deleting pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-37
I0513 17:39:16.401158  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events: (3.414566ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47670]
I0513 17:39:16.401646  108115 wrap.go:47] DELETE /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-37: (6.96499ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47674]
I0513 17:39:16.404746  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-38
I0513 17:39:16.404785  108115 scheduler.go:448] Skip schedule deleting pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-38
I0513 17:39:16.406659  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events: (1.637636ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47670]
I0513 17:39:16.406661  108115 wrap.go:47] DELETE /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-38: (4.746888ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47674]
I0513 17:39:16.409867  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-39
I0513 17:39:16.409913  108115 scheduler.go:448] Skip schedule deleting pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-39
I0513 17:39:16.411216  108115 wrap.go:47] DELETE /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-39: (4.030291ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47670]
I0513 17:39:16.412330  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events: (2.150393ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47674]
I0513 17:39:16.413947  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-40
I0513 17:39:16.413980  108115 scheduler.go:448] Skip schedule deleting pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-40
I0513 17:39:16.415640  108115 wrap.go:47] DELETE /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-40: (4.150238ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47670]
I0513 17:39:16.415851  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events: (1.676335ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47674]
I0513 17:39:16.418441  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-41
I0513 17:39:16.420666  108115 wrap.go:47] DELETE /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-41: (4.282708ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47670]
I0513 17:39:16.421899  108115 scheduler.go:448] Skip schedule deleting pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-41
I0513 17:39:16.423339  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-42
I0513 17:39:16.423370  108115 scheduler.go:448] Skip schedule deleting pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-42
I0513 17:39:16.424572  108115 wrap.go:47] DELETE /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-42: (3.492388ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47670]
I0513 17:39:16.425541  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events: (1.807331ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47674]
I0513 17:39:16.427395  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events: (1.40044ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47674]
I0513 17:39:16.427575  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-43
I0513 17:39:16.427617  108115 scheduler.go:448] Skip schedule deleting pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-43
I0513 17:39:16.428849  108115 wrap.go:47] DELETE /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-43: (3.912146ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47670]
I0513 17:39:16.431386  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-44
I0513 17:39:16.431451  108115 scheduler.go:448] Skip schedule deleting pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-44
I0513 17:39:16.433740  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events: (3.820718ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47674]
I0513 17:39:16.435431  108115 wrap.go:47] DELETE /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-44: (6.337994ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47670]
I0513 17:39:16.436318  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events: (2.101712ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47674]
I0513 17:39:16.443457  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-45
I0513 17:39:16.443696  108115 scheduler.go:448] Skip schedule deleting pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-45
I0513 17:39:16.444891  108115 wrap.go:47] DELETE /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-45: (7.77681ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47670]
I0513 17:39:16.446705  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events: (2.053996ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47674]
I0513 17:39:16.448552  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-46
I0513 17:39:16.448588  108115 scheduler.go:448] Skip schedule deleting pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-46
I0513 17:39:16.452981  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events: (1.362705ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47674]
I0513 17:39:16.453041  108115 wrap.go:47] DELETE /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-46: (7.682174ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47670]
I0513 17:39:16.455656  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-47
I0513 17:39:16.455696  108115 scheduler.go:448] Skip schedule deleting pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-47
I0513 17:39:16.457709  108115 wrap.go:47] DELETE /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-47: (4.371056ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47670]
I0513 17:39:16.457995  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events: (2.016132ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47674]
I0513 17:39:16.462698  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-48
I0513 17:39:16.462777  108115 scheduler.go:448] Skip schedule deleting pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-48
I0513 17:39:16.463832  108115 wrap.go:47] DELETE /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-48: (4.979386ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47670]
I0513 17:39:16.464154  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events: (1.090363ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47674]
I0513 17:39:16.467046  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-49
I0513 17:39:16.467577  108115 scheduler.go:448] Skip schedule deleting pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-49
I0513 17:39:16.467842  108115 wrap.go:47] DELETE /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-49: (3.444818ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47670]
I0513 17:39:16.469159  108115 wrap.go:47] DELETE /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/rpod-0: (957.419µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47670]
I0513 17:39:16.469567  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events: (1.670178ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47674]
I0513 17:39:16.473404  108115 wrap.go:47] DELETE /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/rpod-1: (3.835573ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47670]
I0513 17:39:16.477853  108115 wrap.go:47] DELETE /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/preemptor-pod: (4.004837ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47670]
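
The DELETE calls above remove the remaining test pods, and the 404 GETs that follow confirm each ppod is really gone. A minimal delete-then-wait-for-NotFound sketch with client-go; the helper, intervals, and signatures are assumptions for illustration only.

package main

import (
	"context"
	"time"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// deleteAndConfirm deletes a pod and polls until the apiserver returns
// NotFound (the 404 responses seen below). Illustrative only.
func deleteAndConfirm(cs kubernetes.Interface, ns, name string) error {
	if err := cs.CoreV1().Pods(ns).Delete(context.TODO(), name, metav1.DeleteOptions{}); err != nil && !apierrors.IsNotFound(err) {
		return err
	}
	return wait.PollImmediate(100*time.Millisecond, 30*time.Second, func() (bool, error) {
		_, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if apierrors.IsNotFound(err) {
			return true, nil
		}
		return false, err
	})
}
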
I0513 17:39:16.480315  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-0: (961.72µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47670]
I0513 17:39:16.486311  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-1: (1.151507ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47670]
I0513 17:39:16.489055  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-2: (1.105691ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47670]
I0513 17:39:16.492738  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-3: (1.972823ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47670]
I0513 17:39:16.495617  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-4: (1.216564ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47670]
I0513 17:39:16.502444  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-5: (4.637237ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47670]
I0513 17:39:16.505446  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-6: (1.206691ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47670]
I0513 17:39:16.507853  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-7: (847.065µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47670]
I0513 17:39:16.510461  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-8: (1.013308ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47670]
I0513 17:39:16.513339  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-9: (998.878µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47670]
I0513 17:39:16.521428  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-10: (1.434044ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47670]
I0513 17:39:16.524036  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-11: (1.005868ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47670]
I0513 17:39:16.526560  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-12: (958.306µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47670]
I0513 17:39:16.529004  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-13: (898.521µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47670]
I0513 17:39:16.531583  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-14: (963.149µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47670]
I0513 17:39:16.534367  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-15: (1.159754ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47670]
I0513 17:39:16.536688  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-16: (831.374µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47670]
I0513 17:39:16.539174  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-17: (908.363µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47670]
I0513 17:39:16.541686  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-18: (901.525µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47670]
I0513 17:39:16.544091  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-19: (817.921µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47670]
I0513 17:39:16.546637  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-20: (957.893µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47670]
I0513 17:39:16.549117  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-21: (863.832µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47670]
I0513 17:39:16.551667  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-22: (979.94µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47670]
I0513 17:39:16.554008  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-23: (812.338µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47670]
I0513 17:39:16.556305  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-24: (803.96µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47670]
I0513 17:39:16.558980  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-25: (920.834µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47670]
I0513 17:39:16.561495  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-26: (881.886µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47670]
I0513 17:39:16.564926  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-27: (1.144324ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47670]
I0513 17:39:16.567521  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-28: (1.039895ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47670]
I0513 17:39:16.570257  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-29: (1.150422ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47670]
I0513 17:39:16.573335  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-30: (900.92µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47670]
I0513 17:39:16.575771  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-31: (1.016742ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47670]
I0513 17:39:16.578360  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-32: (997.32µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47670]
I0513 17:39:16.582608  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-33: (1.882989ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47670]
I0513 17:39:16.588333  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-34: (4.222587ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47670]
I0513 17:39:16.590857  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-35: (929.394µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47670]
I0513 17:39:16.593272  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-36: (803.535µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47670]
I0513 17:39:16.596181  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-37: (1.141674ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47670]
I0513 17:39:16.603615  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-38: (5.925335ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47670]
I0513 17:39:16.610402  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-39: (4.22066ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47670]
I0513 17:39:16.613901  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-40: (1.282555ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47670]
I0513 17:39:16.616795  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-41: (1.173795ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47670]
I0513 17:39:16.619425  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-42: (1.004097ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47670]
I0513 17:39:16.621973  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-43: (979.752µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47670]
I0513 17:39:16.624424  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-44: (1.016889ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47670]
I0513 17:39:16.626785  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-45: (941.312µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47670]
I0513 17:39:16.629448  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-46: (1.222201ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47670]
I0513 17:39:16.633009  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-47: (2.025135ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47670]
I0513 17:39:16.635579  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-48: (854.152µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47670]
I0513 17:39:16.638257  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-49: (1.213959ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47670]
I0513 17:39:16.642074  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/rpod-0: (1.503094ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47670]
I0513 17:39:16.644805  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/rpod-1: (1.032173ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47670]
I0513 17:39:16.647609  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/preemptor-pod: (1.184495ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47670]
I0513 17:39:16.650018  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods: (2.025078ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47670]
I0513 17:39:16.650668  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/rpod-0
I0513 17:39:16.650720  108115 scheduler.go:452] Attempting to schedule pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/rpod-0
I0513 17:39:16.650832  108115 scheduler_binder.go:256] AssumePodVolumes for pod "preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/rpod-0", node "node1"
I0513 17:39:16.650851  108115 scheduler_binder.go:266] AssumePodVolumes for pod "preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/rpod-0", node "node1": all PVCs bound and nothing to do
I0513 17:39:16.650917  108115 factory.go:711] Attempting to bind rpod-0 to node1
I0513 17:39:16.658529  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods: (7.349646ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47670]
I0513 17:39:16.659855  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/rpod-0/binding: (7.656913ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47674]
I0513 17:39:16.660889  108115 scheduler.go:570] pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/rpod-0 is bound successfully on node node1, 1 nodes evaluated, 1 nodes were found feasible
I0513 17:39:16.662229  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/rpod-1
I0513 17:39:16.662262  108115 scheduler.go:452] Attempting to schedule pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/rpod-1
I0513 17:39:16.662407  108115 scheduler_binder.go:256] AssumePodVolumes for pod "preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/rpod-1", node "node1"
I0513 17:39:16.662428  108115 scheduler_binder.go:266] AssumePodVolumes for pod "preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/rpod-1", node "node1": all PVCs bound and nothing to do
I0513 17:39:16.662895  108115 factory.go:711] Attempting to bind rpod-1 to node1
I0513 17:39:16.664020  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events: (2.66974ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47674]
I0513 17:39:16.665901  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/rpod-1/binding: (2.262227ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47670]
I0513 17:39:16.666066  108115 scheduler.go:570] pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/rpod-1 is bound successfully on node node1, 1 nodes evaluated, 1 nodes were found feasible
I0513 17:39:16.668077  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events: (1.785972ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47670]
I0513 17:39:16.762240  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/rpod-0: (3.005812ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47670]
I0513 17:39:16.866144  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/rpod-1: (1.809412ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47670]
I0513 17:39:16.866475  108115 preemption_test.go:561] Creating the preemptor pod...
I0513 17:39:16.868648  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods: (1.89527ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47670]
I0513 17:39:16.868927  108115 preemption_test.go:567] Creating additional pods...
I0513 17:39:16.869076  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/preemptor-pod
I0513 17:39:16.869100  108115 scheduler.go:452] Attempting to schedule pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/preemptor-pod
I0513 17:39:16.869216  108115 factory.go:649] Unable to schedule preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 17:39:16.869260  108115 factory.go:720] Updating pod condition for preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0513 17:39:16.872257  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods: (2.978616ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47670]
I0513 17:39:16.872659  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/preemptor-pod: (1.822109ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47852]
I0513 17:39:16.874177  108115 wrap.go:47] PUT /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/preemptor-pod/status: (4.71534ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47674]
I0513 17:39:16.874278  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events: (3.440283ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47854]
I0513 17:39:16.874810  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods: (2.139177ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47670]
I0513 17:39:16.876709  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/preemptor-pod: (1.143448ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47852]
I0513 17:39:16.877119  108115 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
E0513 17:39:16.877351  108115 utils.go:79] pod.Status.StartTime is nil for pod rpod-1. Should not reach here.
E0513 17:39:16.877366  108115 utils.go:79] pod.Status.StartTime is nil for pod rpod-0. Should not reach here.
I0513 17:39:16.879726  108115 wrap.go:47] PUT /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/preemptor-pod/status: (2.051182ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47852]
I0513 17:39:16.880760  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods: (5.169014ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47854]
I0513 17:39:16.883094  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods: (1.886738ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47854]
I0513 17:39:16.884910  108115 wrap.go:47] DELETE /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/rpod-0: (4.452035ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47852]
I0513 17:39:16.885193  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-0
I0513 17:39:16.885210  108115 scheduler.go:452] Attempting to schedule pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-0
I0513 17:39:16.885333  108115 factory.go:649] Unable to schedule preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-0: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 17:39:16.885373  108115 factory.go:720] Updating pod condition for preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-0 to (PodScheduled==False, Reason=Unschedulable)
I0513 17:39:16.887214  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-0: (1.207694ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47858]
I0513 17:39:16.887887  108115 wrap.go:47] PUT /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-0/status: (1.978679ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47856]
I0513 17:39:16.888317  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods: (3.916444ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47854]
I0513 17:39:16.889458  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-0: (1.151583ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47856]
I0513 17:39:16.889691  108115 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 17:39:16.889843  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-1
I0513 17:39:16.889889  108115 scheduler.go:452] Attempting to schedule pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-1
I0513 17:39:16.890061  108115 factory.go:649] Unable to schedule preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-1: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 17:39:16.890137  108115 factory.go:720] Updating pod condition for preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-1 to (PodScheduled==False, Reason=Unschedulable)
I0513 17:39:16.891860  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events: (4.254941ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47852]
I0513 17:39:16.893924  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods: (5.091258ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47854]
I0513 17:39:16.894129  108115 wrap.go:47] PUT /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-1/status: (3.652192ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47858]
I0513 17:39:16.894579  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-1: (995.554µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47856]
E0513 17:39:16.895156  108115 factory.go:686] pod is already present in the activeQ
I0513 17:39:16.895288  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events: (2.980781ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47852]
I0513 17:39:16.896433  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods: (1.600872ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47854]
I0513 17:39:16.897025  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-1: (2.335857ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47858]
I0513 17:39:16.897329  108115 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 17:39:16.897553  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-2
I0513 17:39:16.897610  108115 scheduler.go:452] Attempting to schedule pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-2
I0513 17:39:16.897814  108115 factory.go:649] Unable to schedule preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-2: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 17:39:16.897881  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events: (1.721706ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47852]
I0513 17:39:16.897896  108115 factory.go:720] Updating pod condition for preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-2 to (PodScheduled==False, Reason=Unschedulable)
I0513 17:39:16.899308  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-2: (1.204078ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47858]
I0513 17:39:16.901874  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events: (3.010799ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47860]
I0513 17:39:16.901874  108115 wrap.go:47] PUT /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-2/status: (3.690325ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47856]
I0513 17:39:16.902048  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods: (5.064304ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47854]
I0513 17:39:16.904895  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-2: (1.805215ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47860]
I0513 17:39:16.905223  108115 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 17:39:16.905383  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-3
I0513 17:39:16.905402  108115 scheduler.go:452] Attempting to schedule pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-3
I0513 17:39:16.905532  108115 factory.go:649] Unable to schedule preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-3: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 17:39:16.905577  108115 factory.go:720] Updating pod condition for preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-3 to (PodScheduled==False, Reason=Unschedulable)
I0513 17:39:16.905612  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods: (2.492064ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47858]
I0513 17:39:16.908639  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events: (1.843524ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47866]
I0513 17:39:16.909276  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods: (2.99291ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47864]
I0513 17:39:16.909260  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-3: (2.186576ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47858]
I0513 17:39:16.909415  108115 wrap.go:47] PUT /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-3/status: (2.492093ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47860]
E0513 17:39:16.909563  108115 factory.go:686] pod is already present in the activeQ
I0513 17:39:16.911033  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-3: (1.156946ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47866]
I0513 17:39:16.911612  108115 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 17:39:16.911845  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-4
I0513 17:39:16.911858  108115 scheduler.go:452] Attempting to schedule pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-4
I0513 17:39:16.911960  108115 factory.go:649] Unable to schedule preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-4: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 17:39:16.912013  108115 factory.go:720] Updating pod condition for preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-4 to (PodScheduled==False, Reason=Unschedulable)
I0513 17:39:16.912037  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods: (2.063193ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47862]
I0513 17:39:16.913624  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-4: (997.94µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47862]
I0513 17:39:16.914112  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods: (1.459339ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47870]
I0513 17:39:16.915020  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events: (1.722216ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47872]
I0513 17:39:16.916530  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods: (1.528751ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47870]
I0513 17:39:16.916533  108115 wrap.go:47] PUT /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-4/status: (3.69967ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47866]
I0513 17:39:16.918075  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-4: (1.084689ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47872]
I0513 17:39:16.918420  108115 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 17:39:16.918591  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-0
I0513 17:39:16.918603  108115 scheduler.go:452] Attempting to schedule pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-0
I0513 17:39:16.918678  108115 factory.go:649] Unable to schedule preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-0: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 17:39:16.918726  108115 factory.go:720] Updating pod condition for preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-0 to (PodScheduled==False, Reason=Unschedulable)
I0513 17:39:16.926667  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-0: (1.242155ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47872]
I0513 17:39:16.926758  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-0: (1.723075ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47862]
I0513 17:39:16.927029  108115 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 17:39:16.927274  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-5
I0513 17:39:16.927322  108115 scheduler.go:452] Attempting to schedule pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-5
I0513 17:39:16.927519  108115 factory.go:649] Unable to schedule preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-5: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 17:39:16.927607  108115 factory.go:720] Updating pod condition for preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-5 to (PodScheduled==False, Reason=Unschedulable)
I0513 17:39:16.928418  108115 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0513 17:39:16.928647  108115 wrap.go:47] PATCH /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events/ppod-0.159e4ecb76363711: (2.30229ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47874]
I0513 17:39:16.928726  108115 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0513 17:39:16.929609  108115 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0513 17:39:16.929634  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods: (2.124781ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47872]
I0513 17:39:16.929955  108115 wrap.go:47] PUT /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-5/status: (1.777673ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47878]
I0513 17:39:16.930109  108115 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0513 17:39:16.930446  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-5: (2.613907ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47862]
I0513 17:39:16.931816  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events: (2.144463ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47874]
I0513 17:39:16.932536  108115 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0513 17:39:16.932769  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-5: (1.675771ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47878]
I0513 17:39:16.933004  108115 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 17:39:16.933244  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-1
I0513 17:39:16.933281  108115 scheduler.go:452] Attempting to schedule pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-1
I0513 17:39:16.933529  108115 factory.go:649] Unable to schedule preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-1: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 17:39:16.933576  108115 factory.go:720] Updating pod condition for preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-1 to (PodScheduled==False, Reason=Unschedulable)
I0513 17:39:16.934112  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods: (3.947879ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47872]
I0513 17:39:16.935010  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-1: (1.18999ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47874]
I0513 17:39:16.935303  108115 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 17:39:16.935444  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-6
I0513 17:39:16.935521  108115 scheduler.go:452] Attempting to schedule pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-6
I0513 17:39:16.935661  108115 factory.go:649] Unable to schedule preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-6: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 17:39:16.935733  108115 factory.go:720] Updating pod condition for preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-6 to (PodScheduled==False, Reason=Unschedulable)
I0513 17:39:16.936098  108115 wrap.go:47] PATCH /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events/ppod-1.159e4ecb767edd4a: (1.802394ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47880]
I0513 17:39:16.936252  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-1: (2.42942ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47862]
I0513 17:39:16.938274  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-6: (1.553215ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47880]
I0513 17:39:16.938930  108115 wrap.go:47] PUT /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-6/status: (2.562363ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47872]
I0513 17:39:16.940770  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events: (4.042437ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47862]
I0513 17:39:16.942196  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods: (6.878616ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47874]
I0513 17:39:16.942407  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-6: (3.009454ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47884]
I0513 17:39:16.943463  108115 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 17:39:16.944199  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-7
I0513 17:39:16.944220  108115 scheduler.go:452] Attempting to schedule pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-7
I0513 17:39:16.944344  108115 factory.go:649] Unable to schedule preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-7: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 17:39:16.944388  108115 factory.go:720] Updating pod condition for preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-7 to (PodScheduled==False, Reason=Unschedulable)
I0513 17:39:16.945211  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods: (2.383141ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47862]
I0513 17:39:16.946594  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-7: (1.392131ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47886]
I0513 17:39:16.946983  108115 wrap.go:47] PUT /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-7/status: (2.253944ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47882]
I0513 17:39:16.947892  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events: (2.485655ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47888]
I0513 17:39:16.948343  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods: (2.091735ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47862]
I0513 17:39:16.949093  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-7: (1.144045ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47882]
I0513 17:39:16.949558  108115 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 17:39:16.949816  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-8
I0513 17:39:16.949859  108115 scheduler.go:452] Attempting to schedule pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-8
I0513 17:39:16.949999  108115 factory.go:649] Unable to schedule preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-8: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 17:39:16.950066  108115 factory.go:720] Updating pod condition for preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-8 to (PodScheduled==False, Reason=Unschedulable)
I0513 17:39:16.950657  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods: (1.622615ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47888]
I0513 17:39:16.953923  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events: (2.544839ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47890]
I0513 17:39:16.954301  108115 wrap.go:47] PUT /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-8/status: (3.428914ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47882]
I0513 17:39:16.954016  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-8: (3.158639ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47886]
E0513 17:39:16.955054  108115 factory.go:686] pod is already present in the activeQ
I0513 17:39:16.955377  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods: (1.634428ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47888]
I0513 17:39:16.956166  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-8: (1.109732ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47882]
I0513 17:39:16.956408  108115 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 17:39:16.956603  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-9
I0513 17:39:16.956621  108115 scheduler.go:452] Attempting to schedule pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-9
I0513 17:39:16.956725  108115 factory.go:649] Unable to schedule preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-9: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 17:39:16.956925  108115 factory.go:720] Updating pod condition for preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-9 to (PodScheduled==False, Reason=Unschedulable)
I0513 17:39:16.959559  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods: (3.480852ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47888]
I0513 17:39:16.960382  108115 wrap.go:47] PUT /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-9/status: (2.732107ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47882]
I0513 17:39:16.962657  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-9: (1.250018ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47882]
I0513 17:39:16.962997  108115 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 17:39:16.963327  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods: (2.241262ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47888]
I0513 17:39:16.963585  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events: (5.950317ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47886]
I0513 17:39:16.963957  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-3
I0513 17:39:16.963973  108115 scheduler.go:452] Attempting to schedule pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-3
I0513 17:39:16.964108  108115 factory.go:649] Unable to schedule preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-3: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 17:39:16.964142  108115 factory.go:720] Updating pod condition for preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-3 to (PodScheduled==False, Reason=Unschedulable)
I0513 17:39:16.966192  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-3: (1.454596ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47882]
I0513 17:39:16.966403  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-3: (2.063393ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47886]
I0513 17:39:16.966676  108115 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 17:39:16.966913  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-10
I0513 17:39:16.966958  108115 scheduler.go:452] Attempting to schedule pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-10
I0513 17:39:16.967160  108115 factory.go:649] Unable to schedule preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-10: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 17:39:16.967242  108115 factory.go:720] Updating pod condition for preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-10 to (PodScheduled==False, Reason=Unschedulable)
I0513 17:39:16.967379  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods: (3.676127ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47888]
I0513 17:39:16.967479  108115 wrap.go:47] PATCH /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events/ppod-3.159e4ecb776a7bdf: (2.279825ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47894]
I0513 17:39:16.969688  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-10: (1.939338ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47888]
I0513 17:39:16.970220  108115 wrap.go:47] PUT /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-10/status: (2.655946ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47882]
I0513 17:39:16.971069  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods: (2.961843ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47886]
I0513 17:39:16.971205  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-9: (12.877196ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47892]
E0513 17:39:16.971836  108115 factory.go:686] pod is already present in the activeQ
I0513 17:39:16.974360  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events: (3.613291ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47888]
I0513 17:39:16.974756  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods: (3.325544ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47886]
I0513 17:39:16.974946  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-10: (4.272551ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47894]
I0513 17:39:16.975186  108115 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 17:39:16.976608  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-11
I0513 17:39:16.976624  108115 scheduler.go:452] Attempting to schedule pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-11
I0513 17:39:16.976747  108115 factory.go:649] Unable to schedule preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-11: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 17:39:16.976783  108115 factory.go:720] Updating pod condition for preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-11 to (PodScheduled==False, Reason=Unschedulable)
I0513 17:39:16.979820  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods: (4.064171ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47888]
I0513 17:39:16.980246  108115 wrap.go:47] PUT /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-11/status: (3.223718ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47892]
I0513 17:39:16.980446  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-11: (2.83705ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47896]
E0513 17:39:16.981427  108115 factory.go:686] pod is already present in the activeQ
I0513 17:39:16.981760  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events: (4.158655ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47898]
I0513 17:39:16.983673  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods: (2.943003ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47888]
I0513 17:39:16.984769  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-11: (3.476706ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47892]
I0513 17:39:16.984990  108115 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 17:39:16.985219  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-12
I0513 17:39:16.985247  108115 scheduler.go:452] Attempting to schedule pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-12
I0513 17:39:16.985365  108115 factory.go:649] Unable to schedule preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-12: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 17:39:16.985418  108115 factory.go:720] Updating pod condition for preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-12 to (PodScheduled==False, Reason=Unschedulable)
I0513 17:39:16.987952  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events: (1.385558ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47900]
I0513 17:39:16.988602  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods: (3.856325ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47898]
I0513 17:39:16.992015  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-12: (5.552947ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47896]
I0513 17:39:16.992419  108115 wrap.go:47] PUT /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-12/status: (5.854552ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47892]
I0513 17:39:16.994892  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-12: (1.600096ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47892]
I0513 17:39:16.998686  108115 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 17:39:17.000085  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-5
I0513 17:39:17.000165  108115 scheduler.go:452] Attempting to schedule pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-5
I0513 17:39:17.000322  108115 factory.go:649] Unable to schedule preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-5: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 17:39:17.000397  108115 factory.go:720] Updating pod condition for preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-5 to (PodScheduled==False, Reason=Unschedulable)
I0513 17:39:17.000907  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods: (11.818542ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47898]
I0513 17:39:17.003729  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-5: (1.603473ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47896]
I0513 17:39:17.004122  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-5: (2.43709ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47900]
I0513 17:39:17.004601  108115 wrap.go:47] PATCH /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events/ppod-5.159e4ecb78ba99b7: (3.206556ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47902]
I0513 17:39:17.005138  108115 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 17:39:17.006401  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-13
I0513 17:39:17.006497  108115 scheduler.go:452] Attempting to schedule pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-13
I0513 17:39:17.006678  108115 factory.go:649] Unable to schedule preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-13: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 17:39:17.006768  108115 factory.go:720] Updating pod condition for preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-13 to (PodScheduled==False, Reason=Unschedulable)
I0513 17:39:17.016459  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-13: (9.428987ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47902]
I0513 17:39:17.016838  108115 wrap.go:47] PUT /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-13/status: (9.577978ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47896]
I0513 17:39:17.018250  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events: (10.675466ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47904]
I0513 17:39:17.019608  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods: (16.915743ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47898]
I0513 17:39:17.019979  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-13: (1.527684ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47896]
I0513 17:39:17.020178  108115 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 17:39:17.020354  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-14
I0513 17:39:17.020371  108115 scheduler.go:452] Attempting to schedule pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-14
I0513 17:39:17.020496  108115 factory.go:649] Unable to schedule preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-14: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 17:39:17.020556  108115 factory.go:720] Updating pod condition for preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-14 to (PodScheduled==False, Reason=Unschedulable)
I0513 17:39:17.021911  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods: (1.849417ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47904]
I0513 17:39:17.022266  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-14: (912.723µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47902]
I0513 17:39:17.022861  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events: (1.585096ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47906]
I0513 17:39:17.025941  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods: (3.26403ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47904]
I0513 17:39:17.028242  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods: (1.553116ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47906]
I0513 17:39:17.030406  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods: (1.580483ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47906]
I0513 17:39:17.030730  108115 wrap.go:47] PUT /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-14/status: (9.89754ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47896]
I0513 17:39:17.032407  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-14: (1.140482ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47896]
I0513 17:39:17.032864  108115 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 17:39:17.033133  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-15
I0513 17:39:17.033185  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods: (2.231792ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47906]
I0513 17:39:17.033200  108115 scheduler.go:452] Attempting to schedule pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-15
I0513 17:39:17.033393  108115 factory.go:649] Unable to schedule preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-15: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 17:39:17.033445  108115 factory.go:720] Updating pod condition for preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-15 to (PodScheduled==False, Reason=Unschedulable)
I0513 17:39:17.034982  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-15: (1.250308ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47902]
I0513 17:39:17.035909  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events: (1.702379ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47910]
I0513 17:39:17.036066  108115 wrap.go:47] PUT /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-15/status: (2.222702ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47896]
I0513 17:39:17.036880  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods: (2.816517ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47908]
I0513 17:39:17.041309  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods: (3.980799ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47908]
I0513 17:39:17.042319  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-15: (5.776718ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47896]
I0513 17:39:17.042789  108115 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 17:39:17.043028  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-16
I0513 17:39:17.043053  108115 scheduler.go:452] Attempting to schedule pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-16
I0513 17:39:17.043189  108115 factory.go:649] Unable to schedule preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-16: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 17:39:17.043328  108115 factory.go:720] Updating pod condition for preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-16 to (PodScheduled==False, Reason=Unschedulable)
I0513 17:39:17.049038  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods: (5.046217ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47908]
I0513 17:39:17.049056  108115 wrap.go:47] PUT /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-16/status: (1.703778ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47902]
I0513 17:39:17.050615  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-16: (2.534176ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47912]
I0513 17:39:17.049782  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events: (2.367292ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47914]
E0513 17:39:17.051174  108115 factory.go:686] pod is already present in the activeQ
I0513 17:39:17.052474  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-16: (1.097944ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47914]
I0513 17:39:17.052782  108115 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 17:39:17.052880  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods: (1.816192ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47908]
I0513 17:39:17.053410  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-17
I0513 17:39:17.053448  108115 scheduler.go:452] Attempting to schedule pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-17
I0513 17:39:17.053632  108115 factory.go:649] Unable to schedule preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-17: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 17:39:17.053711  108115 factory.go:720] Updating pod condition for preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-17 to (PodScheduled==False, Reason=Unschedulable)
I0513 17:39:17.056367  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods: (2.689858ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47914]
I0513 17:39:17.056709  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-17: (2.631372ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47912]
I0513 17:39:17.057217  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events: (2.784441ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47918]
I0513 17:39:17.058236  108115 wrap.go:47] PUT /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-17/status: (2.192733ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47916]
I0513 17:39:17.060967  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-17: (2.088176ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47916]
I0513 17:39:17.061609  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods: (3.498078ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47912]
I0513 17:39:17.062601  108115 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 17:39:17.062779  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-18
I0513 17:39:17.062800  108115 scheduler.go:452] Attempting to schedule pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-18
I0513 17:39:17.062910  108115 factory.go:649] Unable to schedule preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-18: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 17:39:17.063940  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods: (1.684936ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47916]
I0513 17:39:17.064813  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-18: (1.16628ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47918]
I0513 17:39:17.065102  108115 factory.go:720] Updating pod condition for preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-18 to (PodScheduled==False, Reason=Unschedulable)
I0513 17:39:17.066228  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods: (1.643102ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47916]
I0513 17:39:17.067753  108115 wrap.go:47] PUT /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-18/status: (2.445604ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47918]
I0513 17:39:17.068006  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events: (1.249736ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47916]
I0513 17:39:17.069037  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods: (1.984042ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47920]
I0513 17:39:17.070957  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-18: (991.623µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47918]
I0513 17:39:17.071341  108115 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 17:39:17.071486  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods: (1.923373ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47922]
I0513 17:39:17.071621  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-8
I0513 17:39:17.071655  108115 scheduler.go:452] Attempting to schedule pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-8
I0513 17:39:17.071847  108115 factory.go:649] Unable to schedule preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-8: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 17:39:17.071942  108115 factory.go:720] Updating pod condition for preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-8 to (PodScheduled==False, Reason=Unschedulable)
I0513 17:39:17.073289  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-8: (1.131694ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47916]
I0513 17:39:17.074328  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods: (2.292763ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47918]
I0513 17:39:17.075240  108115 wrap.go:47] PATCH /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events/ppod-8.159e4ecb7a1151f2: (2.388264ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47924]
I0513 17:39:17.076397  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-8: (1.409478ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47926]
I0513 17:39:17.076603  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods: (1.781629ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47918]
I0513 17:39:17.077218  108115 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 17:39:17.077619  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-19
I0513 17:39:17.077669  108115 scheduler.go:452] Attempting to schedule pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-19
I0513 17:39:17.077855  108115 factory.go:649] Unable to schedule preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-19: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 17:39:17.077926  108115 factory.go:720] Updating pod condition for preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-19 to (PodScheduled==False, Reason=Unschedulable)
I0513 17:39:17.080308  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-19: (2.074816ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47926]
I0513 17:39:17.080685  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods: (3.496683ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47924]
I0513 17:39:17.081955  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events: (3.68944ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47916]
I0513 17:39:17.082547  108115 wrap.go:47] PUT /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-19/status: (2.481454ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47928]
I0513 17:39:17.082822  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods: (1.674212ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47924]
I0513 17:39:17.083826  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-19: (923.099µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47928]
I0513 17:39:17.084027  108115 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 17:39:17.084224  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-20
I0513 17:39:17.084240  108115 scheduler.go:452] Attempting to schedule pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-20
I0513 17:39:17.084339  108115 factory.go:649] Unable to schedule preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-20: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 17:39:17.084392  108115 factory.go:720] Updating pod condition for preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-20 to (PodScheduled==False, Reason=Unschedulable)
I0513 17:39:17.087640  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-20: (2.284741ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47926]
I0513 17:39:17.088565  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods: (4.777913ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47924]
I0513 17:39:17.088565  108115 wrap.go:47] PUT /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-20/status: (3.450805ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47928]
I0513 17:39:17.089019  108115 cacher.go:739] cacher (*core.Pod): 1 objects queued in incoming channel.
I0513 17:39:17.091267  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-20: (1.382108ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47924]
I0513 17:39:17.091699  108115 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 17:39:17.091929  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-9
I0513 17:39:17.091971  108115 scheduler.go:452] Attempting to schedule pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-9
I0513 17:39:17.092005  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events: (1.708575ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47916]
I0513 17:39:17.092235  108115 factory.go:649] Unable to schedule preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-9: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 17:39:17.092664  108115 factory.go:720] Updating pod condition for preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-9 to (PodScheduled==False, Reason=Unschedulable)
I0513 17:39:17.094254  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-9: (1.227421ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47926]
I0513 17:39:17.096034  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-9: (2.692603ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47924]
I0513 17:39:17.096279  108115 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 17:39:17.096491  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-21
I0513 17:39:17.096531  108115 scheduler.go:452] Attempting to schedule pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-21
I0513 17:39:17.096634  108115 wrap.go:47] PATCH /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events/ppod-9.159e4ecb7a79f5cf: (2.998961ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47930]
I0513 17:39:17.096650  108115 factory.go:649] Unable to schedule preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-21: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 17:39:17.096690  108115 factory.go:720] Updating pod condition for preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-21 to (PodScheduled==False, Reason=Unschedulable)
I0513 17:39:17.098146  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-21: (1.184825ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47924]
I0513 17:39:17.101770  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events: (4.392306ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47930]
I0513 17:39:17.101777  108115 wrap.go:47] PUT /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-21/status: (4.827628ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47926]
I0513 17:39:17.103860  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-21: (1.200949ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47930]
I0513 17:39:17.104177  108115 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 17:39:17.104348  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-22
I0513 17:39:17.104363  108115 scheduler.go:452] Attempting to schedule pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-22
I0513 17:39:17.104527  108115 factory.go:649] Unable to schedule preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-22: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 17:39:17.104580  108115 factory.go:720] Updating pod condition for preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-22 to (PodScheduled==False, Reason=Unschedulable)
I0513 17:39:17.106295  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-22: (1.417605ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47924]
I0513 17:39:17.106729  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events: (1.547678ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47932]
I0513 17:39:17.106957  108115 wrap.go:47] PUT /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-22/status: (2.086582ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47930]
I0513 17:39:17.108425  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-22: (1.108063ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47932]
I0513 17:39:17.108756  108115 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 17:39:17.108943  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-23
I0513 17:39:17.108972  108115 scheduler.go:452] Attempting to schedule pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-23
I0513 17:39:17.109066  108115 factory.go:649] Unable to schedule preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-23: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 17:39:17.109123  108115 factory.go:720] Updating pod condition for preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-23 to (PodScheduled==False, Reason=Unschedulable)
I0513 17:39:17.110655  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-23: (1.124201ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47932]
I0513 17:39:17.111592  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events: (2.150652ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47924]
I0513 17:39:17.114292  108115 wrap.go:47] PUT /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-23/status: (2.610852ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47932]
I0513 17:39:17.117150  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-23: (1.518709ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47932]
I0513 17:39:17.117405  108115 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 17:39:17.117718  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-24
I0513 17:39:17.117741  108115 scheduler.go:452] Attempting to schedule pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-24
I0513 17:39:17.117884  108115 factory.go:649] Unable to schedule preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-24: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 17:39:17.117945  108115 factory.go:720] Updating pod condition for preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-24 to (PodScheduled==False, Reason=Unschedulable)
I0513 17:39:17.120165  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-24: (1.808952ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47924]
I0513 17:39:17.122663  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events: (2.080816ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47924]
I0513 17:39:17.122762  108115 wrap.go:47] PUT /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-24/status: (4.477931ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47932]
I0513 17:39:17.124546  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-24: (1.25974ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47932]
I0513 17:39:17.124872  108115 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 17:39:17.125031  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-11
I0513 17:39:17.125045  108115 scheduler.go:452] Attempting to schedule pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-11
I0513 17:39:17.125193  108115 factory.go:649] Unable to schedule preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-11: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 17:39:17.125321  108115 factory.go:720] Updating pod condition for preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-11 to (PodScheduled==False, Reason=Unschedulable)
I0513 17:39:17.126810  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-11: (1.245841ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47934]
I0513 17:39:17.126819  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-11: (1.248732ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47924]
I0513 17:39:17.127230  108115 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 17:39:17.127395  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-25
I0513 17:39:17.127421  108115 scheduler.go:452] Attempting to schedule pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-25
I0513 17:39:17.127571  108115 factory.go:649] Unable to schedule preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-25: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 17:39:17.127624  108115 factory.go:720] Updating pod condition for preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-25 to (PodScheduled==False, Reason=Unschedulable)
I0513 17:39:17.128845  108115 wrap.go:47] PATCH /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events/ppod-11.159e4ecb7ba90620: (1.967177ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47936]
I0513 17:39:17.129732  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-25: (1.736208ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47924]
I0513 17:39:17.130032  108115 wrap.go:47] PUT /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-25/status: (2.017559ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47934]
I0513 17:39:17.131802  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events: (2.309159ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47936]
I0513 17:39:17.131856  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-25: (1.42738ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47934]
I0513 17:39:17.132083  108115 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 17:39:17.132250  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-26
I0513 17:39:17.132267  108115 scheduler.go:452] Attempting to schedule pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-26
I0513 17:39:17.132349  108115 factory.go:649] Unable to schedule preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-26: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 17:39:17.132389  108115 factory.go:720] Updating pod condition for preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-26 to (PodScheduled==False, Reason=Unschedulable)
I0513 17:39:17.135055  108115 wrap.go:47] PUT /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-26/status: (2.432245ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47934]
I0513 17:39:17.135938  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events: (3.178974ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47924]
I0513 17:39:17.136869  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-26: (1.400487ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47934]
I0513 17:39:17.136912  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-26: (1.203063ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47940]
E0513 17:39:17.137164  108115 factory.go:686] pod is already present in the activeQ
I0513 17:39:17.137428  108115 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 17:39:17.137629  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-27
I0513 17:39:17.137648  108115 scheduler.go:452] Attempting to schedule pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-27
I0513 17:39:17.137780  108115 factory.go:649] Unable to schedule preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-27: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 17:39:17.137828  108115 factory.go:720] Updating pod condition for preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-27 to (PodScheduled==False, Reason=Unschedulable)
I0513 17:39:17.141726  108115 wrap.go:47] PUT /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-27/status: (3.656181ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47924]
I0513 17:39:17.141826  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events: (3.687677ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47938]
I0513 17:39:17.142044  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-27: (3.435767ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47942]
I0513 17:39:17.143687  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-27: (1.243624ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47924]
I0513 17:39:17.143921  108115 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 17:39:17.144565  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-28
I0513 17:39:17.144584  108115 scheduler.go:452] Attempting to schedule pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-28
I0513 17:39:17.144688  108115 factory.go:649] Unable to schedule preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-28: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 17:39:17.144731  108115 factory.go:720] Updating pod condition for preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-28 to (PodScheduled==False, Reason=Unschedulable)
I0513 17:39:17.147360  108115 wrap.go:47] PUT /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-28/status: (2.409234ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47924]
I0513 17:39:17.147360  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events: (2.013287ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47944]
I0513 17:39:17.147615  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-28: (2.325476ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47938]
I0513 17:39:17.149366  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-28: (1.240235ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47944]
I0513 17:39:17.149621  108115 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 17:39:17.149778  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-29
I0513 17:39:17.149793  108115 scheduler.go:452] Attempting to schedule pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-29
I0513 17:39:17.149905  108115 factory.go:649] Unable to schedule preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-29: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 17:39:17.149952  108115 factory.go:720] Updating pod condition for preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-29 to (PodScheduled==False, Reason=Unschedulable)
I0513 17:39:17.152624  108115 wrap.go:47] PUT /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-29/status: (2.280841ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47944]
I0513 17:39:17.152665  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-29: (828.937µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47946]
E0513 17:39:17.152873  108115 factory.go:686] pod is already present in the activeQ
I0513 17:39:17.153481  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events: (3.05701ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47938]
I0513 17:39:17.154218  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-29: (1.215439ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47946]
I0513 17:39:17.154484  108115 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 17:39:17.154682  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-30
I0513 17:39:17.154699  108115 scheduler.go:452] Attempting to schedule pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-30
I0513 17:39:17.154784  108115 factory.go:649] Unable to schedule preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-30: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 17:39:17.154821  108115 factory.go:720] Updating pod condition for preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-30 to (PodScheduled==False, Reason=Unschedulable)
I0513 17:39:17.156058  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-30: (910.675µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47944]
I0513 17:39:17.156743  108115 wrap.go:47] PUT /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-30/status: (1.710499ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47938]
I0513 17:39:17.157201  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events: (1.742574ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47948]
I0513 17:39:17.158460  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-30: (919.177µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47938]
I0513 17:39:17.159889  108115 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 17:39:17.160559  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-31
I0513 17:39:17.160579  108115 scheduler.go:452] Attempting to schedule pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-31
I0513 17:39:17.160698  108115 factory.go:649] Unable to schedule preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-31: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 17:39:17.160742  108115 factory.go:720] Updating pod condition for preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-31 to (PodScheduled==False, Reason=Unschedulable)
I0513 17:39:17.163035  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events: (1.435581ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47950]
I0513 17:39:17.163214  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-31: (1.803266ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47944]
I0513 17:39:17.163609  108115 wrap.go:47] PUT /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-31/status: (2.618896ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47948]
I0513 17:39:17.165078  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-31: (1.06337ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47948]
I0513 17:39:17.165342  108115 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 17:39:17.168097  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-32
I0513 17:39:17.168121  108115 scheduler.go:452] Attempting to schedule pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-32
I0513 17:39:17.168351  108115 factory.go:649] Unable to schedule preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-32: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 17:39:17.168400  108115 factory.go:720] Updating pod condition for preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-32 to (PodScheduled==False, Reason=Unschedulable)
I0513 17:39:17.169810  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-32: (1.095499ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47950]
I0513 17:39:17.171907  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events: (3.198002ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47944]
I0513 17:39:17.172411  108115 wrap.go:47] PUT /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-32/status: (1.949139ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47950]
I0513 17:39:17.174063  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-32: (1.140777ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47944]
I0513 17:39:17.174269  108115 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 17:39:17.174418  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-33
I0513 17:39:17.174453  108115 scheduler.go:452] Attempting to schedule pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-33
I0513 17:39:17.174574  108115 factory.go:649] Unable to schedule preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-33: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 17:39:17.174634  108115 factory.go:720] Updating pod condition for preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-33 to (PodScheduled==False, Reason=Unschedulable)
I0513 17:39:17.176728  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-33: (1.637386ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47952]
I0513 17:39:17.180709  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events: (4.192089ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47954]
I0513 17:39:17.183394  108115 wrap.go:47] PUT /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-33/status: (8.1056ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47944]
I0513 17:39:17.185377  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-33: (1.15945ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47954]
I0513 17:39:17.185822  108115 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 17:39:17.186044  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-34
I0513 17:39:17.186069  108115 scheduler.go:452] Attempting to schedule pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-34
I0513 17:39:17.186164  108115 factory.go:649] Unable to schedule preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-34: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 17:39:17.186223  108115 factory.go:720] Updating pod condition for preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-34 to (PodScheduled==False, Reason=Unschedulable)
I0513 17:39:17.193771  108115 wrap.go:47] PUT /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-34/status: (6.632006ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47954]
I0513 17:39:17.195018  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-34: (8.368123ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47952]
I0513 17:39:17.195116  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events: (3.825928ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47956]
I0513 17:39:17.197966  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-34: (3.320856ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47960]
I0513 17:39:17.198222  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/preemptor-pod: (3.924453ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47954]
I0513 17:39:17.198420  108115 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 17:39:17.198739  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-35
I0513 17:39:17.198759  108115 scheduler.go:452] Attempting to schedule pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-35
I0513 17:39:17.198870  108115 factory.go:649] Unable to schedule preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-35: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 17:39:17.198912  108115 factory.go:720] Updating pod condition for preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-35 to (PodScheduled==False, Reason=Unschedulable)
I0513 17:39:17.201415  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-35: (2.084115ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47952]
I0513 17:39:17.201898  108115 wrap.go:47] PUT /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-35/status: (2.508226ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47956]
E0513 17:39:17.202162  108115 factory.go:686] pod is already present in the activeQ
I0513 17:39:17.202207  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events: (2.035427ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47964]
I0513 17:39:17.203870  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-35: (1.204462ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47956]
I0513 17:39:17.204119  108115 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 17:39:17.204258  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-36
I0513 17:39:17.204280  108115 scheduler.go:452] Attempting to schedule pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-36
I0513 17:39:17.204372  108115 factory.go:649] Unable to schedule preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-36: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 17:39:17.204420  108115 factory.go:720] Updating pod condition for preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-36 to (PodScheduled==False, Reason=Unschedulable)
I0513 17:39:17.205931  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-36: (1.245073ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47956]
I0513 17:39:17.207072  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events: (1.446981ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47966]
I0513 17:39:17.208473  108115 wrap.go:47] PUT /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-36/status: (3.456807ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47952]
I0513 17:39:17.210003  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-36: (1.073822ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47966]
I0513 17:39:17.210298  108115 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 17:39:17.210541  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-37
I0513 17:39:17.210565  108115 scheduler.go:452] Attempting to schedule pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-37
I0513 17:39:17.210685  108115 factory.go:649] Unable to schedule preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-37: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 17:39:17.210733  108115 factory.go:720] Updating pod condition for preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-37 to (PodScheduled==False, Reason=Unschedulable)
I0513 17:39:17.212739  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-37: (1.676988ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47956]
I0513 17:39:17.213413  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events: (1.557374ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47968]
I0513 17:39:17.215353  108115 wrap.go:47] PUT /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-37/status: (4.377914ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47966]
I0513 17:39:17.216891  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-37: (993.87µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47968]
I0513 17:39:17.217238  108115 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 17:39:17.217400  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-16
I0513 17:39:17.217418  108115 scheduler.go:452] Attempting to schedule pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-16
I0513 17:39:17.217574  108115 factory.go:649] Unable to schedule preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-16: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 17:39:17.217626  108115 factory.go:720] Updating pod condition for preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-16 to (PodScheduled==False, Reason=Unschedulable)
I0513 17:39:17.219484  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-16: (1.323286ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47956]
I0513 17:39:17.219826  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-16: (2.041426ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47968]
I0513 17:39:17.220182  108115 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 17:39:17.220442  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-38
I0513 17:39:17.220494  108115 scheduler.go:452] Attempting to schedule pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-38
I0513 17:39:17.220685  108115 factory.go:649] Unable to schedule preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-38: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 17:39:17.220779  108115 factory.go:720] Updating pod condition for preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-38 to (PodScheduled==False, Reason=Unschedulable)
I0513 17:39:17.223186  108115 wrap.go:47] PATCH /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events/ppod-16.159e4ecb7fa02515: (3.192667ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47956]
I0513 17:39:17.223481  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-38: (1.623766ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47972]
I0513 17:39:17.223727  108115 wrap.go:47] PUT /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-38/status: (2.574193ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47968]
E0513 17:39:17.224020  108115 factory.go:686] pod is already present in the activeQ
I0513 17:39:17.225191  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-38: (1.072956ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47968]
I0513 17:39:17.225461  108115 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 17:39:17.225676  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-39
I0513 17:39:17.225715  108115 scheduler.go:452] Attempting to schedule pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-39
I0513 17:39:17.225739  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events: (1.829ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47956]
I0513 17:39:17.225871  108115 factory.go:649] Unable to schedule preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-39: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 17:39:17.226023  108115 factory.go:720] Updating pod condition for preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-39 to (PodScheduled==False, Reason=Unschedulable)
I0513 17:39:17.228365  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events: (1.652805ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47974]
I0513 17:39:17.229005  108115 wrap.go:47] PUT /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-39/status: (2.731785ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47968]
I0513 17:39:17.229648  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-39: (3.290481ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47970]
E0513 17:39:17.230389  108115 factory.go:686] pod is already present in the activeQ
I0513 17:39:17.230671  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-39: (1.060621ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47968]
I0513 17:39:17.231014  108115 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 17:39:17.231160  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-40
I0513 17:39:17.231178  108115 scheduler.go:452] Attempting to schedule pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-40
I0513 17:39:17.231278  108115 factory.go:649] Unable to schedule preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-40: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 17:39:17.231320  108115 factory.go:720] Updating pod condition for preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-40 to (PodScheduled==False, Reason=Unschedulable)
I0513 17:39:17.233178  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events: (1.509831ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47974]
I0513 17:39:17.234189  108115 wrap.go:47] PUT /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-40/status: (2.355721ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47970]
I0513 17:39:17.235447  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-40: (908.759µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47970]
I0513 17:39:17.236054  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-40: (1.017201ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47974]
I0513 17:39:17.236221  108115 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 17:39:17.236370  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-41
I0513 17:39:17.236424  108115 scheduler.go:452] Attempting to schedule pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-41
E0513 17:39:17.236629  108115 factory.go:686] pod is already present in the activeQ
I0513 17:39:17.236765  108115 factory.go:649] Unable to schedule preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-41: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 17:39:17.236832  108115 factory.go:720] Updating pod condition for preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-41 to (PodScheduled==False, Reason=Unschedulable)
I0513 17:39:17.241975  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-41: (4.926324ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47970]
I0513 17:39:17.242161  108115 wrap.go:47] PUT /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-41/status: (5.120018ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47974]
E0513 17:39:17.242239  108115 factory.go:686] pod is already present in the activeQ
I0513 17:39:17.242521  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events: (4.919359ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47976]
I0513 17:39:17.243925  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-41: (1.251473ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47974]
I0513 17:39:17.244186  108115 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 17:39:17.244409  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-42
I0513 17:39:17.244440  108115 scheduler.go:452] Attempting to schedule pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-42
I0513 17:39:17.245204  108115 factory.go:649] Unable to schedule preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-42: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 17:39:17.245247  108115 factory.go:720] Updating pod condition for preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-42 to (PodScheduled==False, Reason=Unschedulable)
I0513 17:39:17.247998  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-42: (2.075501ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47970]
I0513 17:39:17.248418  108115 wrap.go:47] PUT /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-42/status: (2.444417ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47976]
I0513 17:39:17.250289  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-42: (1.468679ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47976]
I0513 17:39:17.251078  108115 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 17:39:17.251840  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events: (1.772934ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47978]
I0513 17:39:17.252099  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-43
I0513 17:39:17.252123  108115 scheduler.go:452] Attempting to schedule pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-43
I0513 17:39:17.252224  108115 factory.go:649] Unable to schedule preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-43: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 17:39:17.252277  108115 factory.go:720] Updating pod condition for preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-43 to (PodScheduled==False, Reason=Unschedulable)
I0513 17:39:17.253535  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-43: (955.309µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47976]
I0513 17:39:17.255062  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events: (1.408796ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47980]
I0513 17:39:17.256875  108115 wrap.go:47] PUT /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-43/status: (4.407901ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47970]
I0513 17:39:17.258654  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-43: (1.161183ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47980]
I0513 17:39:17.261560  108115 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 17:39:17.261786  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-44
I0513 17:39:17.261801  108115 scheduler.go:452] Attempting to schedule pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-44
I0513 17:39:17.261931  108115 factory.go:649] Unable to schedule preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-44: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 17:39:17.262002  108115 factory.go:720] Updating pod condition for preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-44 to (PodScheduled==False, Reason=Unschedulable)
I0513 17:39:17.263654  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-44: (1.338917ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47980]
I0513 17:39:17.264476  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events: (1.74751ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47982]
I0513 17:39:17.265629  108115 wrap.go:47] PUT /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-44/status: (3.221918ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47976]
I0513 17:39:17.267189  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-44: (1.148336ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47982]
I0513 17:39:17.267408  108115 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 17:39:17.267630  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-45
I0513 17:39:17.267646  108115 scheduler.go:452] Attempting to schedule pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-45
I0513 17:39:17.267746  108115 factory.go:649] Unable to schedule preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-45: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 17:39:17.267828  108115 factory.go:720] Updating pod condition for preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-45 to (PodScheduled==False, Reason=Unschedulable)
I0513 17:39:17.269280  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-45: (1.225424ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47980]
I0513 17:39:17.269984  108115 wrap.go:47] PUT /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-45/status: (1.92943ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47982]
I0513 17:39:17.270944  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events: (1.933223ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47984]
I0513 17:39:17.271778  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-45: (1.276289ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47982]
I0513 17:39:17.272176  108115 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 17:39:17.272371  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-46
I0513 17:39:17.272391  108115 scheduler.go:452] Attempting to schedule pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-46
I0513 17:39:17.272463  108115 factory.go:649] Unable to schedule preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-46: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 17:39:17.272589  108115 factory.go:720] Updating pod condition for preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-46 to (PodScheduled==False, Reason=Unschedulable)
I0513 17:39:17.274105  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-46: (919.935µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47980]
I0513 17:39:17.274419  108115 wrap.go:47] PUT /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-46/status: (1.594066ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47984]
I0513 17:39:17.275812  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-46: (1.020591ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47984]
I0513 17:39:17.276059  108115 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 17:39:17.276258  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events: (2.989854ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47986]
I0513 17:39:17.276566  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-47
I0513 17:39:17.276639  108115 scheduler.go:452] Attempting to schedule pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-47
I0513 17:39:17.276797  108115 factory.go:649] Unable to schedule preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-47: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 17:39:17.276857  108115 factory.go:720] Updating pod condition for preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-47 to (PodScheduled==False, Reason=Unschedulable)
I0513 17:39:17.278918  108115 wrap.go:47] PUT /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-47/status: (1.870701ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47984]
I0513 17:39:17.279819  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events: (2.118377ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47988]
I0513 17:39:17.280971  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-47: (3.217649ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47980]
E0513 17:39:17.282555  108115 factory.go:686] pod is already present in the activeQ
I0513 17:39:17.282704  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-47: (2.259277ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47984]
I0513 17:39:17.282985  108115 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 17:39:17.283171  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-48
I0513 17:39:17.283189  108115 scheduler.go:452] Attempting to schedule pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-48
I0513 17:39:17.283289  108115 factory.go:649] Unable to schedule preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-48: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 17:39:17.283333  108115 factory.go:720] Updating pod condition for preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-48 to (PodScheduled==False, Reason=Unschedulable)
I0513 17:39:17.285377  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-48: (1.3076ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47988]
I0513 17:39:17.286198  108115 wrap.go:47] PUT /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-48/status: (2.602404ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47980]
I0513 17:39:17.287954  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-48: (1.022145ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47980]
I0513 17:39:17.288267  108115 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 17:39:17.288638  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-49
I0513 17:39:17.288684  108115 scheduler.go:452] Attempting to schedule pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-49
I0513 17:39:17.288872  108115 factory.go:649] Unable to schedule preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-49: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 17:39:17.288939  108115 factory.go:720] Updating pod condition for preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-49 to (PodScheduled==False, Reason=Unschedulable)
I0513 17:39:17.290156  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events: (4.530971ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47990]
I0513 17:39:17.290978  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-49: (1.613683ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47988]
I0513 17:39:17.291708  108115 wrap.go:47] PUT /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-49/status: (2.439089ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47980]
I0513 17:39:17.292478  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events: (1.710448ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47990]
I0513 17:39:17.293526  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-49: (962.247µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47980]
I0513 17:39:17.293913  108115 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 17:39:17.294105  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-26
I0513 17:39:17.294135  108115 scheduler.go:452] Attempting to schedule pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-26
I0513 17:39:17.294226  108115 factory.go:649] Unable to schedule preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-26: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 17:39:17.294334  108115 factory.go:720] Updating pod condition for preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-26 to (PodScheduled==False, Reason=Unschedulable)
I0513 17:39:17.297175  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-26: (2.172559ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47988]
I0513 17:39:17.297551  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-26: (2.944645ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47990]
I0513 17:39:17.297967  108115 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 17:39:17.298270  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-29
I0513 17:39:17.298308  108115 scheduler.go:452] Attempting to schedule pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-29
I0513 17:39:17.298409  108115 factory.go:649] Unable to schedule preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-29: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 17:39:17.298473  108115 factory.go:720] Updating pod condition for preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-29 to (PodScheduled==False, Reason=Unschedulable)
I0513 17:39:17.299874  108115 wrap.go:47] PATCH /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events/ppod-26.159e4ecb84ef66a4: (4.569764ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47992]
I0513 17:39:17.300639  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-29: (1.483635ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47990]
I0513 17:39:17.301738  108115 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 17:39:17.302376  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-35
I0513 17:39:17.302540  108115 scheduler.go:452] Attempting to schedule pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-35
I0513 17:39:17.302817  108115 factory.go:649] Unable to schedule preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-35: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 17:39:17.302915  108115 factory.go:720] Updating pod condition for preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-35 to (PodScheduled==False, Reason=Unschedulable)
I0513 17:39:17.303648  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-29: (4.83862ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47988]
I0513 17:39:17.303939  108115 wrap.go:47] PATCH /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events/ppod-29.159e4ecb85fb5730: (2.601442ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47994]
I0513 17:39:17.304022  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/preemptor-pod: (2.694692ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47992]
I0513 17:39:17.304663  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-35: (1.02894ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47996]
I0513 17:39:17.306099  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-35: (2.74887ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47990]
I0513 17:39:17.306329  108115 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 17:39:17.306814  108115 wrap.go:47] PATCH /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events/ppod-35.159e4ecb88e65de2: (2.199694ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47988]
I0513 17:39:17.310671  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-38
I0513 17:39:17.310689  108115 scheduler.go:452] Attempting to schedule pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-38
I0513 17:39:17.310900  108115 factory.go:649] Unable to schedule preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-38: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 17:39:17.310957  108115 factory.go:720] Updating pod condition for preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-38 to (PodScheduled==False, Reason=Unschedulable)
I0513 17:39:17.312560  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-38: (1.391823ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47996]
I0513 17:39:17.312814  108115 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 17:39:17.313239  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-39
I0513 17:39:17.313281  108115 scheduler.go:452] Attempting to schedule pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-39
I0513 17:39:17.313383  108115 factory.go:649] Unable to schedule preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-39: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 17:39:17.313455  108115 factory.go:720] Updating pod condition for preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-39 to (PodScheduled==False, Reason=Unschedulable)
I0513 17:39:17.313647  108115 wrap.go:47] PATCH /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events/ppod-38.159e4ecb8a340b91: (1.941058ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47998]
I0513 17:39:17.316089  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-39: (2.283429ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47996]
I0513 17:39:17.316123  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-38: (4.79209ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47994]
I0513 17:39:17.316364  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-39: (2.22574ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47998]
I0513 17:39:17.316497  108115 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 17:39:17.316770  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-40
I0513 17:39:17.316804  108115 scheduler.go:452] Attempting to schedule pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-40
I0513 17:39:17.316908  108115 factory.go:649] Unable to schedule preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-40: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 17:39:17.317001  108115 factory.go:720] Updating pod condition for preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-40 to (PodScheduled==False, Reason=Unschedulable)
I0513 17:39:17.318659  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-40: (1.494273ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47996]
I0513 17:39:17.319034  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-40: (1.728969ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47994]
I0513 17:39:17.320623  108115 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 17:39:17.320902  108115 wrap.go:47] PATCH /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events/ppod-39.159e4ecb8a840ad1: (6.666512ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48000]
I0513 17:39:17.320986  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-41
I0513 17:39:17.321010  108115 scheduler.go:452] Attempting to schedule pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-41
I0513 17:39:17.321120  108115 factory.go:649] Unable to schedule preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-41: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 17:39:17.321191  108115 factory.go:720] Updating pod condition for preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-41 to (PodScheduled==False, Reason=Unschedulable)
I0513 17:39:17.323712  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-41: (1.85139ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47994]
I0513 17:39:17.324096  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-41: (2.464951ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47996]
I0513 17:39:17.324541  108115 wrap.go:47] PATCH /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events/ppod-40.159e4ecb8ad4f04e: (1.934029ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48002]
I0513 17:39:17.324791  108115 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 17:39:17.325141  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-47
I0513 17:39:17.325153  108115 scheduler.go:452] Attempting to schedule pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-47
I0513 17:39:17.325281  108115 factory.go:649] Unable to schedule preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-47: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 17:39:17.325317  108115 factory.go:720] Updating pod condition for preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-47 to (PodScheduled==False, Reason=Unschedulable)
I0513 17:39:17.327800  108115 wrap.go:47] PATCH /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events/ppod-41.159e4ecb8b2906af: (2.508675ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48002]
I0513 17:39:17.327843  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-47: (1.761017ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47994]
I0513 17:39:17.328053  108115 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 17:39:17.328553  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-47: (2.706439ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48004]
I0513 17:39:17.331448  108115 wrap.go:47] PATCH /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events/ppod-47.159e4ecb8d8bc461: (2.206694ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48002]
I0513 17:39:17.403318  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/preemptor-pod: (1.847396ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48004]
I0513 17:39:17.512326  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/preemptor-pod: (2.36149ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48004]
I0513 17:39:17.600833  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/preemptor-pod: (1.823635ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48004]
I0513 17:39:17.701203  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/preemptor-pod: (2.263148ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48004]
I0513 17:39:17.800900  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/preemptor-pod: (1.948514ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48004]
I0513 17:39:17.901878  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/preemptor-pod: (2.809583ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48004]
I0513 17:39:17.927601  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/preemptor-pod
I0513 17:39:17.927638  108115 scheduler.go:452] Attempting to schedule pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/preemptor-pod
I0513 17:39:17.927823  108115 scheduler_binder.go:256] AssumePodVolumes for pod "preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/preemptor-pod", node "node1"
I0513 17:39:17.927841  108115 scheduler_binder.go:266] AssumePodVolumes for pod "preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/preemptor-pod", node "node1": all PVCs bound and nothing to do
I0513 17:39:17.927896  108115 factory.go:711] Attempting to bind preemptor-pod to node1
I0513 17:39:17.928192  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-2
I0513 17:39:17.928204  108115 scheduler.go:452] Attempting to schedule pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-2
I0513 17:39:17.928311  108115 factory.go:649] Unable to schedule preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-2: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 17:39:17.928355  108115 factory.go:720] Updating pod condition for preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-2 to (PodScheduled==False, Reason=Unschedulable)
I0513 17:39:17.929588  108115 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0513 17:39:17.929858  108115 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0513 17:39:17.929889  108115 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0513 17:39:17.930253  108115 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0513 17:39:17.931455  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/preemptor-pod/binding: (3.183677ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48004]
I0513 17:39:17.931840  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-2: (3.215171ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47994]
I0513 17:39:17.932660  108115 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0513 17:39:17.932666  108115 scheduler.go:570] pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/preemptor-pod is bound successfully on node node1, 1 nodes evaluated, 1 nodes were found feasible
I0513 17:39:17.934083  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-2: (1.119492ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48004]
I0513 17:39:17.934196  108115 wrap.go:47] PATCH /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events/ppod-2.159e4ecb76f549de: (2.472323ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48008]
I0513 17:39:17.935005  108115 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 17:39:17.935809  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-4
I0513 17:39:17.935829  108115 scheduler.go:452] Attempting to schedule pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-4
I0513 17:39:17.935940  108115 factory.go:649] Unable to schedule preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-4: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 17:39:17.935991  108115 factory.go:720] Updating pod condition for preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-4 to (PodScheduled==False, Reason=Unschedulable)
I0513 17:39:17.937650  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events: (3.011805ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48008]
I0513 17:39:17.937810  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-4: (1.626796ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47994]
I0513 17:39:17.938368  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-4: (2.131112ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48004]
I0513 17:39:17.938434  108115 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 17:39:17.938649  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-0
I0513 17:39:17.938667  108115 scheduler.go:452] Attempting to schedule pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-0
I0513 17:39:17.938781  108115 factory.go:649] Unable to schedule preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-0: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 17:39:17.938822  108115 factory.go:720] Updating pod condition for preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-0 to (PodScheduled==False, Reason=Unschedulable)
I0513 17:39:17.940359  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-0: (1.359024ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47994]
I0513 17:39:17.940592  108115 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 17:39:17.940702  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-0: (1.436056ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48010]
I0513 17:39:17.944522  108115 wrap.go:47] PATCH /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events/ppod-4.159e4ecb77cc7a8d: (5.543368ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48008]
I0513 17:39:17.946812  108115 wrap.go:47] PATCH /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events/ppod-0.159e4ecb76363711: (1.635666ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48010]
I0513 17:39:18.000494  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/preemptor-pod: (1.545362ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48010]
I0513 17:39:18.000976  108115 preemption_test.go:583] Check unschedulable pods still exists and were never scheduled...
I0513 17:39:18.002300  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-0: (1.110923ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48010]
I0513 17:39:18.004071  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-1: (1.289655ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48010]
I0513 17:39:18.006296  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-2: (1.188239ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48010]
I0513 17:39:18.007983  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-3: (1.183064ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48010]
I0513 17:39:18.009343  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-4: (924.699µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48010]
I0513 17:39:18.010633  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-5: (897.597µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48010]
I0513 17:39:18.012101  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-6: (939.357µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48010]
I0513 17:39:18.013433  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-7: (828.174µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48010]
I0513 17:39:18.015289  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-8: (999.936µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48010]
I0513 17:39:18.016665  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-9: (916.384µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48010]
I0513 17:39:18.017867  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-10: (886.917µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48010]
I0513 17:39:18.020213  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-11: (1.592862ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48010]
I0513 17:39:18.022317  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-12: (1.638932ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48010]
I0513 17:39:18.023663  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-13: (847.377µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48010]
I0513 17:39:18.024994  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-14: (936.482µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48010]
I0513 17:39:18.026391  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-15: (1.025715ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48010]
I0513 17:39:18.027737  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-16: (846.821µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48010]
I0513 17:39:18.030058  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-17: (1.929453ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48010]
I0513 17:39:18.031493  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-18: (1.044071ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48010]
I0513 17:39:18.032775  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-19: (940.161µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48010]
I0513 17:39:18.038277  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-20: (3.94086ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48010]
I0513 17:39:18.043133  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-21: (3.015921ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48010]
I0513 17:39:18.044992  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-22: (1.407247ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48010]
I0513 17:39:18.048623  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-23: (3.225394ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48010]
I0513 17:39:18.050909  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-24: (1.848111ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48010]
I0513 17:39:18.052306  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-25: (990.746µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48010]
I0513 17:39:18.053772  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-26: (1.037597ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48010]
I0513 17:39:18.055270  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-27: (1.107331ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48010]
I0513 17:39:18.057070  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-28: (1.287745ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48010]
I0513 17:39:18.058645  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-29: (1.084148ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48010]
I0513 17:39:18.063746  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-30: (1.265295ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48010]
I0513 17:39:18.065391  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-31: (1.199072ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48010]
I0513 17:39:18.067017  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-32: (964.802µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48010]
I0513 17:39:18.068563  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-33: (1.111487ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48010]
I0513 17:39:18.070273  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-34: (1.297282ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48010]
I0513 17:39:18.071648  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-35: (916.906µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48010]
I0513 17:39:18.073054  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-36: (1.0129ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48010]
I0513 17:39:18.074523  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-37: (1.014331ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48010]
I0513 17:39:18.075910  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-38: (978.318µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48010]
I0513 17:39:18.077303  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-39: (911.86µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48010]
I0513 17:39:18.080463  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-40: (2.687001ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48010]
I0513 17:39:18.082814  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-41: (1.831121ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48010]
I0513 17:39:18.085526  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-42: (2.165891ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48010]
I0513 17:39:18.086944  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-43: (1.008209ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48010]
I0513 17:39:18.088269  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-44: (896.596µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48010]
I0513 17:39:18.089705  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-45: (984.854µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48010]
I0513 17:39:18.091207  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-46: (1.002587ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48010]
I0513 17:39:18.092658  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-47: (1.048933ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48010]
I0513 17:39:18.093998  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-48: (976.259µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48010]
I0513 17:39:18.097853  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-49: (3.444833ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48010]
I0513 17:39:18.098137  108115 preemption_test.go:598] Cleaning up all pods...
I0513 17:39:18.105766  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-0
I0513 17:39:18.105810  108115 scheduler.go:448] Skip schedule deleting pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-0
I0513 17:39:18.106874  108115 wrap.go:47] DELETE /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-0: (8.125576ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48010]
I0513 17:39:18.107982  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events: (1.835876ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47994]
I0513 17:39:18.110214  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-1
I0513 17:39:18.110256  108115 scheduler.go:448] Skip schedule deleting pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-1
I0513 17:39:18.112645  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events: (1.583472ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47994]
I0513 17:39:18.113097  108115 wrap.go:47] DELETE /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-1: (5.622885ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48010]
I0513 17:39:18.119690  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-2
I0513 17:39:18.119748  108115 scheduler.go:448] Skip schedule deleting pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-2
I0513 17:39:18.121714  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events: (1.668294ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47994]
I0513 17:39:18.123538  108115 wrap.go:47] DELETE /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-2: (10.16465ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48010]
I0513 17:39:18.127386  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-3
I0513 17:39:18.127420  108115 scheduler.go:448] Skip schedule deleting pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-3
I0513 17:39:18.134062  108115 wrap.go:47] DELETE /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-3: (9.685643ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48010]
I0513 17:39:18.137522  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-4
I0513 17:39:18.137562  108115 scheduler.go:448] Skip schedule deleting pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-4
I0513 17:39:18.140159  108115 wrap.go:47] DELETE /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-4: (5.70299ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48010]
I0513 17:39:18.142280  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events: (9.30559ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47994]
I0513 17:39:18.145189  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-5
I0513 17:39:18.145249  108115 scheduler.go:448] Skip schedule deleting pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-5
I0513 17:39:18.145682  108115 wrap.go:47] DELETE /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-5: (4.380152ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48010]
I0513 17:39:18.146291  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events: (2.928529ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47994]
I0513 17:39:18.149272  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events: (2.146878ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47994]
I0513 17:39:18.149692  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-6
I0513 17:39:18.149745  108115 scheduler.go:448] Skip schedule deleting pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-6
I0513 17:39:18.151121  108115 wrap.go:47] DELETE /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-6: (4.940294ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48010]
I0513 17:39:18.151387  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events: (1.219697ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47994]
I0513 17:39:18.154628  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-7
I0513 17:39:18.154693  108115 scheduler.go:448] Skip schedule deleting pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-7
I0513 17:39:18.156335  108115 wrap.go:47] DELETE /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-7: (4.911045ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48010]
I0513 17:39:18.156915  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events: (1.619487ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47994]
I0513 17:39:18.159347  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-8
I0513 17:39:18.159381  108115 scheduler.go:448] Skip schedule deleting pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-8
I0513 17:39:18.162865  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events: (2.832839ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47994]
I0513 17:39:18.178264  108115 wrap.go:47] DELETE /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-8: (21.415659ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48010]
I0513 17:39:18.232408  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-9
I0513 17:39:18.232459  108115 scheduler.go:448] Skip schedule deleting pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-9
I0513 17:39:18.237218  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events: (3.81772ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47994]
I0513 17:39:18.245674  108115 wrap.go:47] DELETE /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-9: (15.38019ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48010]
I0513 17:39:18.251829  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-10
I0513 17:39:18.251983  108115 scheduler.go:448] Skip schedule deleting pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-10
I0513 17:39:18.254361  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events: (2.010216ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47994]
I0513 17:39:18.254634  108115 wrap.go:47] DELETE /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-10: (7.305287ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48010]
I0513 17:39:18.259429  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-11
I0513 17:39:18.259550  108115 scheduler.go:448] Skip schedule deleting pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-11
I0513 17:39:18.259918  108115 wrap.go:47] DELETE /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-11: (4.905434ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48010]
I0513 17:39:18.263843  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events: (2.193756ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47994]
I0513 17:39:18.267349  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-12
I0513 17:39:18.267405  108115 scheduler.go:448] Skip schedule deleting pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-12
I0513 17:39:18.269570  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events: (1.766467ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47994]
I0513 17:39:18.269729  108115 wrap.go:47] DELETE /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-12: (8.889571ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48010]
I0513 17:39:18.277584  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-13
I0513 17:39:18.277815  108115 scheduler.go:448] Skip schedule deleting pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-13
I0513 17:39:18.283581  108115 wrap.go:47] DELETE /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-13: (13.39401ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47994]
I0513 17:39:18.284608  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events: (6.390822ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48010]
I0513 17:39:18.287975  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-14
I0513 17:39:18.288018  108115 scheduler.go:448] Skip schedule deleting pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-14
I0513 17:39:18.291450  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events: (2.211043ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48010]
I0513 17:39:18.292648  108115 wrap.go:47] DELETE /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-14: (8.444588ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47994]
I0513 17:39:18.296187  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-15
I0513 17:39:18.296249  108115 scheduler.go:448] Skip schedule deleting pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-15
I0513 17:39:18.299422  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events: (2.708978ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48010]
I0513 17:39:18.300389  108115 wrap.go:47] DELETE /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-15: (7.321273ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47994]
I0513 17:39:18.305667  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-16
I0513 17:39:18.305740  108115 scheduler.go:448] Skip schedule deleting pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-16
I0513 17:39:18.307227  108115 wrap.go:47] DELETE /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-16: (5.351518ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47994]
I0513 17:39:18.308104  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events: (1.926118ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48010]
I0513 17:39:18.312218  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-17
I0513 17:39:18.312274  108115 scheduler.go:448] Skip schedule deleting pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-17
I0513 17:39:18.313884  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events: (1.339393ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47994]
I0513 17:39:18.317025  108115 wrap.go:47] DELETE /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-17: (7.952762ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48010]
I0513 17:39:18.322864  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-18
I0513 17:39:18.322927  108115 scheduler.go:448] Skip schedule deleting pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-18
I0513 17:39:18.325137  108115 wrap.go:47] DELETE /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-18: (7.797366ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48010]
I0513 17:39:18.325138  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events: (1.982753ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47994]
I0513 17:39:18.330527  108115 wrap.go:47] DELETE /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-19: (4.883198ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48010]
I0513 17:39:18.334130  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-20
I0513 17:39:18.334609  108115 scheduler.go:448] Skip schedule deleting pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-20
I0513 17:39:18.335885  108115 wrap.go:47] DELETE /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-20: (4.628865ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48010]
I0513 17:39:18.337361  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events: (2.380671ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47994]
I0513 17:39:18.340244  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-21
I0513 17:39:18.340312  108115 scheduler.go:448] Skip schedule deleting pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-21
I0513 17:39:18.342180  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events: (1.551789ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47994]
I0513 17:39:18.352495  108115 wrap.go:47] DELETE /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-21: (15.119424ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48010]
I0513 17:39:18.356928  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-22
I0513 17:39:18.356972  108115 scheduler.go:448] Skip schedule deleting pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-22
I0513 17:39:18.367953  108115 wrap.go:47] DELETE /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-22: (14.750983ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48010]
I0513 17:39:18.368745  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events: (3.113154ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47994]
I0513 17:39:18.371236  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-23
I0513 17:39:18.371278  108115 scheduler.go:448] Skip schedule deleting pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-23
I0513 17:39:18.372324  108115 wrap.go:47] DELETE /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-23: (3.874314ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48010]
I0513 17:39:18.375975  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-24
I0513 17:39:18.376027  108115 scheduler.go:448] Skip schedule deleting pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-24
I0513 17:39:18.377534  108115 wrap.go:47] DELETE /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-24: (4.895594ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48010]
I0513 17:39:18.380713  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-25
I0513 17:39:18.380755  108115 scheduler.go:448] Skip schedule deleting pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-25
I0513 17:39:18.381753  108115 wrap.go:47] DELETE /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-25: (3.91256ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48010]
I0513 17:39:18.383043  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events: (11.476198ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47994]
I0513 17:39:18.385962  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-26
I0513 17:39:18.386003  108115 scheduler.go:448] Skip schedule deleting pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-26
I0513 17:39:18.386746  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events: (2.747202ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47994]
I0513 17:39:18.389083  108115 wrap.go:47] DELETE /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-26: (6.957487ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48010]
I0513 17:39:18.389584  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events: (1.654242ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47994]
I0513 17:39:18.391174  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events: (1.21437ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47994]
I0513 17:39:18.393401  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-27
I0513 17:39:18.393439  108115 scheduler.go:448] Skip schedule deleting pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-27
I0513 17:39:18.412158  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events: (2.055367ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47994]
I0513 17:39:18.413680  108115 wrap.go:47] DELETE /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-27: (24.090698ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48010]
I0513 17:39:18.419734  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-28
I0513 17:39:18.419817  108115 scheduler.go:448] Skip schedule deleting pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-28
I0513 17:39:18.423421  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events: (3.190301ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47994]
I0513 17:39:18.424345  108115 wrap.go:47] DELETE /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-28: (10.046615ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48010]
I0513 17:39:18.427866  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-29
I0513 17:39:18.427900  108115 scheduler.go:448] Skip schedule deleting pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-29
I0513 17:39:18.429919  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events: (1.780628ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47994]
I0513 17:39:18.430849  108115 wrap.go:47] DELETE /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-29: (5.901605ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48010]
I0513 17:39:18.434292  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-30
I0513 17:39:18.434355  108115 scheduler.go:448] Skip schedule deleting pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-30
I0513 17:39:18.436320  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events: (1.359909ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47994]
I0513 17:39:18.437562  108115 wrap.go:47] DELETE /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-30: (6.442912ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48010]
I0513 17:39:18.440251  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-31
I0513 17:39:18.440325  108115 scheduler.go:448] Skip schedule deleting pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-31
I0513 17:39:18.441477  108115 wrap.go:47] DELETE /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-31: (3.553799ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48010]
I0513 17:39:18.442725  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events: (2.094499ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47994]
I0513 17:39:18.447043  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-32
I0513 17:39:18.447105  108115 scheduler.go:448] Skip schedule deleting pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-32
I0513 17:39:18.451985  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events: (2.761612ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47994]
I0513 17:39:18.452405  108115 wrap.go:47] DELETE /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-32: (9.468279ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48010]
I0513 17:39:18.455401  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-33
I0513 17:39:18.455478  108115 scheduler.go:448] Skip schedule deleting pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-33
I0513 17:39:18.456533  108115 wrap.go:47] DELETE /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-33: (3.66884ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48010]
I0513 17:39:18.456924  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events: (1.157763ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47994]
I0513 17:39:18.460910  108115 wrap.go:47] DELETE /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-34: (4.004201ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48010]
I0513 17:39:18.464208  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-35
I0513 17:39:18.464271  108115 scheduler.go:448] Skip schedule deleting pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-35
I0513 17:39:18.466376  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events: (1.831547ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47994]
I0513 17:39:18.468006  108115 wrap.go:47] DELETE /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-35: (6.743087ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48010]
I0513 17:39:18.473220  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-36
I0513 17:39:18.473320  108115 scheduler.go:448] Skip schedule deleting pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-36
I0513 17:39:18.476106  108115 wrap.go:47] DELETE /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-36: (7.743097ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48010]
I0513 17:39:18.476176  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events: (2.294496ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47994]
I0513 17:39:18.479763  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-37
I0513 17:39:18.479811  108115 scheduler.go:448] Skip schedule deleting pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-37
I0513 17:39:18.481377  108115 wrap.go:47] DELETE /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-37: (4.804292ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48010]
I0513 17:39:18.482615  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events: (1.797929ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47994]
I0513 17:39:18.486062  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-38
I0513 17:39:18.486136  108115 scheduler.go:448] Skip schedule deleting pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-38
I0513 17:39:18.487398  108115 wrap.go:47] DELETE /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-38: (4.847687ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48010]
I0513 17:39:18.488305  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events: (1.869389ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47994]
I0513 17:39:18.490636  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-39
I0513 17:39:18.490766  108115 scheduler.go:448] Skip schedule deleting pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-39
I0513 17:39:18.492352  108115 wrap.go:47] DELETE /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-39: (4.602135ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48010]
I0513 17:39:18.495860  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-40
I0513 17:39:18.495893  108115 scheduler.go:448] Skip schedule deleting pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-40
I0513 17:39:18.497818  108115 wrap.go:47] DELETE /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-40: (5.012092ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48010]
I0513 17:39:18.500706  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-41
I0513 17:39:18.500763  108115 scheduler.go:448] Skip schedule deleting pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-41
I0513 17:39:18.502323  108115 wrap.go:47] DELETE /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-41: (4.026431ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48010]
I0513 17:39:18.503924  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events: (11.795983ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47994]
I0513 17:39:18.505961  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-42
I0513 17:39:18.506002  108115 scheduler.go:448] Skip schedule deleting pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-42
I0513 17:39:18.507049  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events: (1.909646ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47994]
I0513 17:39:18.508585  108115 wrap.go:47] DELETE /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-42: (5.917347ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48010]
I0513 17:39:18.509330  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events: (1.864005ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47994]
I0513 17:39:18.512921  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events: (2.853715ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47994]
I0513 17:39:18.513190  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-43
I0513 17:39:18.513222  108115 scheduler.go:448] Skip schedule deleting pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-43
I0513 17:39:18.515032  108115 wrap.go:47] DELETE /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-43: (6.118479ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48010]
I0513 17:39:18.515779  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events: (2.312173ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47994]
I0513 17:39:18.518871  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-44
I0513 17:39:18.518952  108115 scheduler.go:448] Skip schedule deleting pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-44
I0513 17:39:18.521321  108115 wrap.go:47] DELETE /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-44: (5.755324ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48010]
I0513 17:39:18.521457  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events: (2.270346ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47994]
I0513 17:39:18.525139  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-45
I0513 17:39:18.525185  108115 scheduler.go:448] Skip schedule deleting pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-45
I0513 17:39:18.527202  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events: (1.758675ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47994]
I0513 17:39:18.528289  108115 wrap.go:47] DELETE /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-45: (6.561966ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48010]
I0513 17:39:18.532189  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-46
I0513 17:39:18.532236  108115 scheduler.go:448] Skip schedule deleting pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-46
I0513 17:39:18.534301  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events: (1.625862ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47994]
I0513 17:39:18.535832  108115 wrap.go:47] DELETE /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-46: (7.127621ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48010]
I0513 17:39:18.539073  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-47
I0513 17:39:18.539145  108115 scheduler.go:448] Skip schedule deleting pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-47
I0513 17:39:18.541150  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events: (1.725301ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47994]
I0513 17:39:18.541716  108115 wrap.go:47] DELETE /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-47: (5.495333ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48010]
I0513 17:39:18.545274  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-48
I0513 17:39:18.545316  108115 scheduler.go:448] Skip schedule deleting pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-48
I0513 17:39:18.548162  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events: (2.562914ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47994]
I0513 17:39:18.549103  108115 wrap.go:47] DELETE /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-48: (7.085092ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48010]
I0513 17:39:18.552273  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-49
I0513 17:39:18.552307  108115 scheduler.go:448] Skip schedule deleting pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-49
I0513 17:39:18.554558  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events: (1.929517ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47994]
I0513 17:39:18.554604  108115 wrap.go:47] DELETE /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-49: (4.906305ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48010]
I0513 17:39:18.556129  108115 wrap.go:47] DELETE /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/rpod-0: (1.054674ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48010]
I0513 17:39:18.560576  108115 wrap.go:47] DELETE /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/rpod-1: (4.015398ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48010]
I0513 17:39:18.565158  108115 wrap.go:47] DELETE /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/preemptor-pod: (4.148757ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48010]
I0513 17:39:18.567892  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-0: (964.525µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48010]
I0513 17:39:18.570346  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-1: (906.384µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48010]
I0513 17:39:18.572740  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-2: (844.151µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48010]
I0513 17:39:18.575131  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-3: (892.692µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48010]
I0513 17:39:18.577729  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-4: (1.021534ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48010]
I0513 17:39:18.580274  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-5: (983.173µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48010]
I0513 17:39:18.582815  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-6: (880.078µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48010]
I0513 17:39:18.585306  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-7: (899.991µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48010]
I0513 17:39:18.587829  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-8: (982.121µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48010]
I0513 17:39:18.590278  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-9: (922.917µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48010]
I0513 17:39:18.592633  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-10: (814.553µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48010]
I0513 17:39:18.595240  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-11: (1.078812ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48010]
I0513 17:39:18.597624  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-12: (873.148µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48010]
I0513 17:39:18.600249  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-13: (974.384µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48010]
I0513 17:39:18.602860  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-14: (1.078681ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48010]
I0513 17:39:18.605416  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-15: (862.02µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48010]
I0513 17:39:18.607819  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-16: (910.411µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48010]
I0513 17:39:18.610395  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-17: (1.021723ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48010]
I0513 17:39:18.612974  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-18: (988.93µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48010]
I0513 17:39:18.615450  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-19: (900.271µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48010]
I0513 17:39:18.617912  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-20: (958.967µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48010]
I0513 17:39:18.620557  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-21: (1.002613ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48010]
I0513 17:39:18.623158  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-22: (1.017806ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48010]
I0513 17:39:18.625960  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-23: (1.1017ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48010]
I0513 17:39:18.629012  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-24: (1.421583ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48010]
I0513 17:39:18.631442  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-25: (852.259µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48010]
I0513 17:39:18.659675  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-26: (26.655662ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48010]
I0513 17:39:18.663003  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-27: (1.537994ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48010]
I0513 17:39:18.665876  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-28: (1.201449ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48010]
I0513 17:39:18.668562  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-29: (1.165538ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48010]
I0513 17:39:18.671235  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-30: (1.18977ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48010]
I0513 17:39:18.673799  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-31: (1.035289ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48010]
I0513 17:39:18.676303  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-32: (1.037797ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48010]
I0513 17:39:18.678833  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-33: (983.134µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48010]
I0513 17:39:18.681249  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-34: (906.64µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48010]
I0513 17:39:18.684043  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-35: (874.744µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48010]
I0513 17:39:18.686395  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-36: (902.137µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48010]
I0513 17:39:18.689120  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-37: (1.111948ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48010]
I0513 17:39:18.691639  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-38: (1.00845ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48010]
I0513 17:39:18.694424  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-39: (1.195954ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48010]
I0513 17:39:18.696897  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-40: (945.718µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48010]
I0513 17:39:18.699407  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-41: (899.525µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48010]
I0513 17:39:18.701827  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-42: (887.206µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48010]
I0513 17:39:18.704235  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-43: (885.085µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48010]
I0513 17:39:18.706847  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-44: (1.097496ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48010]
I0513 17:39:18.709250  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-45: (909.833µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48010]
I0513 17:39:18.711685  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-46: (914.113µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48010]
I0513 17:39:18.714551  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-47: (921.063µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48010]
I0513 17:39:18.717356  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-48: (1.296424ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48010]
I0513 17:39:18.720000  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-49: (1.136474ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48010]
I0513 17:39:18.722625  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/rpod-0: (1.112342ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48010]
I0513 17:39:18.725025  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/rpod-1: (928.582µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48010]
I0513 17:39:18.727515  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/preemptor-pod: (931.437µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48010]
I0513 17:39:18.729766  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods: (1.823333ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48010]
I0513 17:39:18.730026  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/rpod-0
I0513 17:39:18.730049  108115 scheduler.go:452] Attempting to schedule pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/rpod-0
I0513 17:39:18.730190  108115 scheduler_binder.go:256] AssumePodVolumes for pod "preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/rpod-0", node "node1"
I0513 17:39:18.730210  108115 scheduler_binder.go:266] AssumePodVolumes for pod "preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/rpod-0", node "node1": all PVCs bound and nothing to do
I0513 17:39:18.730253  108115 factory.go:711] Attempting to bind rpod-0 to node1
I0513 17:39:18.732396  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/rpod-0/binding: (1.430732ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47994]
I0513 17:39:18.732619  108115 scheduler.go:570] pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/rpod-0 is bound successfully on node node1, 1 nodes evaluated, 1 nodes were found feasible
I0513 17:39:18.732687  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods: (2.17588ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48010]
I0513 17:39:18.733800  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/rpod-1
I0513 17:39:18.733820  108115 scheduler.go:452] Attempting to schedule pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/rpod-1
I0513 17:39:18.733931  108115 scheduler_binder.go:256] AssumePodVolumes for pod "preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/rpod-1", node "node1"
I0513 17:39:18.733948  108115 scheduler_binder.go:266] AssumePodVolumes for pod "preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/rpod-1", node "node1": all PVCs bound and nothing to do
I0513 17:39:18.733993  108115 factory.go:711] Attempting to bind rpod-1 to node1
I0513 17:39:18.734787  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events: (1.934907ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47994]
I0513 17:39:18.735806  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/rpod-1/binding: (1.589131ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48010]
I0513 17:39:18.735955  108115 scheduler.go:570] pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/rpod-1 is bound successfully on node node1, 1 nodes evaluated, 1 nodes were found feasible
I0513 17:39:18.737440  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events: (1.232025ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48010]
I0513 17:39:18.835395  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/rpod-0: (1.882443ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48010]
I0513 17:39:18.929801  108115 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0513 17:39:18.929984  108115 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0513 17:39:18.930035  108115 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0513 17:39:18.930331  108115 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0513 17:39:18.932805  108115 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0513 17:39:18.938442  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/rpod-1: (2.087043ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48010]
I0513 17:39:18.938844  108115 preemption_test.go:561] Creating the preemptor pod...
I0513 17:39:18.941949  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/preemptor-pod
I0513 17:39:18.941974  108115 scheduler.go:452] Attempting to schedule pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/preemptor-pod
I0513 17:39:18.942111  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods: (2.994131ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48010]
I0513 17:39:18.942156  108115 factory.go:649] Unable to schedule preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 17:39:18.942241  108115 factory.go:720] Updating pod condition for preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0513 17:39:18.942346  108115 preemption_test.go:567] Creating additional pods...
I0513 17:39:18.946409  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods: (3.759619ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47994]
I0513 17:39:18.947570  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/preemptor-pod: (4.086406ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48200]
I0513 17:39:18.950355  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events: (4.938366ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48202]
I0513 17:39:18.950734  108115 wrap.go:47] PUT /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/preemptor-pod/status: (6.198459ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48010]
I0513 17:39:18.951683  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods: (4.757793ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47994]
I0513 17:39:18.953081  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/preemptor-pod: (1.605474ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48010]
I0513 17:39:18.953355  108115 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
E0513 17:39:18.953485  108115 utils.go:79] pod.Status.StartTime is nil for pod rpod-1. Should not reach here.
E0513 17:39:18.953498  108115 utils.go:79] pod.Status.StartTime is nil for pod rpod-0. Should not reach here.
I0513 17:39:18.955746  108115 wrap.go:47] PUT /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/preemptor-pod/status: (1.878805ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48010]
I0513 17:39:18.958517  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods: (6.304657ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47994]
I0513 17:39:18.961551  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods: (2.316372ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47994]
I0513 17:39:18.962799  108115 wrap.go:47] DELETE /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/rpod-0: (6.34012ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48010]
I0513 17:39:18.963165  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-0
I0513 17:39:18.963199  108115 scheduler.go:452] Attempting to schedule pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-0
I0513 17:39:18.963934  108115 factory.go:649] Unable to schedule preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-0: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 17:39:18.964029  108115 factory.go:720] Updating pod condition for preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-0 to (PodScheduled==False, Reason=Unschedulable)
I0513 17:39:18.964238  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods: (2.243971ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47994]
I0513 17:39:18.965570  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events: (2.175743ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48010]
I0513 17:39:18.967208  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-0: (1.925031ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48204]
I0513 17:39:18.967966  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods: (1.88218ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47994]
I0513 17:39:18.968301  108115 wrap.go:47] PUT /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-0/status: (3.84594ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48200]
I0513 17:39:18.998110  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events: (27.59363ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48010]
I0513 17:39:19.000196  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-0: (30.879268ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48200]
I0513 17:39:19.004412  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods: (35.106539ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47994]
I0513 17:39:19.005430  108115 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 17:39:19.006662  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-1
I0513 17:39:19.006680  108115 scheduler.go:452] Attempting to schedule pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-1
I0513 17:39:19.006818  108115 factory.go:649] Unable to schedule preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-1: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 17:39:19.006861  108115 factory.go:720] Updating pod condition for preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-1 to (PodScheduled==False, Reason=Unschedulable)
I0513 17:39:19.010063  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods: (4.407929ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48010]
I0513 17:39:19.010201  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events: (1.951344ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48208]
I0513 17:39:19.010640  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-1: (2.957946ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48204]
I0513 17:39:19.011289  108115 wrap.go:47] PUT /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-1/status: (3.38802ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48200]
I0513 17:39:19.013330  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods: (2.266676ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48010]
I0513 17:39:19.016014  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods: (2.227576ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48010]
I0513 17:39:19.018206  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods: (1.809572ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48010]
I0513 17:39:19.019430  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-1: (6.469584ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48200]
I0513 17:39:19.019767  108115 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 17:39:19.019958  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-2
I0513 17:39:19.019977  108115 scheduler.go:452] Attempting to schedule pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-2
I0513 17:39:19.020095  108115 factory.go:649] Unable to schedule preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-2: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 17:39:19.020147  108115 factory.go:720] Updating pod condition for preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-2 to (PodScheduled==False, Reason=Unschedulable)
I0513 17:39:19.023102  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods: (4.126673ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48010]
I0513 17:39:19.025176  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events: (4.1166ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48210]
I0513 17:39:19.025349  108115 wrap.go:47] PUT /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-2/status: (4.671844ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48200]
I0513 17:39:19.025689  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-2: (5.241148ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48208]
I0513 17:39:19.025935  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods: (2.369745ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48010]
I0513 17:39:19.029072  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-2: (2.727453ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48200]
I0513 17:39:19.029302  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods: (2.2226ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48010]
I0513 17:39:19.029730  108115 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 17:39:19.029904  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-3
I0513 17:39:19.029940  108115 scheduler.go:452] Attempting to schedule pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-3
I0513 17:39:19.030050  108115 factory.go:649] Unable to schedule preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-3: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 17:39:19.030110  108115 factory.go:720] Updating pod condition for preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-3 to (PodScheduled==False, Reason=Unschedulable)
I0513 17:39:19.034046  108115 wrap.go:47] PUT /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-3/status: (3.15784ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48210]
I0513 17:39:19.034049  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-3: (3.128939ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48212]
I0513 17:39:19.034266  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods: (4.181993ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48208]
I0513 17:39:19.035834  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events: (4.002247ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48214]
I0513 17:39:19.036054  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-3: (1.422474ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48210]
I0513 17:39:19.036269  108115 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 17:39:19.036321  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods: (1.606504ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48212]
I0513 17:39:19.036539  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-4
I0513 17:39:19.036558  108115 scheduler.go:452] Attempting to schedule pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-4
I0513 17:39:19.036672  108115 factory.go:649] Unable to schedule preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-4: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 17:39:19.036719  108115 factory.go:720] Updating pod condition for preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-4 to (PodScheduled==False, Reason=Unschedulable)
I0513 17:39:19.038828  108115 wrap.go:47] PUT /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-4/status: (1.6471ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48214]
I0513 17:39:19.039078  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods: (1.826052ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48210]
I0513 17:39:19.041236  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-4: (1.239686ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48214]
I0513 17:39:19.041277  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-4: (4.022744ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48216]
I0513 17:39:19.041479  108115 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
E0513 17:39:19.041541  108115 factory.go:686] pod is already present in the activeQ
I0513 17:39:19.041725  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-5
I0513 17:39:19.041751  108115 scheduler.go:452] Attempting to schedule pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-5
I0513 17:39:19.042097  108115 factory.go:649] Unable to schedule preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-5: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 17:39:19.042143  108115 factory.go:720] Updating pod condition for preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-5 to (PodScheduled==False, Reason=Unschedulable)
I0513 17:39:19.042693  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods: (2.584529ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48210]
I0513 17:39:19.043402  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-5: (910.628µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48214]
I0513 17:39:19.044831  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events: (6.203507ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48218]
I0513 17:39:19.044961  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods: (1.903259ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48210]
I0513 17:39:19.045353  108115 wrap.go:47] PUT /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-5/status: (2.575657ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48216]
I0513 17:39:19.046902  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events: (1.520168ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48218]
I0513 17:39:19.047626  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-5: (1.913402ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48216]
I0513 17:39:19.047843  108115 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 17:39:19.048006  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-6
I0513 17:39:19.048019  108115 scheduler.go:452] Attempting to schedule pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-6
I0513 17:39:19.048091  108115 factory.go:649] Unable to schedule preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-6: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 17:39:19.048146  108115 factory.go:720] Updating pod condition for preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-6 to (PodScheduled==False, Reason=Unschedulable)
I0513 17:39:19.049342  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods: (1.887874ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48218]
I0513 17:39:19.050911  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-6: (2.528593ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48214]
I0513 17:39:19.051126  108115 wrap.go:47] PUT /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-6/status: (2.660812ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48216]
E0513 17:39:19.051620  108115 factory.go:686] pod is already present in the activeQ
I0513 17:39:19.052059  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods: (1.792977ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48222]
I0513 17:39:19.053092  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-6: (1.539682ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48216]
I0513 17:39:19.053481  108115 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 17:39:19.053797  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events: (1.192667ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48218]
I0513 17:39:19.053803  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-7
I0513 17:39:19.054328  108115 scheduler.go:452] Attempting to schedule pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-7
I0513 17:39:19.054063  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods: (1.637692ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48222]
I0513 17:39:19.054434  108115 factory.go:649] Unable to schedule preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-7: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 17:39:19.054482  108115 factory.go:720] Updating pod condition for preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-7 to (PodScheduled==False, Reason=Unschedulable)
I0513 17:39:19.057727  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events: (2.176888ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48222]
I0513 17:39:19.057925  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods: (2.365523ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48218]
I0513 17:39:19.059655  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods: (1.352668ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48218]
I0513 17:39:19.059935  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-7: (5.237646ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48220]
I0513 17:39:19.060071  108115 wrap.go:47] PUT /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-7/status: (5.344691ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48214]
E0513 17:39:19.060250  108115 factory.go:686] pod is already present in the activeQ
I0513 17:39:19.061946  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods: (1.772734ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48218]
I0513 17:39:19.062202  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-7: (1.746351ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48220]
I0513 17:39:19.062794  108115 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 17:39:19.062976  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-8
I0513 17:39:19.063003  108115 scheduler.go:452] Attempting to schedule pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-8
I0513 17:39:19.063096  108115 factory.go:649] Unable to schedule preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-8: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 17:39:19.063162  108115 factory.go:720] Updating pod condition for preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-8 to (PodScheduled==False, Reason=Unschedulable)
I0513 17:39:19.065613  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods: (2.555269ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48220]
I0513 17:39:19.065614  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events: (1.728301ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48226]
I0513 17:39:19.066491  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-8: (2.768713ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48224]
I0513 17:39:19.066685  108115 wrap.go:47] PUT /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-8/status: (3.256376ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48222]
I0513 17:39:19.068153  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-8: (1.0885ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48224]
I0513 17:39:19.068409  108115 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 17:39:19.068586  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-9
I0513 17:39:19.068607  108115 scheduler.go:452] Attempting to schedule pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-9
I0513 17:39:19.068691  108115 factory.go:649] Unable to schedule preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-9: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 17:39:19.068916  108115 factory.go:720] Updating pod condition for preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-9 to (PodScheduled==False, Reason=Unschedulable)
I0513 17:39:19.070027  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods: (2.45783ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48220]
I0513 17:39:19.071549  108115 wrap.go:47] PUT /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-9/status: (2.162217ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48222]
I0513 17:39:19.071862  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-9: (2.89ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48226]
I0513 17:39:19.073179  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-9: (1.034078ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48222]
I0513 17:39:19.073419  108115 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 17:39:19.073722  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-10
I0513 17:39:19.073751  108115 scheduler.go:452] Attempting to schedule pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-10
I0513 17:39:19.073763  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods: (2.521092ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48220]
I0513 17:39:19.073856  108115 factory.go:649] Unable to schedule preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-10: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 17:39:19.073907  108115 factory.go:720] Updating pod condition for preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-10 to (PodScheduled==False, Reason=Unschedulable)
I0513 17:39:19.074694  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events: (4.863348ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48228]
I0513 17:39:19.076285  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-10: (2.019549ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48226]
I0513 17:39:19.076804  108115 wrap.go:47] PUT /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-10/status: (2.672431ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48222]
I0513 17:39:19.077442  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events: (2.364189ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48228]
I0513 17:39:19.077623  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods: (2.230943ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48220]
I0513 17:39:19.078316  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-10: (1.163626ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48222]
I0513 17:39:19.078618  108115 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 17:39:19.078883  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-11
I0513 17:39:19.078899  108115 scheduler.go:452] Attempting to schedule pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-11
I0513 17:39:19.078991  108115 factory.go:649] Unable to schedule preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-11: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 17:39:19.079055  108115 factory.go:720] Updating pod condition for preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-11 to (PodScheduled==False, Reason=Unschedulable)
I0513 17:39:19.079749  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods: (1.631575ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48228]
I0513 17:39:19.081485  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods: (1.302287ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48228]
I0513 17:39:19.081957  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-11: (2.588073ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48226]
I0513 17:39:19.082019  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events: (2.126083ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48230]
I0513 17:39:19.082543  108115 wrap.go:47] PUT /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-11/status: (2.989098ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48222]
I0513 17:39:19.084352  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods: (2.4632ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48228]
I0513 17:39:19.084386  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-11: (1.321983ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48226]
I0513 17:39:19.085020  108115 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 17:39:19.085257  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-12
I0513 17:39:19.085280  108115 scheduler.go:452] Attempting to schedule pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-12
I0513 17:39:19.085428  108115 factory.go:649] Unable to schedule preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-12: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 17:39:19.085493  108115 factory.go:720] Updating pod condition for preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-12 to (PodScheduled==False, Reason=Unschedulable)
I0513 17:39:19.087957  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events: (1.85287ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48232]
I0513 17:39:19.088305  108115 wrap.go:47] PUT /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-12/status: (2.580194ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48228]
I0513 17:39:19.089672  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-12: (895.726µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48228]
I0513 17:39:19.089903  108115 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 17:39:19.090042  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-13
I0513 17:39:19.090061  108115 scheduler.go:452] Attempting to schedule pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-13
I0513 17:39:19.090252  108115 factory.go:649] Unable to schedule preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-13: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 17:39:19.090298  108115 factory.go:720] Updating pod condition for preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-13 to (PodScheduled==False, Reason=Unschedulable)
I0513 17:39:19.093777  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/events: (2.670962ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48236]
I0513 17:39:19.094190  108115 wrap.go:47] PUT /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-13/status: (3.679994ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48232]
I0513 17:39:19.094495  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-13: (3.355861ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48234]
I0513 17:39:19.094875  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods: (4.864335ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48228]
I0513 17:39:19.095178  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-12: (4.988145ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48230]
I0513 17:39:19.101895  108115 wrap.go:47] GET /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods/ppod-13: (3.662245ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48236]
I0513 17:39:19.102156  108115 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 17:39:19.102451  108115 wrap.go:47] POST /api/v1/namespaces/preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/pods: (5.024385ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48228]
I0513 17:39:19.102553  108115 scheduling_queue.go:795] About to try and schedule pod preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-14
I0513 17:39:19.102566  108115 scheduler.go:452] Attempting to schedule pod: preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-14
I0513 17:39:19.102662  108115 factory.go:649] Unable to schedule preemption-racee4d7c714-c87b-4950-95e6-b2def0b46475/ppod-14: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 17:39:19.102703  108115 factory.go:720] Updating pod condition for preemption-racee4d7c714-c87b-495