Result: FAILURE
Tests: 1 failed / 1419 succeeded
Started: 2019-05-13 18:15
Elapsed: 32m43s
Revision:
Builder: gke-prow-containerd-pool-99179761-nff2
pod: 0f226c04-75ab-11e9-bdf5-0a580a6c1546
resultstore: https://source.cloud.google.com/results/invocations/94d11037-4f31-414f-b74e-77c0fdc810c2/targets/test
infra-commit: c92d8e09a
repo: k8s.io/kubernetes
repo-commit: 3d12466c0221549ffd0651668e078b53482bcad2
repos: {k8s.io/kubernetes: master}

Test Failures


k8s.io/kubernetes/test/integration/scheduler TestPreemptionRaces 26s

go test -v k8s.io/kubernetes/test/integration/scheduler -run TestPreemptionRaces$
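A minimal local-reproduction sketch (not part of the original job output): the scheduler integration test expects an etcd binary on PATH, which the log below reflects by connecting to http://127.0.0.1:2379. The hack/install-etcd.sh step and third_party/etcd path are the usual Kubernetes repo conventions and are assumptions here, not details taken from this log.

  ./hack/install-etcd.sh                        # installs etcd into third_party/etcd
  export PATH="$(pwd)/third_party/etcd:$PATH"   # integration tests look for etcd on PATH
  go test -v k8s.io/kubernetes/test/integration/scheduler -run 'TestPreemptionRaces$'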
I0513 18:40:23.930117  107570 services.go:33] Network range for service cluster IPs is unspecified. Defaulting to {10.0.0.0 ffffff00}.
I0513 18:40:23.930148  107570 services.go:45] Setting service IP to "10.0.0.1" (read-write).
I0513 18:40:23.930159  107570 master.go:277] Node port range unspecified. Defaulting to 30000-32767.
I0513 18:40:23.930169  107570 master.go:233] Using reconciler: 
I0513 18:40:23.932089  107570 storage_factory.go:285] storing podtemplates in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"7b44c8b6-a91c-4713-b3e5-bdb0d5bf5e42", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0513 18:40:23.932193  107570 client.go:354] parsed scheme: ""
I0513 18:40:23.932210  107570 client.go:354] scheme "" not registered, fallback to default scheme
I0513 18:40:23.932244  107570 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0513 18:40:23.932311  107570 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0513 18:40:23.932698  107570 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0513 18:40:23.932897  107570 store.go:1320] Monitoring podtemplates count at <storage-prefix>//podtemplates
I0513 18:40:23.932969  107570 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"7b44c8b6-a91c-4713-b3e5-bdb0d5bf5e42", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0513 18:40:23.932999  107570 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0513 18:40:23.933038  107570 reflector.go:160] Listing and watching *core.PodTemplate from storage/cacher.go:/podtemplates
I0513 18:40:23.933207  107570 client.go:354] parsed scheme: ""
I0513 18:40:23.933225  107570 client.go:354] scheme "" not registered, fallback to default scheme
I0513 18:40:23.933264  107570 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0513 18:40:23.933315  107570 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0513 18:40:23.934165  107570 watch_cache.go:405] Replace watchCache (rev: 24290) 
I0513 18:40:23.934543  107570 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0513 18:40:23.934661  107570 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0513 18:40:23.934781  107570 store.go:1320] Monitoring events count at <storage-prefix>//events
I0513 18:40:23.934854  107570 reflector.go:160] Listing and watching *core.Event from storage/cacher.go:/events
I0513 18:40:23.935120  107570 storage_factory.go:285] storing limitranges in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"7b44c8b6-a91c-4713-b3e5-bdb0d5bf5e42", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0513 18:40:23.935207  107570 client.go:354] parsed scheme: ""
I0513 18:40:23.935223  107570 client.go:354] scheme "" not registered, fallback to default scheme
I0513 18:40:23.935253  107570 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0513 18:40:23.935303  107570 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0513 18:40:23.935740  107570 watch_cache.go:405] Replace watchCache (rev: 24290) 
I0513 18:40:23.935805  107570 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0513 18:40:23.936096  107570 store.go:1320] Monitoring limitranges count at <storage-prefix>//limitranges
I0513 18:40:23.936144  107570 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0513 18:40:23.936134  107570 storage_factory.go:285] storing resourcequotas in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"7b44c8b6-a91c-4713-b3e5-bdb0d5bf5e42", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0513 18:40:23.936244  107570 client.go:354] parsed scheme: ""
I0513 18:40:23.936277  107570 client.go:354] scheme "" not registered, fallback to default scheme
I0513 18:40:23.936319  107570 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0513 18:40:23.936323  107570 reflector.go:160] Listing and watching *core.LimitRange from storage/cacher.go:/limitranges
I0513 18:40:23.936414  107570 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0513 18:40:23.937012  107570 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0513 18:40:23.937157  107570 store.go:1320] Monitoring resourcequotas count at <storage-prefix>//resourcequotas
I0513 18:40:23.937310  107570 storage_factory.go:285] storing secrets in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"7b44c8b6-a91c-4713-b3e5-bdb0d5bf5e42", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0513 18:40:23.937375  107570 client.go:354] parsed scheme: ""
I0513 18:40:23.937393  107570 client.go:354] scheme "" not registered, fallback to default scheme
I0513 18:40:23.937431  107570 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0513 18:40:23.937487  107570 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0513 18:40:23.937526  107570 reflector.go:160] Listing and watching *core.ResourceQuota from storage/cacher.go:/resourcequotas
I0513 18:40:23.937678  107570 watch_cache.go:405] Replace watchCache (rev: 24290) 
I0513 18:40:23.937735  107570 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0513 18:40:23.938706  107570 watch_cache.go:405] Replace watchCache (rev: 24290) 
I0513 18:40:23.939108  107570 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0513 18:40:23.939208  107570 store.go:1320] Monitoring secrets count at <storage-prefix>//secrets
I0513 18:40:23.939258  107570 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0513 18:40:23.939336  107570 storage_factory.go:285] storing persistentvolumes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"7b44c8b6-a91c-4713-b3e5-bdb0d5bf5e42", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0513 18:40:23.939395  107570 client.go:354] parsed scheme: ""
I0513 18:40:23.939410  107570 client.go:354] scheme "" not registered, fallback to default scheme
I0513 18:40:23.939448  107570 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0513 18:40:23.939471  107570 reflector.go:160] Listing and watching *core.Secret from storage/cacher.go:/secrets
I0513 18:40:23.939503  107570 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0513 18:40:23.940266  107570 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0513 18:40:23.940429  107570 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0513 18:40:23.940627  107570 store.go:1320] Monitoring persistentvolumes count at <storage-prefix>//persistentvolumes
I0513 18:40:23.940769  107570 watch_cache.go:405] Replace watchCache (rev: 24290) 
I0513 18:40:23.940682  107570 reflector.go:160] Listing and watching *core.PersistentVolume from storage/cacher.go:/persistentvolumes
I0513 18:40:23.941968  107570 watch_cache.go:405] Replace watchCache (rev: 24290) 
I0513 18:40:23.942415  107570 storage_factory.go:285] storing persistentvolumeclaims in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"7b44c8b6-a91c-4713-b3e5-bdb0d5bf5e42", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0513 18:40:23.942508  107570 client.go:354] parsed scheme: ""
I0513 18:40:23.942582  107570 client.go:354] scheme "" not registered, fallback to default scheme
I0513 18:40:23.942659  107570 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0513 18:40:23.942768  107570 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0513 18:40:23.943258  107570 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0513 18:40:23.943425  107570 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0513 18:40:23.943790  107570 store.go:1320] Monitoring persistentvolumeclaims count at <storage-prefix>//persistentvolumeclaims
I0513 18:40:23.943835  107570 reflector.go:160] Listing and watching *core.PersistentVolumeClaim from storage/cacher.go:/persistentvolumeclaims
I0513 18:40:23.944003  107570 storage_factory.go:285] storing configmaps in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"7b44c8b6-a91c-4713-b3e5-bdb0d5bf5e42", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0513 18:40:23.944079  107570 client.go:354] parsed scheme: ""
I0513 18:40:23.944110  107570 client.go:354] scheme "" not registered, fallback to default scheme
I0513 18:40:23.944138  107570 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0513 18:40:23.944279  107570 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0513 18:40:23.944539  107570 watch_cache.go:405] Replace watchCache (rev: 24290) 
I0513 18:40:23.944879  107570 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0513 18:40:23.944939  107570 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0513 18:40:23.945065  107570 store.go:1320] Monitoring configmaps count at <storage-prefix>//configmaps
I0513 18:40:23.945121  107570 reflector.go:160] Listing and watching *core.ConfigMap from storage/cacher.go:/configmaps
I0513 18:40:23.945323  107570 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"7b44c8b6-a91c-4713-b3e5-bdb0d5bf5e42", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0513 18:40:23.945424  107570 client.go:354] parsed scheme: ""
I0513 18:40:23.945447  107570 client.go:354] scheme "" not registered, fallback to default scheme
I0513 18:40:23.945480  107570 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0513 18:40:23.945583  107570 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0513 18:40:23.945874  107570 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0513 18:40:23.946032  107570 store.go:1320] Monitoring namespaces count at <storage-prefix>//namespaces
I0513 18:40:23.946095  107570 watch_cache.go:405] Replace watchCache (rev: 24290) 
I0513 18:40:23.946328  107570 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0513 18:40:23.946487  107570 reflector.go:160] Listing and watching *core.Namespace from storage/cacher.go:/namespaces
I0513 18:40:23.946209  107570 storage_factory.go:285] storing endpoints in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"7b44c8b6-a91c-4713-b3e5-bdb0d5bf5e42", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0513 18:40:23.947122  107570 client.go:354] parsed scheme: ""
I0513 18:40:23.947173  107570 client.go:354] scheme "" not registered, fallback to default scheme
I0513 18:40:23.947221  107570 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0513 18:40:23.947305  107570 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0513 18:40:23.947376  107570 watch_cache.go:405] Replace watchCache (rev: 24290) 
I0513 18:40:23.948030  107570 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0513 18:40:23.948085  107570 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0513 18:40:23.948246  107570 store.go:1320] Monitoring endpoints count at <storage-prefix>//services/endpoints
I0513 18:40:23.948262  107570 reflector.go:160] Listing and watching *core.Endpoints from storage/cacher.go:/services/endpoints
I0513 18:40:23.948397  107570 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"7b44c8b6-a91c-4713-b3e5-bdb0d5bf5e42", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0513 18:40:23.948480  107570 client.go:354] parsed scheme: ""
I0513 18:40:23.948494  107570 client.go:354] scheme "" not registered, fallback to default scheme
I0513 18:40:23.948547  107570 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0513 18:40:23.948609  107570 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0513 18:40:23.949087  107570 watch_cache.go:405] Replace watchCache (rev: 24290) 
I0513 18:40:23.950988  107570 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0513 18:40:23.951069  107570 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0513 18:40:23.951111  107570 store.go:1320] Monitoring nodes count at <storage-prefix>//minions
I0513 18:40:23.951161  107570 reflector.go:160] Listing and watching *core.Node from storage/cacher.go:/minions
I0513 18:40:23.951322  107570 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"7b44c8b6-a91c-4713-b3e5-bdb0d5bf5e42", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0513 18:40:23.951424  107570 client.go:354] parsed scheme: ""
I0513 18:40:23.951788  107570 client.go:354] scheme "" not registered, fallback to default scheme
I0513 18:40:23.951868  107570 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0513 18:40:23.952119  107570 watch_cache.go:405] Replace watchCache (rev: 24290) 
I0513 18:40:23.951950  107570 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0513 18:40:23.952925  107570 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0513 18:40:23.953001  107570 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0513 18:40:23.953041  107570 store.go:1320] Monitoring pods count at <storage-prefix>//pods
I0513 18:40:23.953105  107570 reflector.go:160] Listing and watching *core.Pod from storage/cacher.go:/pods
I0513 18:40:23.953169  107570 storage_factory.go:285] storing serviceaccounts in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"7b44c8b6-a91c-4713-b3e5-bdb0d5bf5e42", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0513 18:40:23.953231  107570 client.go:354] parsed scheme: ""
I0513 18:40:23.953246  107570 client.go:354] scheme "" not registered, fallback to default scheme
I0513 18:40:23.953273  107570 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0513 18:40:23.953322  107570 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0513 18:40:23.953636  107570 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0513 18:40:23.953867  107570 watch_cache.go:405] Replace watchCache (rev: 24290) 
I0513 18:40:23.954097  107570 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0513 18:40:23.954245  107570 store.go:1320] Monitoring serviceaccounts count at <storage-prefix>//serviceaccounts
I0513 18:40:23.954289  107570 reflector.go:160] Listing and watching *core.ServiceAccount from storage/cacher.go:/serviceaccounts
I0513 18:40:23.954491  107570 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"7b44c8b6-a91c-4713-b3e5-bdb0d5bf5e42", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0513 18:40:23.955341  107570 watch_cache.go:405] Replace watchCache (rev: 24290) 
I0513 18:40:23.955395  107570 client.go:354] parsed scheme: ""
I0513 18:40:23.955411  107570 client.go:354] scheme "" not registered, fallback to default scheme
I0513 18:40:23.955448  107570 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0513 18:40:23.955502  107570 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0513 18:40:23.955767  107570 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0513 18:40:23.955918  107570 store.go:1320] Monitoring services count at <storage-prefix>//services/specs
I0513 18:40:23.955956  107570 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"7b44c8b6-a91c-4713-b3e5-bdb0d5bf5e42", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0513 18:40:23.956041  107570 client.go:354] parsed scheme: ""
I0513 18:40:23.956056  107570 client.go:354] scheme "" not registered, fallback to default scheme
I0513 18:40:23.956083  107570 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0513 18:40:23.956137  107570 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0513 18:40:23.956172  107570 reflector.go:160] Listing and watching *core.Service from storage/cacher.go:/services/specs
I0513 18:40:23.956372  107570 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0513 18:40:23.957253  107570 watch_cache.go:405] Replace watchCache (rev: 24290) 
I0513 18:40:23.958294  107570 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0513 18:40:23.958395  107570 client.go:354] parsed scheme: ""
I0513 18:40:23.958410  107570 client.go:354] scheme "" not registered, fallback to default scheme
I0513 18:40:23.958442  107570 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0513 18:40:23.958484  107570 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0513 18:40:23.958510  107570 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0513 18:40:23.959010  107570 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0513 18:40:23.959193  107570 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"7b44c8b6-a91c-4713-b3e5-bdb0d5bf5e42", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0513 18:40:23.959255  107570 client.go:354] parsed scheme: ""
I0513 18:40:23.959269  107570 client.go:354] scheme "" not registered, fallback to default scheme
I0513 18:40:23.959302  107570 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0513 18:40:23.959369  107570 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0513 18:40:23.959413  107570 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0513 18:40:23.959855  107570 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0513 18:40:23.959923  107570 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0513 18:40:23.960226  107570 store.go:1320] Monitoring replicationcontrollers count at <storage-prefix>//controllers
I0513 18:40:23.961353  107570 reflector.go:160] Listing and watching *core.ReplicationController from storage/cacher.go:/controllers
I0513 18:40:23.962512  107570 watch_cache.go:405] Replace watchCache (rev: 24290) 
I0513 18:40:23.974518  107570 master.go:417] Skipping disabled API group "auditregistration.k8s.io".
I0513 18:40:23.974553  107570 master.go:425] Enabling API group "authentication.k8s.io".
I0513 18:40:23.974569  107570 master.go:425] Enabling API group "authorization.k8s.io".
I0513 18:40:23.974729  107570 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"7b44c8b6-a91c-4713-b3e5-bdb0d5bf5e42", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0513 18:40:23.974881  107570 client.go:354] parsed scheme: ""
I0513 18:40:23.974954  107570 client.go:354] scheme "" not registered, fallback to default scheme
I0513 18:40:23.975013  107570 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0513 18:40:23.975147  107570 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0513 18:40:23.975905  107570 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0513 18:40:23.975997  107570 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0513 18:40:23.976161  107570 store.go:1320] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I0513 18:40:23.976263  107570 reflector.go:160] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I0513 18:40:23.976400  107570 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"7b44c8b6-a91c-4713-b3e5-bdb0d5bf5e42", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0513 18:40:23.976623  107570 client.go:354] parsed scheme: ""
I0513 18:40:23.976678  107570 client.go:354] scheme "" not registered, fallback to default scheme
I0513 18:40:23.976757  107570 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0513 18:40:23.976876  107570 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0513 18:40:23.977227  107570 watch_cache.go:405] Replace watchCache (rev: 24290) 
I0513 18:40:23.978082  107570 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0513 18:40:23.978218  107570 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0513 18:40:23.978376  107570 store.go:1320] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I0513 18:40:23.978465  107570 reflector.go:160] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I0513 18:40:23.978907  107570 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"7b44c8b6-a91c-4713-b3e5-bdb0d5bf5e42", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0513 18:40:23.979645  107570 client.go:354] parsed scheme: ""
I0513 18:40:23.979509  107570 watch_cache.go:405] Replace watchCache (rev: 24290) 
I0513 18:40:23.979705  107570 client.go:354] scheme "" not registered, fallback to default scheme
I0513 18:40:23.979795  107570 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0513 18:40:23.979880  107570 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0513 18:40:23.982796  107570 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0513 18:40:23.982900  107570 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0513 18:40:23.983054  107570 store.go:1320] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I0513 18:40:23.983082  107570 master.go:425] Enabling API group "autoscaling".
I0513 18:40:23.983116  107570 reflector.go:160] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I0513 18:40:23.983231  107570 storage_factory.go:285] storing jobs.batch in batch/v1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"7b44c8b6-a91c-4713-b3e5-bdb0d5bf5e42", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0513 18:40:23.983310  107570 client.go:354] parsed scheme: ""
I0513 18:40:23.983453  107570 client.go:354] scheme "" not registered, fallback to default scheme
I0513 18:40:23.983530  107570 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0513 18:40:23.983621  107570 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0513 18:40:23.984259  107570 watch_cache.go:405] Replace watchCache (rev: 24290) 
I0513 18:40:23.984442  107570 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0513 18:40:23.984696  107570 store.go:1320] Monitoring jobs.batch count at <storage-prefix>//jobs
I0513 18:40:23.984764  107570 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0513 18:40:23.984901  107570 reflector.go:160] Listing and watching *batch.Job from storage/cacher.go:/jobs
I0513 18:40:23.984976  107570 storage_factory.go:285] storing cronjobs.batch in batch/v1beta1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"7b44c8b6-a91c-4713-b3e5-bdb0d5bf5e42", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0513 18:40:23.985247  107570 client.go:354] parsed scheme: ""
I0513 18:40:23.985376  107570 client.go:354] scheme "" not registered, fallback to default scheme
I0513 18:40:23.985454  107570 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0513 18:40:23.985526  107570 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0513 18:40:23.985959  107570 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0513 18:40:23.986117  107570 store.go:1320] Monitoring cronjobs.batch count at <storage-prefix>//cronjobs
I0513 18:40:23.986179  107570 master.go:425] Enabling API group "batch".
I0513 18:40:23.986201  107570 reflector.go:160] Listing and watching *batch.CronJob from storage/cacher.go:/cronjobs
I0513 18:40:23.986144  107570 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0513 18:40:23.986702  107570 watch_cache.go:405] Replace watchCache (rev: 24290) 
I0513 18:40:23.986759  107570 watch_cache.go:405] Replace watchCache (rev: 24290) 
I0513 18:40:23.996900  107570 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"7b44c8b6-a91c-4713-b3e5-bdb0d5bf5e42", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0513 18:40:23.997143  107570 client.go:354] parsed scheme: ""
I0513 18:40:23.997190  107570 client.go:354] scheme "" not registered, fallback to default scheme
I0513 18:40:23.997273  107570 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0513 18:40:23.997405  107570 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0513 18:40:23.998332  107570 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0513 18:40:23.998505  107570 store.go:1320] Monitoring certificatesigningrequests.certificates.k8s.io count at <storage-prefix>//certificatesigningrequests
I0513 18:40:23.998518  107570 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0513 18:40:23.998533  107570 master.go:425] Enabling API group "certificates.k8s.io".
I0513 18:40:23.998586  107570 reflector.go:160] Listing and watching *certificates.CertificateSigningRequest from storage/cacher.go:/certificatesigningrequests
I0513 18:40:23.998715  107570 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"7b44c8b6-a91c-4713-b3e5-bdb0d5bf5e42", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0513 18:40:23.998800  107570 client.go:354] parsed scheme: ""
I0513 18:40:23.998810  107570 client.go:354] scheme "" not registered, fallback to default scheme
I0513 18:40:23.999897  107570 watch_cache.go:405] Replace watchCache (rev: 24290) 
I0513 18:40:23.999974  107570 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0513 18:40:24.000024  107570 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0513 18:40:24.001006  107570 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0513 18:40:24.001129  107570 store.go:1320] Monitoring leases.coordination.k8s.io count at <storage-prefix>//leases
I0513 18:40:24.001278  107570 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"7b44c8b6-a91c-4713-b3e5-bdb0d5bf5e42", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0513 18:40:24.001366  107570 client.go:354] parsed scheme: ""
I0513 18:40:24.001379  107570 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0513 18:40:24.001381  107570 client.go:354] scheme "" not registered, fallback to default scheme
I0513 18:40:24.001434  107570 reflector.go:160] Listing and watching *coordination.Lease from storage/cacher.go:/leases
I0513 18:40:24.001470  107570 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0513 18:40:24.001619  107570 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0513 18:40:24.001942  107570 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0513 18:40:24.002056  107570 store.go:1320] Monitoring leases.coordination.k8s.io count at <storage-prefix>//leases
I0513 18:40:24.002070  107570 master.go:425] Enabling API group "coordination.k8s.io".
I0513 18:40:24.002214  107570 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"7b44c8b6-a91c-4713-b3e5-bdb0d5bf5e42", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0513 18:40:24.002269  107570 client.go:354] parsed scheme: ""
I0513 18:40:24.002279  107570 client.go:354] scheme "" not registered, fallback to default scheme
I0513 18:40:24.002306  107570 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0513 18:40:24.002348  107570 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0513 18:40:24.002396  107570 reflector.go:160] Listing and watching *coordination.Lease from storage/cacher.go:/leases
I0513 18:40:24.002601  107570 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0513 18:40:24.002885  107570 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0513 18:40:24.002972  107570 store.go:1320] Monitoring replicationcontrollers count at <storage-prefix>//controllers
I0513 18:40:24.002975  107570 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0513 18:40:24.003157  107570 reflector.go:160] Listing and watching *core.ReplicationController from storage/cacher.go:/controllers
I0513 18:40:24.004142  107570 watch_cache.go:405] Replace watchCache (rev: 24290) 
I0513 18:40:24.004356  107570 watch_cache.go:405] Replace watchCache (rev: 24290) 
I0513 18:40:24.004422  107570 watch_cache.go:405] Replace watchCache (rev: 24290) 
I0513 18:40:24.004883  107570 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"7b44c8b6-a91c-4713-b3e5-bdb0d5bf5e42", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0513 18:40:24.004948  107570 client.go:354] parsed scheme: ""
I0513 18:40:24.004959  107570 client.go:354] scheme "" not registered, fallback to default scheme
I0513 18:40:24.004988  107570 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0513 18:40:24.005040  107570 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0513 18:40:24.005478  107570 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0513 18:40:24.005519  107570 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0513 18:40:24.005594  107570 store.go:1320] Monitoring daemonsets.apps count at <storage-prefix>//daemonsets
I0513 18:40:24.005658  107570 reflector.go:160] Listing and watching *apps.DaemonSet from storage/cacher.go:/daemonsets
I0513 18:40:24.005782  107570 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"7b44c8b6-a91c-4713-b3e5-bdb0d5bf5e42", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0513 18:40:24.005910  107570 client.go:354] parsed scheme: ""
I0513 18:40:24.005921  107570 client.go:354] scheme "" not registered, fallback to default scheme
I0513 18:40:24.005949  107570 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0513 18:40:24.005987  107570 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0513 18:40:24.006483  107570 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0513 18:40:24.006529  107570 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0513 18:40:24.006641  107570 store.go:1320] Monitoring deployments.apps count at <storage-prefix>//deployments
I0513 18:40:24.006678  107570 reflector.go:160] Listing and watching *apps.Deployment from storage/cacher.go:/deployments
I0513 18:40:24.006902  107570 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"7b44c8b6-a91c-4713-b3e5-bdb0d5bf5e42", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0513 18:40:24.007094  107570 client.go:354] parsed scheme: ""
I0513 18:40:24.007110  107570 client.go:354] scheme "" not registered, fallback to default scheme
I0513 18:40:24.007323  107570 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0513 18:40:24.007414  107570 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0513 18:40:24.008036  107570 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0513 18:40:24.008133  107570 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0513 18:40:24.008236  107570 store.go:1320] Monitoring ingresses.networking.k8s.io count at <storage-prefix>//ingress
I0513 18:40:24.008399  107570 reflector.go:160] Listing and watching *networking.Ingress from storage/cacher.go:/ingress
I0513 18:40:24.008587  107570 watch_cache.go:405] Replace watchCache (rev: 24290) 
I0513 18:40:24.008676  107570 watch_cache.go:405] Replace watchCache (rev: 24290) 
I0513 18:40:24.008902  107570 storage_factory.go:285] storing podsecuritypolicies.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"7b44c8b6-a91c-4713-b3e5-bdb0d5bf5e42", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0513 18:40:24.009013  107570 client.go:354] parsed scheme: ""
I0513 18:40:24.009059  107570 client.go:354] scheme "" not registered, fallback to default scheme
I0513 18:40:24.009105  107570 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0513 18:40:24.009166  107570 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0513 18:40:24.009700  107570 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0513 18:40:24.009901  107570 store.go:1320] Monitoring podsecuritypolicies.policy count at <storage-prefix>//podsecuritypolicy
I0513 18:40:24.009958  107570 reflector.go:160] Listing and watching *policy.PodSecurityPolicy from storage/cacher.go:/podsecuritypolicy
I0513 18:40:24.010083  107570 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"7b44c8b6-a91c-4713-b3e5-bdb0d5bf5e42", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0513 18:40:24.010143  107570 client.go:354] parsed scheme: ""
I0513 18:40:24.010214  107570 client.go:354] scheme "" not registered, fallback to default scheme
I0513 18:40:24.010245  107570 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0513 18:40:24.010287  107570 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0513 18:40:24.011273  107570 watch_cache.go:405] Replace watchCache (rev: 24290) 
I0513 18:40:24.011306  107570 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0513 18:40:24.011774  107570 watch_cache.go:405] Replace watchCache (rev: 24290) 
I0513 18:40:24.013769  107570 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0513 18:40:24.013897  107570 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0513 18:40:24.014068  107570 store.go:1320] Monitoring replicasets.apps count at <storage-prefix>//replicasets
I0513 18:40:24.014148  107570 reflector.go:160] Listing and watching *apps.ReplicaSet from storage/cacher.go:/replicasets
I0513 18:40:24.014217  107570 storage_factory.go:285] storing networkpolicies.networking.k8s.io in networking.k8s.io/v1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"7b44c8b6-a91c-4713-b3e5-bdb0d5bf5e42", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0513 18:40:24.014284  107570 client.go:354] parsed scheme: ""
I0513 18:40:24.014299  107570 client.go:354] scheme "" not registered, fallback to default scheme
I0513 18:40:24.014330  107570 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0513 18:40:24.014443  107570 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0513 18:40:24.017050  107570 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0513 18:40:24.017202  107570 store.go:1320] Monitoring networkpolicies.networking.k8s.io count at <storage-prefix>//networkpolicies
I0513 18:40:24.017221  107570 master.go:425] Enabling API group "extensions".
I0513 18:40:24.017271  107570 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0513 18:40:24.017349  107570 reflector.go:160] Listing and watching *networking.NetworkPolicy from storage/cacher.go:/networkpolicies
I0513 18:40:24.017455  107570 storage_factory.go:285] storing networkpolicies.networking.k8s.io in networking.k8s.io/v1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"7b44c8b6-a91c-4713-b3e5-bdb0d5bf5e42", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0513 18:40:24.017524  107570 client.go:354] parsed scheme: ""
I0513 18:40:24.017541  107570 client.go:354] scheme "" not registered, fallback to default scheme
I0513 18:40:24.017580  107570 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0513 18:40:24.017627  107570 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0513 18:40:24.017950  107570 watch_cache.go:405] Replace watchCache (rev: 24290) 
I0513 18:40:24.017983  107570 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0513 18:40:24.018728  107570 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0513 18:40:24.018431  107570 watch_cache.go:405] Replace watchCache (rev: 24290) 
I0513 18:40:24.019154  107570 store.go:1320] Monitoring networkpolicies.networking.k8s.io count at <storage-prefix>//networkpolicies
I0513 18:40:24.019345  107570 reflector.go:160] Listing and watching *networking.NetworkPolicy from storage/cacher.go:/networkpolicies
I0513 18:40:24.019515  107570 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"7b44c8b6-a91c-4713-b3e5-bdb0d5bf5e42", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0513 18:40:24.019711  107570 client.go:354] parsed scheme: ""
I0513 18:40:24.019750  107570 client.go:354] scheme "" not registered, fallback to default scheme
I0513 18:40:24.019851  107570 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0513 18:40:24.019969  107570 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0513 18:40:24.020757  107570 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0513 18:40:24.021102  107570 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0513 18:40:24.020801  107570 watch_cache.go:405] Replace watchCache (rev: 24290) 
I0513 18:40:24.021272  107570 store.go:1320] Monitoring ingresses.networking.k8s.io count at <storage-prefix>//ingress
I0513 18:40:24.021332  107570 master.go:425] Enabling API group "networking.k8s.io".
I0513 18:40:24.021343  107570 reflector.go:160] Listing and watching *networking.Ingress from storage/cacher.go:/ingress
I0513 18:40:24.021368  107570 storage_factory.go:285] storing runtimeclasses.node.k8s.io in node.k8s.io/v1beta1, reading as node.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"7b44c8b6-a91c-4713-b3e5-bdb0d5bf5e42", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0513 18:40:24.022060  107570 client.go:354] parsed scheme: ""
I0513 18:40:24.022770  107570 client.go:354] scheme "" not registered, fallback to default scheme
I0513 18:40:24.022903  107570 watch_cache.go:405] Replace watchCache (rev: 24290) 
I0513 18:40:24.023010  107570 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0513 18:40:24.023067  107570 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0513 18:40:24.023376  107570 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0513 18:40:24.023397  107570 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0513 18:40:24.024003  107570 store.go:1320] Monitoring runtimeclasses.node.k8s.io count at <storage-prefix>//runtimeclasses
I0513 18:40:24.024060  107570 master.go:425] Enabling API group "node.k8s.io".
I0513 18:40:24.024077  107570 reflector.go:160] Listing and watching *node.RuntimeClass from storage/cacher.go:/runtimeclasses
I0513 18:40:24.024447  107570 storage_factory.go:285] storing poddisruptionbudgets.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"7b44c8b6-a91c-4713-b3e5-bdb0d5bf5e42", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0513 18:40:24.024577  107570 client.go:354] parsed scheme: ""
I0513 18:40:24.024733  107570 client.go:354] scheme "" not registered, fallback to default scheme
I0513 18:40:24.024965  107570 watch_cache.go:405] Replace watchCache (rev: 24290) 
I0513 18:40:24.025355  107570 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0513 18:40:24.025609  107570 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0513 18:40:24.026365  107570 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0513 18:40:24.026572  107570 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0513 18:40:24.026684  107570 store.go:1320] Monitoring poddisruptionbudgets.policy count at <storage-prefix>//poddisruptionbudgets
I0513 18:40:24.026859  107570 reflector.go:160] Listing and watching *policy.PodDisruptionBudget from storage/cacher.go:/poddisruptionbudgets
I0513 18:40:24.027094  107570 storage_factory.go:285] storing podsecuritypolicies.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"7b44c8b6-a91c-4713-b3e5-bdb0d5bf5e42", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0513 18:40:24.027168  107570 client.go:354] parsed scheme: ""
I0513 18:40:24.027184  107570 client.go:354] scheme "" not registered, fallback to default scheme
I0513 18:40:24.027215  107570 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0513 18:40:24.027259  107570 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0513 18:40:24.027526  107570 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0513 18:40:24.027659  107570 store.go:1320] Monitoring podsecuritypolicies.policy count at <storage-prefix>//podsecuritypolicy
I0513 18:40:24.027674  107570 master.go:425] Enabling API group "policy".
I0513 18:40:24.027708  107570 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"7b44c8b6-a91c-4713-b3e5-bdb0d5bf5e42", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0513 18:40:24.027763  107570 client.go:354] parsed scheme: ""
I0513 18:40:24.027773  107570 client.go:354] scheme "" not registered, fallback to default scheme
I0513 18:40:24.027803  107570 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0513 18:40:24.027910  107570 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0513 18:40:24.027950  107570 reflector.go:160] Listing and watching *policy.PodSecurityPolicy from storage/cacher.go:/podsecuritypolicy
I0513 18:40:24.028021  107570 watch_cache.go:405] Replace watchCache (rev: 24290) 
I0513 18:40:24.028122  107570 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0513 18:40:24.028384  107570 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0513 18:40:24.028516  107570 store.go:1320] Monitoring roles.rbac.authorization.k8s.io count at <storage-prefix>//roles
I0513 18:40:24.028538  107570 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0513 18:40:24.028637  107570 reflector.go:160] Listing and watching *rbac.Role from storage/cacher.go:/roles
I0513 18:40:24.028659  107570 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"7b44c8b6-a91c-4713-b3e5-bdb0d5bf5e42", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0513 18:40:24.028882  107570 client.go:354] parsed scheme: ""
I0513 18:40:24.029712  107570 client.go:354] scheme "" not registered, fallback to default scheme
I0513 18:40:24.029344  107570 watch_cache.go:405] Replace watchCache (rev: 24290) 
I0513 18:40:24.029934  107570 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0513 18:40:24.029740  107570 watch_cache.go:405] Replace watchCache (rev: 24290) 
I0513 18:40:24.030055  107570 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0513 18:40:24.031356  107570 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0513 18:40:24.031404  107570 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0513 18:40:24.031510  107570 store.go:1320] Monitoring rolebindings.rbac.authorization.k8s.io count at <storage-prefix>//rolebindings
I0513 18:40:24.031610  107570 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"7b44c8b6-a91c-4713-b3e5-bdb0d5bf5e42", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0513 18:40:24.031664  107570 reflector.go:160] Listing and watching *rbac.RoleBinding from storage/cacher.go:/rolebindings
I0513 18:40:24.033215  107570 client.go:354] parsed scheme: ""
I0513 18:40:24.033245  107570 client.go:354] scheme "" not registered, fallback to default scheme
I0513 18:40:24.033345  107570 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0513 18:40:24.033410  107570 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0513 18:40:24.034340  107570 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0513 18:40:24.034415  107570 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0513 18:40:24.034550  107570 store.go:1320] Monitoring clusterroles.rbac.authorization.k8s.io count at <storage-prefix>//clusterroles
I0513 18:40:24.034625  107570 reflector.go:160] Listing and watching *rbac.ClusterRole from storage/cacher.go:/clusterroles
I0513 18:40:24.034749  107570 watch_cache.go:405] Replace watchCache (rev: 24290) 
I0513 18:40:24.034960  107570 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"7b44c8b6-a91c-4713-b3e5-bdb0d5bf5e42", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0513 18:40:24.035034  107570 client.go:354] parsed scheme: ""
I0513 18:40:24.035049  107570 client.go:354] scheme "" not registered, fallback to default scheme
I0513 18:40:24.035088  107570 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0513 18:40:24.035146  107570 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0513 18:40:24.035626  107570 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0513 18:40:24.035692  107570 watch_cache.go:405] Replace watchCache (rev: 24290) 
I0513 18:40:24.035741  107570 store.go:1320] Monitoring clusterrolebindings.rbac.authorization.k8s.io count at <storage-prefix>//clusterrolebindings
I0513 18:40:24.035785  107570 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"7b44c8b6-a91c-4713-b3e5-bdb0d5bf5e42", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0513 18:40:24.035868  107570 client.go:354] parsed scheme: ""
I0513 18:40:24.035885  107570 client.go:354] scheme "" not registered, fallback to default scheme
I0513 18:40:24.035919  107570 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0513 18:40:24.035969  107570 reflector.go:160] Listing and watching *rbac.ClusterRoleBinding from storage/cacher.go:/clusterrolebindings
I0513 18:40:24.035992  107570 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0513 18:40:24.036153  107570 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0513 18:40:24.037310  107570 watch_cache.go:405] Replace watchCache (rev: 24290) 
I0513 18:40:24.037945  107570 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0513 18:40:24.038050  107570 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0513 18:40:24.038184  107570 store.go:1320] Monitoring roles.rbac.authorization.k8s.io count at <storage-prefix>//roles
I0513 18:40:24.038287  107570 reflector.go:160] Listing and watching *rbac.Role from storage/cacher.go:/roles
I0513 18:40:24.038342  107570 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"7b44c8b6-a91c-4713-b3e5-bdb0d5bf5e42", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0513 18:40:24.038410  107570 client.go:354] parsed scheme: ""
I0513 18:40:24.038432  107570 client.go:354] scheme "" not registered, fallback to default scheme
I0513 18:40:24.038472  107570 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0513 18:40:24.038584  107570 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0513 18:40:24.039210  107570 watch_cache.go:405] Replace watchCache (rev: 24290) 
I0513 18:40:24.039392  107570 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0513 18:40:24.039459  107570 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0513 18:40:24.039865  107570 store.go:1320] Monitoring rolebindings.rbac.authorization.k8s.io count at <storage-prefix>//rolebindings
I0513 18:40:24.039972  107570 reflector.go:160] Listing and watching *rbac.RoleBinding from storage/cacher.go:/rolebindings
I0513 18:40:24.040746  107570 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"7b44c8b6-a91c-4713-b3e5-bdb0d5bf5e42", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0513 18:40:24.040993  107570 client.go:354] parsed scheme: ""
I0513 18:40:24.041044  107570 client.go:354] scheme "" not registered, fallback to default scheme
I0513 18:40:24.041240  107570 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0513 18:40:24.041330  107570 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0513 18:40:24.041792  107570 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0513 18:40:24.042184  107570 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0513 18:40:24.042039  107570 watch_cache.go:405] Replace watchCache (rev: 24290) 
I0513 18:40:24.042306  107570 store.go:1320] Monitoring clusterroles.rbac.authorization.k8s.io count at <storage-prefix>//clusterroles
I0513 18:40:24.042464  107570 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"7b44c8b6-a91c-4713-b3e5-bdb0d5bf5e42", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0513 18:40:24.042521  107570 client.go:354] parsed scheme: ""
I0513 18:40:24.042535  107570 client.go:354] scheme "" not registered, fallback to default scheme
I0513 18:40:24.042582  107570 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0513 18:40:24.042732  107570 reflector.go:160] Listing and watching *rbac.ClusterRole from storage/cacher.go:/clusterroles
I0513 18:40:24.043499  107570 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0513 18:40:24.043766  107570 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0513 18:40:24.043909  107570 store.go:1320] Monitoring clusterrolebindings.rbac.authorization.k8s.io count at <storage-prefix>//clusterrolebindings
I0513 18:40:24.043940  107570 master.go:425] Enabling API group "rbac.authorization.k8s.io".
I0513 18:40:24.044085  107570 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0513 18:40:24.044115  107570 reflector.go:160] Listing and watching *rbac.ClusterRoleBinding from storage/cacher.go:/clusterrolebindings
I0513 18:40:24.045335  107570 watch_cache.go:405] Replace watchCache (rev: 24290) 
I0513 18:40:24.046279  107570 watch_cache.go:405] Replace watchCache (rev: 24290) 
I0513 18:40:24.047986  107570 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"7b44c8b6-a91c-4713-b3e5-bdb0d5bf5e42", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0513 18:40:24.048072  107570 client.go:354] parsed scheme: ""
I0513 18:40:24.048088  107570 client.go:354] scheme "" not registered, fallback to default scheme
I0513 18:40:24.048120  107570 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0513 18:40:24.048219  107570 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0513 18:40:24.048572  107570 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0513 18:40:24.048676  107570 store.go:1320] Monitoring priorityclasses.scheduling.k8s.io count at <storage-prefix>//priorityclasses
I0513 18:40:24.048887  107570 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"7b44c8b6-a91c-4713-b3e5-bdb0d5bf5e42", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0513 18:40:24.048956  107570 client.go:354] parsed scheme: ""
I0513 18:40:24.048972  107570 client.go:354] scheme "" not registered, fallback to default scheme
I0513 18:40:24.049002  107570 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0513 18:40:24.049057  107570 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0513 18:40:24.049093  107570 reflector.go:160] Listing and watching *scheduling.PriorityClass from storage/cacher.go:/priorityclasses
I0513 18:40:24.049412  107570 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0513 18:40:24.049762  107570 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0513 18:40:24.050074  107570 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0513 18:40:24.051579  107570 reflector.go:160] Listing and watching *scheduling.PriorityClass from storage/cacher.go:/priorityclasses
I0513 18:40:24.051363  107570 watch_cache.go:405] Replace watchCache (rev: 24290) 
I0513 18:40:24.051556  107570 store.go:1320] Monitoring priorityclasses.scheduling.k8s.io count at <storage-prefix>//priorityclasses
I0513 18:40:24.051812  107570 master.go:425] Enabling API group "scheduling.k8s.io".
I0513 18:40:24.052913  107570 watch_cache.go:405] Replace watchCache (rev: 24290) 
I0513 18:40:24.053174  107570 master.go:417] Skipping disabled API group "settings.k8s.io".
I0513 18:40:24.053402  107570 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"7b44c8b6-a91c-4713-b3e5-bdb0d5bf5e42", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0513 18:40:24.053515  107570 client.go:354] parsed scheme: ""
I0513 18:40:24.053593  107570 client.go:354] scheme "" not registered, fallback to default scheme
I0513 18:40:24.053673  107570 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0513 18:40:24.053844  107570 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0513 18:40:24.055775  107570 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0513 18:40:24.055892  107570 store.go:1320] Monitoring storageclasses.storage.k8s.io count at <storage-prefix>//storageclasses
I0513 18:40:24.056024  107570 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0513 18:40:24.056032  107570 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"7b44c8b6-a91c-4713-b3e5-bdb0d5bf5e42", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0513 18:40:24.056161  107570 reflector.go:160] Listing and watching *storage.StorageClass from storage/cacher.go:/storageclasses
I0513 18:40:24.056210  107570 client.go:354] parsed scheme: ""
I0513 18:40:24.056240  107570 client.go:354] scheme "" not registered, fallback to default scheme
I0513 18:40:24.056283  107570 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0513 18:40:24.056374  107570 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0513 18:40:24.062414  107570 watch_cache.go:405] Replace watchCache (rev: 24291) 
I0513 18:40:24.063414  107570 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0513 18:40:24.063484  107570 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0513 18:40:24.063663  107570 store.go:1320] Monitoring volumeattachments.storage.k8s.io count at <storage-prefix>//volumeattachments
I0513 18:40:24.063735  107570 storage_factory.go:285] storing csinodes.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"7b44c8b6-a91c-4713-b3e5-bdb0d5bf5e42", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0513 18:40:24.063888  107570 client.go:354] parsed scheme: ""
I0513 18:40:24.063908  107570 client.go:354] scheme "" not registered, fallback to default scheme
I0513 18:40:24.063945  107570 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0513 18:40:24.064021  107570 reflector.go:160] Listing and watching *storage.VolumeAttachment from storage/cacher.go:/volumeattachments
I0513 18:40:24.064330  107570 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0513 18:40:24.064717  107570 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0513 18:40:24.065022  107570 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0513 18:40:24.065095  107570 store.go:1320] Monitoring csinodes.storage.k8s.io count at <storage-prefix>//csinodes
I0513 18:40:24.065940  107570 watch_cache.go:405] Replace watchCache (rev: 24291) 
I0513 18:40:24.066736  107570 reflector.go:160] Listing and watching *storage.CSINode from storage/cacher.go:/csinodes
I0513 18:40:24.066968  107570 storage_factory.go:285] storing csidrivers.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"7b44c8b6-a91c-4713-b3e5-bdb0d5bf5e42", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0513 18:40:24.067088  107570 client.go:354] parsed scheme: ""
I0513 18:40:24.067178  107570 client.go:354] scheme "" not registered, fallback to default scheme
I0513 18:40:24.067258  107570 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0513 18:40:24.067410  107570 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0513 18:40:24.067484  107570 watch_cache.go:405] Replace watchCache (rev: 24291) 
I0513 18:40:24.068409  107570 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0513 18:40:24.068499  107570 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0513 18:40:24.068580  107570 store.go:1320] Monitoring csidrivers.storage.k8s.io count at <storage-prefix>//csidrivers
I0513 18:40:24.068704  107570 reflector.go:160] Listing and watching *storage.CSIDriver from storage/cacher.go:/csidrivers
I0513 18:40:24.068865  107570 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"7b44c8b6-a91c-4713-b3e5-bdb0d5bf5e42", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0513 18:40:24.069484  107570 client.go:354] parsed scheme: ""
I0513 18:40:24.069531  107570 client.go:354] scheme "" not registered, fallback to default scheme
I0513 18:40:24.069672  107570 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0513 18:40:24.069497  107570 watch_cache.go:405] Replace watchCache (rev: 24291) 
I0513 18:40:24.070978  107570 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0513 18:40:24.072725  107570 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0513 18:40:24.072885  107570 store.go:1320] Monitoring storageclasses.storage.k8s.io count at <storage-prefix>//storageclasses
I0513 18:40:24.072887  107570 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0513 18:40:24.073068  107570 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"7b44c8b6-a91c-4713-b3e5-bdb0d5bf5e42", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0513 18:40:24.073139  107570 client.go:354] parsed scheme: ""
I0513 18:40:24.073152  107570 client.go:354] scheme "" not registered, fallback to default scheme
I0513 18:40:24.073139  107570 reflector.go:160] Listing and watching *storage.StorageClass from storage/cacher.go:/storageclasses
I0513 18:40:24.073186  107570 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0513 18:40:24.073461  107570 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0513 18:40:24.074209  107570 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0513 18:40:24.074395  107570 store.go:1320] Monitoring volumeattachments.storage.k8s.io count at <storage-prefix>//volumeattachments
I0513 18:40:24.074496  107570 master.go:425] Enabling API group "storage.k8s.io".
I0513 18:40:24.074522  107570 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0513 18:40:24.074582  107570 reflector.go:160] Listing and watching *storage.VolumeAttachment from storage/cacher.go:/volumeattachments
I0513 18:40:24.074828  107570 watch_cache.go:405] Replace watchCache (rev: 24291) 
I0513 18:40:24.075203  107570 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"7b44c8b6-a91c-4713-b3e5-bdb0d5bf5e42", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0513 18:40:24.075366  107570 client.go:354] parsed scheme: ""
I0513 18:40:24.075400  107570 client.go:354] scheme "" not registered, fallback to default scheme
I0513 18:40:24.075516  107570 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0513 18:40:24.075602  107570 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0513 18:40:24.075723  107570 watch_cache.go:405] Replace watchCache (rev: 24291) 
I0513 18:40:24.076295  107570 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0513 18:40:24.076570  107570 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0513 18:40:24.076811  107570 store.go:1320] Monitoring deployments.apps count at <storage-prefix>//deployments
I0513 18:40:24.077026  107570 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"7b44c8b6-a91c-4713-b3e5-bdb0d5bf5e42", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0513 18:40:24.077074  107570 reflector.go:160] Listing and watching *apps.Deployment from storage/cacher.go:/deployments
I0513 18:40:24.077156  107570 client.go:354] parsed scheme: ""
I0513 18:40:24.077178  107570 client.go:354] scheme "" not registered, fallback to default scheme
I0513 18:40:24.077230  107570 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0513 18:40:24.077290  107570 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0513 18:40:24.077972  107570 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0513 18:40:24.078026  107570 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0513 18:40:24.078525  107570 store.go:1320] Monitoring statefulsets.apps count at <storage-prefix>//statefulsets
I0513 18:40:24.078840  107570 storage_factory.go:285] storing controllerrevisions.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"7b44c8b6-a91c-4713-b3e5-bdb0d5bf5e42", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0513 18:40:24.078928  107570 client.go:354] parsed scheme: ""
I0513 18:40:24.078966  107570 client.go:354] scheme "" not registered, fallback to default scheme
I0513 18:40:24.079009  107570 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0513 18:40:24.079082  107570 reflector.go:160] Listing and watching *apps.StatefulSet from storage/cacher.go:/statefulsets
I0513 18:40:24.079308  107570 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0513 18:40:24.080537  107570 watch_cache.go:405] Replace watchCache (rev: 24291) 
I0513 18:40:24.081095  107570 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0513 18:40:24.081141  107570 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0513 18:40:24.081455  107570 store.go:1320] Monitoring controllerrevisions.apps count at <storage-prefix>//controllerrevisions
I0513 18:40:24.081634  107570 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"7b44c8b6-a91c-4713-b3e5-bdb0d5bf5e42", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0513 18:40:24.081740  107570 client.go:354] parsed scheme: ""
I0513 18:40:24.081765  107570 client.go:354] scheme "" not registered, fallback to default scheme
I0513 18:40:24.081810  107570 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0513 18:40:24.081900  107570 reflector.go:160] Listing and watching *apps.ControllerRevision from storage/cacher.go:/controllerrevisions
I0513 18:40:24.082506  107570 watch_cache.go:405] Replace watchCache (rev: 24291) 
I0513 18:40:24.082695  107570 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0513 18:40:24.084507  107570 watch_cache.go:405] Replace watchCache (rev: 24291) 
I0513 18:40:24.085070  107570 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0513 18:40:24.085191  107570 store.go:1320] Monitoring deployments.apps count at <storage-prefix>//deployments
I0513 18:40:24.085529  107570 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0513 18:40:24.085593  107570 reflector.go:160] Listing and watching *apps.Deployment from storage/cacher.go:/deployments
I0513 18:40:24.085329  107570 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"7b44c8b6-a91c-4713-b3e5-bdb0d5bf5e42", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0513 18:40:24.085848  107570 client.go:354] parsed scheme: ""
I0513 18:40:24.085859  107570 client.go:354] scheme "" not registered, fallback to default scheme
I0513 18:40:24.085892  107570 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0513 18:40:24.086001  107570 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0513 18:40:24.086254  107570 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0513 18:40:24.086366  107570 store.go:1320] Monitoring statefulsets.apps count at <storage-prefix>//statefulsets
I0513 18:40:24.086416  107570 watch_cache.go:405] Replace watchCache (rev: 24291) 
I0513 18:40:24.086513  107570 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"7b44c8b6-a91c-4713-b3e5-bdb0d5bf5e42", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0513 18:40:24.086573  107570 client.go:354] parsed scheme: ""
I0513 18:40:24.086583  107570 client.go:354] scheme "" not registered, fallback to default scheme
I0513 18:40:24.086624  107570 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0513 18:40:24.086633  107570 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0513 18:40:24.086727  107570 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0513 18:40:24.086765  107570 reflector.go:160] Listing and watching *apps.StatefulSet from storage/cacher.go:/statefulsets
I0513 18:40:24.087047  107570 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0513 18:40:24.087211  107570 store.go:1320] Monitoring daemonsets.apps count at <storage-prefix>//daemonsets
I0513 18:40:24.087359  107570 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"7b44c8b6-a91c-4713-b3e5-bdb0d5bf5e42", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0513 18:40:24.087464  107570 client.go:354] parsed scheme: ""
I0513 18:40:24.087506  107570 client.go:354] scheme "" not registered, fallback to default scheme
I0513 18:40:24.087550  107570 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0513 18:40:24.087629  107570 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0513 18:40:24.087702  107570 reflector.go:160] Listing and watching *apps.DaemonSet from storage/cacher.go:/daemonsets
I0513 18:40:24.087926  107570 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0513 18:40:24.090456  107570 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0513 18:40:24.090737  107570 store.go:1320] Monitoring replicasets.apps count at <storage-prefix>//replicasets
I0513 18:40:24.091009  107570 watch_cache.go:405] Replace watchCache (rev: 24291) 
I0513 18:40:24.091025  107570 storage_factory.go:285] storing controllerrevisions.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"7b44c8b6-a91c-4713-b3e5-bdb0d5bf5e42", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0513 18:40:24.091155  107570 client.go:354] parsed scheme: ""
I0513 18:40:24.091242  107570 client.go:354] scheme "" not registered, fallback to default scheme
I0513 18:40:24.091320  107570 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0513 18:40:24.091362  107570 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0513 18:40:24.091414  107570 reflector.go:160] Listing and watching *apps.ReplicaSet from storage/cacher.go:/replicasets
I0513 18:40:24.091422  107570 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0513 18:40:24.091908  107570 watch_cache.go:405] Replace watchCache (rev: 24291) 
I0513 18:40:24.091969  107570 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0513 18:40:24.092134  107570 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0513 18:40:24.092237  107570 store.go:1320] Monitoring controllerrevisions.apps count at <storage-prefix>//controllerrevisions
I0513 18:40:24.093121  107570 reflector.go:160] Listing and watching *apps.ControllerRevision from storage/cacher.go:/controllerrevisions
I0513 18:40:24.093783  107570 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"7b44c8b6-a91c-4713-b3e5-bdb0d5bf5e42", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0513 18:40:24.094061  107570 client.go:354] parsed scheme: ""
I0513 18:40:24.094159  107570 watch_cache.go:405] Replace watchCache (rev: 24291) 
I0513 18:40:24.094719  107570 watch_cache.go:405] Replace watchCache (rev: 24291) 
I0513 18:40:24.095914  107570 client.go:354] scheme "" not registered, fallback to default scheme
I0513 18:40:24.096024  107570 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0513 18:40:24.096123  107570 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0513 18:40:24.096670  107570 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0513 18:40:24.096855  107570 store.go:1320] Monitoring deployments.apps count at <storage-prefix>//deployments
I0513 18:40:24.096959  107570 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0513 18:40:24.097008  107570 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"7b44c8b6-a91c-4713-b3e5-bdb0d5bf5e42", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0513 18:40:24.097070  107570 client.go:354] parsed scheme: ""
I0513 18:40:24.097080  107570 client.go:354] scheme "" not registered, fallback to default scheme
I0513 18:40:24.097114  107570 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0513 18:40:24.097183  107570 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0513 18:40:24.097325  107570 reflector.go:160] Listing and watching *apps.Deployment from storage/cacher.go:/deployments
I0513 18:40:24.104510  107570 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0513 18:40:24.104723  107570 store.go:1320] Monitoring statefulsets.apps count at <storage-prefix>//statefulsets
I0513 18:40:24.105003  107570 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0513 18:40:24.105077  107570 reflector.go:160] Listing and watching *apps.StatefulSet from storage/cacher.go:/statefulsets
I0513 18:40:24.105325  107570 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"7b44c8b6-a91c-4713-b3e5-bdb0d5bf5e42", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0513 18:40:24.105417  107570 client.go:354] parsed scheme: ""
I0513 18:40:24.105679  107570 client.go:354] scheme "" not registered, fallback to default scheme
I0513 18:40:24.105726  107570 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0513 18:40:24.105845  107570 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0513 18:40:24.106599  107570 watch_cache.go:405] Replace watchCache (rev: 24291) 
I0513 18:40:24.106812  107570 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0513 18:40:24.106905  107570 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0513 18:40:24.106982  107570 store.go:1320] Monitoring daemonsets.apps count at <storage-prefix>//daemonsets
I0513 18:40:24.107056  107570 watch_cache.go:405] Replace watchCache (rev: 24291) 
I0513 18:40:24.107069  107570 reflector.go:160] Listing and watching *apps.DaemonSet from storage/cacher.go:/daemonsets
I0513 18:40:24.107112  107570 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"7b44c8b6-a91c-4713-b3e5-bdb0d5bf5e42", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0513 18:40:24.107178  107570 client.go:354] parsed scheme: ""
I0513 18:40:24.107189  107570 client.go:354] scheme "" not registered, fallback to default scheme
I0513 18:40:24.107223  107570 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0513 18:40:24.107265  107570 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0513 18:40:24.107620  107570 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0513 18:40:24.108804  107570 watch_cache.go:405] Replace watchCache (rev: 24291) 
I0513 18:40:24.109100  107570 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0513 18:40:24.110274  107570 store.go:1320] Monitoring replicasets.apps count at <storage-prefix>//replicasets
I0513 18:40:24.110357  107570 reflector.go:160] Listing and watching *apps.ReplicaSet from storage/cacher.go:/replicasets
I0513 18:40:24.111136  107570 storage_factory.go:285] storing controllerrevisions.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"7b44c8b6-a91c-4713-b3e5-bdb0d5bf5e42", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0513 18:40:24.111262  107570 client.go:354] parsed scheme: ""
I0513 18:40:24.111469  107570 client.go:354] scheme "" not registered, fallback to default scheme
I0513 18:40:24.111558  107570 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0513 18:40:24.111740  107570 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0513 18:40:24.112085  107570 watch_cache.go:405] Replace watchCache (rev: 24291) 
I0513 18:40:24.112945  107570 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0513 18:40:24.113019  107570 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0513 18:40:24.113733  107570 store.go:1320] Monitoring controllerrevisions.apps count at <storage-prefix>//controllerrevisions
I0513 18:40:24.113791  107570 master.go:425] Enabling API group "apps".
I0513 18:40:24.113864  107570 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"7b44c8b6-a91c-4713-b3e5-bdb0d5bf5e42", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0513 18:40:24.114075  107570 client.go:354] parsed scheme: ""
I0513 18:40:24.114209  107570 client.go:354] scheme "" not registered, fallback to default scheme
I0513 18:40:24.114287  107570 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0513 18:40:24.113954  107570 reflector.go:160] Listing and watching *apps.ControllerRevision from storage/cacher.go:/controllerrevisions
I0513 18:40:24.114712  107570 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0513 18:40:24.115700  107570 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0513 18:40:24.115797  107570 store.go:1320] Monitoring validatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//validatingwebhookconfigurations
I0513 18:40:24.115909  107570 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"7b44c8b6-a91c-4713-b3e5-bdb0d5bf5e42", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0513 18:40:24.116001  107570 client.go:354] parsed scheme: ""
I0513 18:40:24.116016  107570 client.go:354] scheme "" not registered, fallback to default scheme
I0513 18:40:24.116042  107570 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0513 18:40:24.116132  107570 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0513 18:40:24.116203  107570 reflector.go:160] Listing and watching *admissionregistration.ValidatingWebhookConfiguration from storage/cacher.go:/validatingwebhookconfigurations
I0513 18:40:24.116510  107570 watch_cache.go:405] Replace watchCache (rev: 24291) 
I0513 18:40:24.117224  107570 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0513 18:40:24.117397  107570 watch_cache.go:405] Replace watchCache (rev: 24291) 
I0513 18:40:24.117633  107570 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0513 18:40:24.117756  107570 store.go:1320] Monitoring mutatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//mutatingwebhookconfigurations
I0513 18:40:24.117784  107570 master.go:425] Enabling API group "admissionregistration.k8s.io".
I0513 18:40:24.117859  107570 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"7b44c8b6-a91c-4713-b3e5-bdb0d5bf5e42", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0513 18:40:24.118090  107570 client.go:354] parsed scheme: ""
I0513 18:40:24.118101  107570 client.go:354] scheme "" not registered, fallback to default scheme
I0513 18:40:24.118131  107570 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0513 18:40:24.118179  107570 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0513 18:40:24.118238  107570 reflector.go:160] Listing and watching *admissionregistration.MutatingWebhookConfiguration from storage/cacher.go:/mutatingwebhookconfigurations
I0513 18:40:24.118339  107570 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0513 18:40:24.118686  107570 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0513 18:40:24.118763  107570 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0513 18:40:24.118944  107570 store.go:1320] Monitoring events count at <storage-prefix>//events
I0513 18:40:24.118990  107570 master.go:425] Enabling API group "events.k8s.io".
I0513 18:40:24.119057  107570 reflector.go:160] Listing and watching *core.Event from storage/cacher.go:/events
I0513 18:40:24.119601  107570 watch_cache.go:405] Replace watchCache (rev: 24291) 
I0513 18:40:24.121542  107570 watch_cache.go:405] Replace watchCache (rev: 24291) 
W0513 18:40:24.128999  107570 genericapiserver.go:347] Skipping API batch/v2alpha1 because it has no resources.
W0513 18:40:24.139722  107570 genericapiserver.go:347] Skipping API node.k8s.io/v1alpha1 because it has no resources.
W0513 18:40:24.145208  107570 genericapiserver.go:347] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
W0513 18:40:24.146200  107570 genericapiserver.go:347] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
W0513 18:40:24.148976  107570 genericapiserver.go:347] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
I0513 18:40:24.171755  107570 healthz.go:170] healthz check etcd failed: etcd client connection not yet established
I0513 18:40:24.171889  107570 healthz.go:170] healthz check poststarthook/bootstrap-controller failed: not finished
I0513 18:40:24.171914  107570 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0513 18:40:24.171954  107570 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0513 18:40:24.171985  107570 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0513 18:40:24.172004  107570 healthz.go:184] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[-]poststarthook/bootstrap-controller failed: reason withheld
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0513 18:40:24.172176  107570 wrap.go:47] GET /healthz: (518.907µs) 500
goroutine 30057 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc012c8a150, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc012c8a150, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00bf31c00, 0x1f4)
net/http.Error(0x7fb1c84ad738, 0xc0108f5160, 0xc00b0644e0, 0x18a, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fb1c84ad738, 0xc0108f5160, 0xc012c3db00)
net/http.HandlerFunc.ServeHTTP(0xc00bf31380, 0x7fb1c84ad738, 0xc0108f5160, 0xc012c3db00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0128beb80, 0x7fb1c84ad738, 0xc0108f5160, 0xc012c3db00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00b29b2d0, 0x7fb1c84ad738, 0xc0108f5160, 0xc012c3db00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x4356850, 0xe, 0xc00d294ab0, 0xc00b29b2d0, 0x7fb1c84ad738, 0xc0108f5160, 0xc012c3db00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fb1c84ad738, 0xc0108f5160, 0xc012c3db00)
net/http.HandlerFunc.ServeHTTP(0xc00bcbcc80, 0x7fb1c84ad738, 0xc0108f5160, 0xc012c3db00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fb1c84ad738, 0xc0108f5160, 0xc012c3db00)
net/http.HandlerFunc.ServeHTTP(0xc00b85f680, 0x7fb1c84ad738, 0xc0108f5160, 0xc012c3db00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fb1c84ad738, 0xc0108f5160, 0xc012c3db00)
net/http.HandlerFunc.ServeHTTP(0xc00bcbccc0, 0x7fb1c84ad738, 0xc0108f5160, 0xc012c3db00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fb1c84ad738, 0xc0108f5160, 0xc012c3da00)
net/http.HandlerFunc.ServeHTTP(0xc00fc94780, 0x7fb1c84ad738, 0xc0108f5160, 0xc012c3da00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc0129df4a0, 0xc004ab2220, 0x7374040, 0xc0108f5160, 0xc012c3da00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[-]poststarthook/bootstrap-controller failed: reason withheld\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:53090]
I0513 18:40:24.173869  107570 wrap.go:47] GET /api/v1/namespaces/default/endpoints/kubernetes: (2.172478ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53092]
I0513 18:40:24.208745  107570 wrap.go:47] GET /api/v1/services: (1.445849ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53092]
I0513 18:40:24.213925  107570 wrap.go:47] GET /api/v1/services: (1.496372ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53092]
I0513 18:40:24.234061  107570 healthz.go:170] healthz check etcd failed: etcd client connection not yet established
I0513 18:40:24.235073  107570 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.090726ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53092]
I0513 18:40:24.238871  107570 wrap.go:47] POST /api/v1/namespaces: (3.475161ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53092]
I0513 18:40:24.240144  107570 wrap.go:47] GET /api/v1/namespaces/kube-public: (964.422µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53092]
I0513 18:40:24.242308  107570 wrap.go:47] POST /api/v1/namespaces: (1.379193ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53092]
I0513 18:40:24.243040  107570 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0513 18:40:24.243073  107570 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0513 18:40:24.243082  107570 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0513 18:40:24.243093  107570 healthz.go:184] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0513 18:40:24.243238  107570 wrap.go:47] GET /healthz: (9.299222ms) 500
goroutine 30084 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc010ccc460, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc010ccc460, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00b4efc00, 0x1f4)
net/http.Error(0x7fb1c84ad738, 0xc0096355f0, 0xc004100a80, 0x175, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fb1c84ad738, 0xc0096355f0, 0xc012d2a800)
net/http.HandlerFunc.ServeHTTP(0xc00bf31380, 0x7fb1c84ad738, 0xc0096355f0, 0xc012d2a800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0128beb80, 0x7fb1c84ad738, 0xc0096355f0, 0xc012d2a800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00b29b2d0, 0x7fb1c84ad738, 0xc0096355f0, 0xc012d2a800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x4356850, 0xe, 0xc00d294ab0, 0xc00b29b2d0, 0x7fb1c84ad738, 0xc0096355f0, 0xc012d2a800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fb1c84ad738, 0xc0096355f0, 0xc012d2a800)
net/http.HandlerFunc.ServeHTTP(0xc00bcbcc80, 0x7fb1c84ad738, 0xc0096355f0, 0xc012d2a800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fb1c84ad738, 0xc0096355f0, 0xc012d2a800)
net/http.HandlerFunc.ServeHTTP(0xc00b85f680, 0x7fb1c84ad738, 0xc0096355f0, 0xc012d2a800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fb1c84ad738, 0xc0096355f0, 0xc012d2a800)
net/http.HandlerFunc.ServeHTTP(0xc00bcbccc0, 0x7fb1c84ad738, 0xc0096355f0, 0xc012d2a800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fb1c84ad738, 0xc0096355f0, 0xc012d2a700)
net/http.HandlerFunc.ServeHTTP(0xc00fc94780, 0x7fb1c84ad738, 0xc0096355f0, 0xc012d2a700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00f7034a0, 0xc004ab2220, 0x7374040, 0xc0096355f0, 0xc012d2a700)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53090]
I0513 18:40:24.243613  107570 wrap.go:47] GET /api/v1/namespaces/kube-node-lease: (1.023789ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53092]
I0513 18:40:24.243954  107570 wrap.go:47] GET /api/v1/services: (8.728864ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53094]
I0513 18:40:24.245272  107570 wrap.go:47] POST /api/v1/namespaces: (1.317213ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53092]
I0513 18:40:24.251484  107570 wrap.go:47] GET /api/v1/services: (863.22µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53094]
I0513 18:40:24.273081  107570 healthz.go:170] healthz check etcd failed: etcd client connection not yet established
I0513 18:40:24.273121  107570 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0513 18:40:24.273133  107570 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0513 18:40:24.273142  107570 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0513 18:40:24.273153  107570 healthz.go:184] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0513 18:40:24.273310  107570 wrap.go:47] GET /healthz: (354.571µs) 500
goroutine 30081 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc012d129a0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc012d129a0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00bf1dea0, 0x1f4)
net/http.Error(0x7fb1c84ad738, 0xc0079be4f8, 0xc012d8c180, 0x175, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fb1c84ad738, 0xc0079be4f8, 0xc012d3f600)
net/http.HandlerFunc.ServeHTTP(0xc00bf31380, 0x7fb1c84ad738, 0xc0079be4f8, 0xc012d3f600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0128beb80, 0x7fb1c84ad738, 0xc0079be4f8, 0xc012d3f600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00b29b2d0, 0x7fb1c84ad738, 0xc0079be4f8, 0xc012d3f600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x4356850, 0xe, 0xc00d294ab0, 0xc00b29b2d0, 0x7fb1c84ad738, 0xc0079be4f8, 0xc012d3f600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fb1c84ad738, 0xc0079be4f8, 0xc012d3f600)
net/http.HandlerFunc.ServeHTTP(0xc00bcbcc80, 0x7fb1c84ad738, 0xc0079be4f8, 0xc012d3f600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fb1c84ad738, 0xc0079be4f8, 0xc012d3f600)
net/http.HandlerFunc.ServeHTTP(0xc00b85f680, 0x7fb1c84ad738, 0xc0079be4f8, 0xc012d3f600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fb1c84ad738, 0xc0079be4f8, 0xc012d3f600)
net/http.HandlerFunc.ServeHTTP(0xc00bcbccc0, 0x7fb1c84ad738, 0xc0079be4f8, 0xc012d3f600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fb1c84ad738, 0xc0079be4f8, 0xc012d3f500)
net/http.HandlerFunc.ServeHTTP(0xc00fc94780, 0x7fb1c84ad738, 0xc0079be4f8, 0xc012d3f500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc010cdd860, 0xc004ab2220, 0x7374040, 0xc0079be4f8, 0xc012d3f500)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:53094]
I0513 18:40:24.344103  107570 healthz.go:170] healthz check etcd failed: etcd client connection not yet established
I0513 18:40:24.344139  107570 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0513 18:40:24.344151  107570 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0513 18:40:24.344161  107570 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0513 18:40:24.344168  107570 healthz.go:184] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0513 18:40:24.344328  107570 wrap.go:47] GET /healthz: (358.808µs) 500
goroutine 30063 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc012c8a930, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc012c8a930, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00c12aea0, 0x1f4)
net/http.Error(0x7fb1c84ad738, 0xc0108f5220, 0xc012cd0480, 0x175, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fb1c84ad738, 0xc0108f5220, 0xc012d7a900)
net/http.HandlerFunc.ServeHTTP(0xc00bf31380, 0x7fb1c84ad738, 0xc0108f5220, 0xc012d7a900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0128beb80, 0x7fb1c84ad738, 0xc0108f5220, 0xc012d7a900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00b29b2d0, 0x7fb1c84ad738, 0xc0108f5220, 0xc012d7a900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x4356850, 0xe, 0xc00d294ab0, 0xc00b29b2d0, 0x7fb1c84ad738, 0xc0108f5220, 0xc012d7a900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fb1c84ad738, 0xc0108f5220, 0xc012d7a900)
net/http.HandlerFunc.ServeHTTP(0xc00bcbcc80, 0x7fb1c84ad738, 0xc0108f5220, 0xc012d7a900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fb1c84ad738, 0xc0108f5220, 0xc012d7a900)
net/http.HandlerFunc.ServeHTTP(0xc00b85f680, 0x7fb1c84ad738, 0xc0108f5220, 0xc012d7a900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fb1c84ad738, 0xc0108f5220, 0xc012d7a900)
net/http.HandlerFunc.ServeHTTP(0xc00bcbccc0, 0x7fb1c84ad738, 0xc0108f5220, 0xc012d7a900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fb1c84ad738, 0xc0108f5220, 0xc012d7a800)
net/http.HandlerFunc.ServeHTTP(0xc00fc94780, 0x7fb1c84ad738, 0xc0108f5220, 0xc012d7a800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc0129dfb00, 0xc004ab2220, 0x7374040, 0xc0108f5220, 0xc012d7a800)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53094]
I0513 18:40:24.373113  107570 healthz.go:170] healthz check etcd failed: etcd client connection not yet established
I0513 18:40:24.373149  107570 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0513 18:40:24.373161  107570 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0513 18:40:24.373171  107570 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0513 18:40:24.373195  107570 healthz.go:184] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0513 18:40:24.373344  107570 wrap.go:47] GET /healthz: (382.172µs) 500
goroutine 30093 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc010cccaf0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc010cccaf0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00b906cc0, 0x1f4)
net/http.Error(0x7fb1c84ad738, 0xc0096356a8, 0xc004101800, 0x175, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fb1c84ad738, 0xc0096356a8, 0xc012d2b900)
net/http.HandlerFunc.ServeHTTP(0xc00bf31380, 0x7fb1c84ad738, 0xc0096356a8, 0xc012d2b900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0128beb80, 0x7fb1c84ad738, 0xc0096356a8, 0xc012d2b900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00b29b2d0, 0x7fb1c84ad738, 0xc0096356a8, 0xc012d2b900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x4356850, 0xe, 0xc00d294ab0, 0xc00b29b2d0, 0x7fb1c84ad738, 0xc0096356a8, 0xc012d2b900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fb1c84ad738, 0xc0096356a8, 0xc012d2b900)
net/http.HandlerFunc.ServeHTTP(0xc00bcbcc80, 0x7fb1c84ad738, 0xc0096356a8, 0xc012d2b900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fb1c84ad738, 0xc0096356a8, 0xc012d2b900)
net/http.HandlerFunc.ServeHTTP(0xc00b85f680, 0x7fb1c84ad738, 0xc0096356a8, 0xc012d2b900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fb1c84ad738, 0xc0096356a8, 0xc012d2b900)
net/http.HandlerFunc.ServeHTTP(0xc00bcbccc0, 0x7fb1c84ad738, 0xc0096356a8, 0xc012d2b900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fb1c84ad738, 0xc0096356a8, 0xc012d2b800)
net/http.HandlerFunc.ServeHTTP(0xc00fc94780, 0x7fb1c84ad738, 0xc0096356a8, 0xc012d2b800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00f703da0, 0xc004ab2220, 0x7374040, 0xc0096356a8, 0xc012d2b800)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:53094]
I0513 18:40:24.444059  107570 healthz.go:170] healthz check etcd failed: etcd client connection not yet established
I0513 18:40:24.444099  107570 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0513 18:40:24.444110  107570 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0513 18:40:24.444118  107570 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0513 18:40:24.444126  107570 healthz.go:184] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0513 18:40:24.444240  107570 wrap.go:47] GET /healthz: (330.431µs) 500
goroutine 30024 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc010a59dc0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc010a59dc0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00b623900, 0x1f4)
net/http.Error(0x7fb1c84ad738, 0xc0047986c8, 0xc003298d80, 0x175, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fb1c84ad738, 0xc0047986c8, 0xc010ee8700)
net/http.HandlerFunc.ServeHTTP(0xc00bf31380, 0x7fb1c84ad738, 0xc0047986c8, 0xc010ee8700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0128beb80, 0x7fb1c84ad738, 0xc0047986c8, 0xc010ee8700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00b29b2d0, 0x7fb1c84ad738, 0xc0047986c8, 0xc010ee8700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x4356850, 0xe, 0xc00d294ab0, 0xc00b29b2d0, 0x7fb1c84ad738, 0xc0047986c8, 0xc010ee8700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fb1c84ad738, 0xc0047986c8, 0xc010ee8700)
net/http.HandlerFunc.ServeHTTP(0xc00bcbcc80, 0x7fb1c84ad738, 0xc0047986c8, 0xc010ee8700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fb1c84ad738, 0xc0047986c8, 0xc010ee8700)
net/http.HandlerFunc.ServeHTTP(0xc00b85f680, 0x7fb1c84ad738, 0xc0047986c8, 0xc010ee8700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fb1c84ad738, 0xc0047986c8, 0xc010ee8700)
net/http.HandlerFunc.ServeHTTP(0xc00bcbccc0, 0x7fb1c84ad738, 0xc0047986c8, 0xc010ee8700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fb1c84ad738, 0xc0047986c8, 0xc010ee8600)
net/http.HandlerFunc.ServeHTTP(0xc00fc94780, 0x7fb1c84ad738, 0xc0047986c8, 0xc010ee8600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc010eaa3c0, 0xc004ab2220, 0x7374040, 0xc0047986c8, 0xc010ee8600)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53094]
I0513 18:40:24.473087  107570 healthz.go:170] healthz check etcd failed: etcd client connection not yet established
I0513 18:40:24.473133  107570 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0513 18:40:24.473152  107570 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0513 18:40:24.473161  107570 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0513 18:40:24.473169  107570 healthz.go:184] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0513 18:40:24.473682  107570 wrap.go:47] GET /healthz: (363.714µs) 500
goroutine 30065 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc012c8aa80, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc012c8aa80, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00c12b3c0, 0x1f4)
net/http.Error(0x7fb1c84ad738, 0xc0108f5250, 0xc012cd0c00, 0x175, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fb1c84ad738, 0xc0108f5250, 0xc012d7af00)
net/http.HandlerFunc.ServeHTTP(0xc00bf31380, 0x7fb1c84ad738, 0xc0108f5250, 0xc012d7af00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0128beb80, 0x7fb1c84ad738, 0xc0108f5250, 0xc012d7af00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00b29b2d0, 0x7fb1c84ad738, 0xc0108f5250, 0xc012d7af00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x4356850, 0xe, 0xc00d294ab0, 0xc00b29b2d0, 0x7fb1c84ad738, 0xc0108f5250, 0xc012d7af00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fb1c84ad738, 0xc0108f5250, 0xc012d7af00)
net/http.HandlerFunc.ServeHTTP(0xc00bcbcc80, 0x7fb1c84ad738, 0xc0108f5250, 0xc012d7af00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fb1c84ad738, 0xc0108f5250, 0xc012d7af00)
net/http.HandlerFunc.ServeHTTP(0xc00b85f680, 0x7fb1c84ad738, 0xc0108f5250, 0xc012d7af00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fb1c84ad738, 0xc0108f5250, 0xc012d7af00)
net/http.HandlerFunc.ServeHTTP(0xc00bcbccc0, 0x7fb1c84ad738, 0xc0108f5250, 0xc012d7af00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fb1c84ad738, 0xc0108f5250, 0xc012d7ae00)
net/http.HandlerFunc.ServeHTTP(0xc00fc94780, 0x7fb1c84ad738, 0xc0108f5250, 0xc012d7ae00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc0129dfce0, 0xc004ab2220, 0x7374040, 0xc0108f5250, 0xc012d7ae00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:53094]
I0513 18:40:24.544079  107570 healthz.go:170] healthz check etcd failed: etcd client connection not yet established
I0513 18:40:24.544116  107570 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0513 18:40:24.544127  107570 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0513 18:40:24.544136  107570 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0513 18:40:24.544144  107570 healthz.go:184] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0513 18:40:24.544301  107570 wrap.go:47] GET /healthz: (378.003µs) 500
goroutine 30110 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc010d55730, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc010d55730, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00bf5ac80, 0x1f4)
net/http.Error(0x7fb1c84ad738, 0xc00931c620, 0xc006aa1680, 0x175, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fb1c84ad738, 0xc00931c620, 0xc012df4000)
net/http.HandlerFunc.ServeHTTP(0xc00bf31380, 0x7fb1c84ad738, 0xc00931c620, 0xc012df4000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0128beb80, 0x7fb1c84ad738, 0xc00931c620, 0xc012df4000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00b29b2d0, 0x7fb1c84ad738, 0xc00931c620, 0xc012df4000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x4356850, 0xe, 0xc00d294ab0, 0xc00b29b2d0, 0x7fb1c84ad738, 0xc00931c620, 0xc012df4000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fb1c84ad738, 0xc00931c620, 0xc012df4000)
net/http.HandlerFunc.ServeHTTP(0xc00bcbcc80, 0x7fb1c84ad738, 0xc00931c620, 0xc012df4000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fb1c84ad738, 0xc00931c620, 0xc012df4000)
net/http.HandlerFunc.ServeHTTP(0xc00b85f680, 0x7fb1c84ad738, 0xc00931c620, 0xc012df4000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fb1c84ad738, 0xc00931c620, 0xc012df4000)
net/http.HandlerFunc.ServeHTTP(0xc00bcbccc0, 0x7fb1c84ad738, 0xc00931c620, 0xc012df4000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fb1c84ad738, 0xc00931c620, 0xc012d15f00)
net/http.HandlerFunc.ServeHTTP(0xc00fc94780, 0x7fb1c84ad738, 0xc00931c620, 0xc012d15f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc012d42600, 0xc004ab2220, 0x7374040, 0xc00931c620, 0xc012d15f00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53094]
I0513 18:40:24.575531  107570 healthz.go:170] healthz check etcd failed: etcd client connection not yet established
I0513 18:40:24.575568  107570 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0513 18:40:24.575580  107570 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0513 18:40:24.575590  107570 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0513 18:40:24.575600  107570 healthz.go:184] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0513 18:40:24.575746  107570 wrap.go:47] GET /healthz: (389.379µs) 500
goroutine 30115 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc012c8ac40, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc012c8ac40, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00c12b880, 0x1f4)
net/http.Error(0x7fb1c84ad738, 0xc0108f52a0, 0xc012cd1200, 0x175, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fb1c84ad738, 0xc0108f52a0, 0xc012d7b700)
net/http.HandlerFunc.ServeHTTP(0xc00bf31380, 0x7fb1c84ad738, 0xc0108f52a0, 0xc012d7b700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0128beb80, 0x7fb1c84ad738, 0xc0108f52a0, 0xc012d7b700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00b29b2d0, 0x7fb1c84ad738, 0xc0108f52a0, 0xc012d7b700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x4356850, 0xe, 0xc00d294ab0, 0xc00b29b2d0, 0x7fb1c84ad738, 0xc0108f52a0, 0xc012d7b700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fb1c84ad738, 0xc0108f52a0, 0xc012d7b700)
net/http.HandlerFunc.ServeHTTP(0xc00bcbcc80, 0x7fb1c84ad738, 0xc0108f52a0, 0xc012d7b700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fb1c84ad738, 0xc0108f52a0, 0xc012d7b700)
net/http.HandlerFunc.ServeHTTP(0xc00b85f680, 0x7fb1c84ad738, 0xc0108f52a0, 0xc012d7b700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fb1c84ad738, 0xc0108f52a0, 0xc012d7b700)
net/http.HandlerFunc.ServeHTTP(0xc00bcbccc0, 0x7fb1c84ad738, 0xc0108f52a0, 0xc012d7b700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fb1c84ad738, 0xc0108f52a0, 0xc012d7b600)
net/http.HandlerFunc.ServeHTTP(0xc00fc94780, 0x7fb1c84ad738, 0xc0108f52a0, 0xc012d7b600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc012e08060, 0xc004ab2220, 0x7374040, 0xc0108f52a0, 0xc012d7b600)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:53094]
I0513 18:40:24.644121  107570 healthz.go:170] healthz check etcd failed: etcd client connection not yet established
I0513 18:40:24.644156  107570 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0513 18:40:24.644195  107570 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0513 18:40:24.644205  107570 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0513 18:40:24.644213  107570 healthz.go:184] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0513 18:40:24.644376  107570 wrap.go:47] GET /healthz: (380.016µs) 500
goroutine 30112 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc010d55880, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc010d55880, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00bf5af00, 0x1f4)
net/http.Error(0x7fb1c84ad738, 0xc00931c648, 0xc006aa1e00, 0x175, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fb1c84ad738, 0xc00931c648, 0xc012df4600)
net/http.HandlerFunc.ServeHTTP(0xc00bf31380, 0x7fb1c84ad738, 0xc00931c648, 0xc012df4600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0128beb80, 0x7fb1c84ad738, 0xc00931c648, 0xc012df4600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00b29b2d0, 0x7fb1c84ad738, 0xc00931c648, 0xc012df4600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x4356850, 0xe, 0xc00d294ab0, 0xc00b29b2d0, 0x7fb1c84ad738, 0xc00931c648, 0xc012df4600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fb1c84ad738, 0xc00931c648, 0xc012df4600)
net/http.HandlerFunc.ServeHTTP(0xc00bcbcc80, 0x7fb1c84ad738, 0xc00931c648, 0xc012df4600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fb1c84ad738, 0xc00931c648, 0xc012df4600)
net/http.HandlerFunc.ServeHTTP(0xc00b85f680, 0x7fb1c84ad738, 0xc00931c648, 0xc012df4600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fb1c84ad738, 0xc00931c648, 0xc012df4600)
net/http.HandlerFunc.ServeHTTP(0xc00bcbccc0, 0x7fb1c84ad738, 0xc00931c648, 0xc012df4600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fb1c84ad738, 0xc00931c648, 0xc012df4500)
net/http.HandlerFunc.ServeHTTP(0xc00fc94780, 0x7fb1c84ad738, 0xc00931c648, 0xc012df4500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc012d427e0, 0xc004ab2220, 0x7374040, 0xc00931c648, 0xc012df4500)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53094]
I0513 18:40:24.673047  107570 healthz.go:170] healthz check etcd failed: etcd client connection not yet established
I0513 18:40:24.673081  107570 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0513 18:40:24.673092  107570 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0513 18:40:24.673100  107570 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0513 18:40:24.673106  107570 healthz.go:184] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0513 18:40:24.673292  107570 wrap.go:47] GET /healthz: (359.965µs) 500
goroutine 30117 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc012c8ad90, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc012c8ad90, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00c12bac0, 0x1f4)
net/http.Error(0x7fb1c84ad738, 0xc0108f52a8, 0xc012cd1980, 0x175, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fb1c84ad738, 0xc0108f52a8, 0xc012d7bb00)
net/http.HandlerFunc.ServeHTTP(0xc00bf31380, 0x7fb1c84ad738, 0xc0108f52a8, 0xc012d7bb00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0128beb80, 0x7fb1c84ad738, 0xc0108f52a8, 0xc012d7bb00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00b29b2d0, 0x7fb1c84ad738, 0xc0108f52a8, 0xc012d7bb00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x4356850, 0xe, 0xc00d294ab0, 0xc00b29b2d0, 0x7fb1c84ad738, 0xc0108f52a8, 0xc012d7bb00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fb1c84ad738, 0xc0108f52a8, 0xc012d7bb00)
net/http.HandlerFunc.ServeHTTP(0xc00bcbcc80, 0x7fb1c84ad738, 0xc0108f52a8, 0xc012d7bb00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fb1c84ad738, 0xc0108f52a8, 0xc012d7bb00)
net/http.HandlerFunc.ServeHTTP(0xc00b85f680, 0x7fb1c84ad738, 0xc0108f52a8, 0xc012d7bb00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fb1c84ad738, 0xc0108f52a8, 0xc012d7bb00)
net/http.HandlerFunc.ServeHTTP(0xc00bcbccc0, 0x7fb1c84ad738, 0xc0108f52a8, 0xc012d7bb00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fb1c84ad738, 0xc0108f52a8, 0xc012d7ba00)
net/http.HandlerFunc.ServeHTTP(0xc00fc94780, 0x7fb1c84ad738, 0xc0108f52a8, 0xc012d7ba00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc012e08180, 0xc004ab2220, 0x7374040, 0xc0108f52a8, 0xc012d7ba00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:53094]
I0513 18:40:24.744121  107570 healthz.go:170] healthz check etcd failed: etcd client connection not yet established
I0513 18:40:24.744151  107570 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0513 18:40:24.744163  107570 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0513 18:40:24.744172  107570 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0513 18:40:24.744179  107570 healthz.go:184] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0513 18:40:24.744332  107570 wrap.go:47] GET /healthz: (358.948µs) 500
goroutine 30130 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc010d55a40, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc010d55a40, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00bf5b280, 0x1f4)
net/http.Error(0x7fb1c84ad738, 0xc00931c658, 0xc012e28600, 0x175, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fb1c84ad738, 0xc00931c658, 0xc012df4b00)
net/http.HandlerFunc.ServeHTTP(0xc00bf31380, 0x7fb1c84ad738, 0xc00931c658, 0xc012df4b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0128beb80, 0x7fb1c84ad738, 0xc00931c658, 0xc012df4b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00b29b2d0, 0x7fb1c84ad738, 0xc00931c658, 0xc012df4b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x4356850, 0xe, 0xc00d294ab0, 0xc00b29b2d0, 0x7fb1c84ad738, 0xc00931c658, 0xc012df4b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fb1c84ad738, 0xc00931c658, 0xc012df4b00)
net/http.HandlerFunc.ServeHTTP(0xc00bcbcc80, 0x7fb1c84ad738, 0xc00931c658, 0xc012df4b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fb1c84ad738, 0xc00931c658, 0xc012df4b00)
net/http.HandlerFunc.ServeHTTP(0xc00b85f680, 0x7fb1c84ad738, 0xc00931c658, 0xc012df4b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fb1c84ad738, 0xc00931c658, 0xc012df4b00)
net/http.HandlerFunc.ServeHTTP(0xc00bcbccc0, 0x7fb1c84ad738, 0xc00931c658, 0xc012df4b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fb1c84ad738, 0xc00931c658, 0xc012df4a00)
net/http.HandlerFunc.ServeHTTP(0xc00fc94780, 0x7fb1c84ad738, 0xc00931c658, 0xc012df4a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc012d42a80, 0xc004ab2220, 0x7374040, 0xc00931c658, 0xc012df4a00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53094]
I0513 18:40:24.773060  107570 healthz.go:170] healthz check etcd failed: etcd client connection not yet established
I0513 18:40:24.773094  107570 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0513 18:40:24.773104  107570 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0513 18:40:24.773112  107570 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0513 18:40:24.773119  107570 healthz.go:184] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0513 18:40:24.773252  107570 wrap.go:47] GET /healthz: (314.254µs) 500
goroutine 30132 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc010d55b90, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc010d55b90, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00bf5b4c0, 0x1f4)
net/http.Error(0x7fb1c84ad738, 0xc00931c660, 0xc012e28d80, 0x175, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fb1c84ad738, 0xc00931c660, 0xc012df4f00)
net/http.HandlerFunc.ServeHTTP(0xc00bf31380, 0x7fb1c84ad738, 0xc00931c660, 0xc012df4f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0128beb80, 0x7fb1c84ad738, 0xc00931c660, 0xc012df4f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00b29b2d0, 0x7fb1c84ad738, 0xc00931c660, 0xc012df4f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x4356850, 0xe, 0xc00d294ab0, 0xc00b29b2d0, 0x7fb1c84ad738, 0xc00931c660, 0xc012df4f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fb1c84ad738, 0xc00931c660, 0xc012df4f00)
net/http.HandlerFunc.ServeHTTP(0xc00bcbcc80, 0x7fb1c84ad738, 0xc00931c660, 0xc012df4f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fb1c84ad738, 0xc00931c660, 0xc012df4f00)
net/http.HandlerFunc.ServeHTTP(0xc00b85f680, 0x7fb1c84ad738, 0xc00931c660, 0xc012df4f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fb1c84ad738, 0xc00931c660, 0xc012df4f00)
net/http.HandlerFunc.ServeHTTP(0xc00bcbccc0, 0x7fb1c84ad738, 0xc00931c660, 0xc012df4f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fb1c84ad738, 0xc00931c660, 0xc012df4e00)
net/http.HandlerFunc.ServeHTTP(0xc00fc94780, 0x7fb1c84ad738, 0xc00931c660, 0xc012df4e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc012d42ba0, 0xc004ab2220, 0x7374040, 0xc00931c660, 0xc012df4e00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:53094]
I0513 18:40:24.844155  107570 healthz.go:170] healthz check etcd failed: etcd client connection not yet established
I0513 18:40:24.844185  107570 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0513 18:40:24.844195  107570 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0513 18:40:24.844204  107570 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0513 18:40:24.844210  107570 healthz.go:184] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0513 18:40:24.844345  107570 wrap.go:47] GET /healthz: (394.383µs) 500
goroutine 30134 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc010d55ce0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc010d55ce0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00bf5b580, 0x1f4)
net/http.Error(0x7fb1c84ad738, 0xc00931c668, 0xc012e29380, 0x175, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fb1c84ad738, 0xc00931c668, 0xc012df5300)
net/http.HandlerFunc.ServeHTTP(0xc00bf31380, 0x7fb1c84ad738, 0xc00931c668, 0xc012df5300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0128beb80, 0x7fb1c84ad738, 0xc00931c668, 0xc012df5300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00b29b2d0, 0x7fb1c84ad738, 0xc00931c668, 0xc012df5300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x4356850, 0xe, 0xc00d294ab0, 0xc00b29b2d0, 0x7fb1c84ad738, 0xc00931c668, 0xc012df5300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fb1c84ad738, 0xc00931c668, 0xc012df5300)
net/http.HandlerFunc.ServeHTTP(0xc00bcbcc80, 0x7fb1c84ad738, 0xc00931c668, 0xc012df5300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fb1c84ad738, 0xc00931c668, 0xc012df5300)
net/http.HandlerFunc.ServeHTTP(0xc00b85f680, 0x7fb1c84ad738, 0xc00931c668, 0xc012df5300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fb1c84ad738, 0xc00931c668, 0xc012df5300)
net/http.HandlerFunc.ServeHTTP(0xc00bcbccc0, 0x7fb1c84ad738, 0xc00931c668, 0xc012df5300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fb1c84ad738, 0xc00931c668, 0xc012df5200)
net/http.HandlerFunc.ServeHTTP(0xc00fc94780, 0x7fb1c84ad738, 0xc00931c668, 0xc012df5200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc012d42c60, 0xc004ab2220, 0x7374040, 0xc00931c668, 0xc012df5200)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53094]
I0513 18:40:24.873053  107570 healthz.go:170] healthz check etcd failed: etcd client connection not yet established
I0513 18:40:24.873081  107570 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0513 18:40:24.873092  107570 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0513 18:40:24.873100  107570 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0513 18:40:24.873108  107570 healthz.go:184] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0513 18:40:24.873239  107570 wrap.go:47] GET /healthz: (305.756µs) 500
goroutine 30026 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc010a59f10, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc010a59f10, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00c0fc1c0, 0x1f4)
net/http.Error(0x7fb1c84ad738, 0xc004798788, 0xc003299980, 0x175, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fb1c84ad738, 0xc004798788, 0xc010ee9800)
net/http.HandlerFunc.ServeHTTP(0xc00bf31380, 0x7fb1c84ad738, 0xc004798788, 0xc010ee9800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0128beb80, 0x7fb1c84ad738, 0xc004798788, 0xc010ee9800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00b29b2d0, 0x7fb1c84ad738, 0xc004798788, 0xc010ee9800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x4356850, 0xe, 0xc00d294ab0, 0xc00b29b2d0, 0x7fb1c84ad738, 0xc004798788, 0xc010ee9800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fb1c84ad738, 0xc004798788, 0xc010ee9800)
net/http.HandlerFunc.ServeHTTP(0xc00bcbcc80, 0x7fb1c84ad738, 0xc004798788, 0xc010ee9800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fb1c84ad738, 0xc004798788, 0xc010ee9800)
net/http.HandlerFunc.ServeHTTP(0xc00b85f680, 0x7fb1c84ad738, 0xc004798788, 0xc010ee9800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fb1c84ad738, 0xc004798788, 0xc010ee9800)
net/http.HandlerFunc.ServeHTTP(0xc00bcbccc0, 0x7fb1c84ad738, 0xc004798788, 0xc010ee9800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fb1c84ad738, 0xc004798788, 0xc010ee9700)
net/http.HandlerFunc.ServeHTTP(0xc00fc94780, 0x7fb1c84ad738, 0xc004798788, 0xc010ee9700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc010eaa900, 0xc004ab2220, 0x7374040, 0xc004798788, 0xc010ee9700)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:53094]
I0513 18:40:24.932051  107570 client.go:354] parsed scheme: ""
I0513 18:40:24.932088  107570 client.go:354] scheme "" not registered, fallback to default scheme
I0513 18:40:24.932139  107570 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0513 18:40:24.932222  107570 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0513 18:40:24.932968  107570 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0513 18:40:24.933049  107570 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
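The client.go and balancer lines above are the etcd clientv3 dialer finally pinning 127.0.0.1:2379; once this connection is established the etcd healthz check below flips from "[-]etcd failed" to "[+]etcd ok". A minimal sketch of opening such a client against the same endpoint, assuming the go.etcd.io/etcd clientv3 package (the timeout values and probe key are illustrative, not taken from this log):

package main

import (
	"context"
	"fmt"
	"time"

	clientv3 "go.etcd.io/etcd/clientv3"
)

func main() {
	// Dial the same local etcd endpoint the test apiserver uses.
	cli, err := clientv3.New(clientv3.Config{
		Endpoints:   []string{"http://127.0.0.1:2379"},
		DialTimeout: 5 * time.Second,
	})
	if err != nil {
		fmt.Println("etcd client connection not yet established:", err)
		return
	}
	defer cli.Close()

	// A simple Get doubles as a connectivity check, much like the
	// apiserver's etcd healthz probe.
	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
	defer cancel()
	_, err = cli.Get(ctx, "healthz-probe")
	fmt.Println("etcd reachable:", err == nil)
}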
I0513 18:40:24.945060  107570 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0513 18:40:24.945094  107570 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0513 18:40:24.945106  107570 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0513 18:40:24.945114  107570 healthz.go:184] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0513 18:40:24.945307  107570 wrap.go:47] GET /healthz: (1.424548ms) 500
goroutine 30029 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc012e9c070, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc012e9c070, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00c0fc260, 0x1f4)
net/http.Error(0x7fb1c84ad738, 0xc004798790, 0xc005ec0dc0, 0x160, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fb1c84ad738, 0xc004798790, 0xc010ee9c00)
net/http.HandlerFunc.ServeHTTP(0xc00bf31380, 0x7fb1c84ad738, 0xc004798790, 0xc010ee9c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0128beb80, 0x7fb1c84ad738, 0xc004798790, 0xc010ee9c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00b29b2d0, 0x7fb1c84ad738, 0xc004798790, 0xc010ee9c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x4356850, 0xe, 0xc00d294ab0, 0xc00b29b2d0, 0x7fb1c84ad738, 0xc004798790, 0xc010ee9c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fb1c84ad738, 0xc004798790, 0xc010ee9c00)
net/http.HandlerFunc.ServeHTTP(0xc00bcbcc80, 0x7fb1c84ad738, 0xc004798790, 0xc010ee9c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fb1c84ad738, 0xc004798790, 0xc010ee9c00)
net/http.HandlerFunc.ServeHTTP(0xc00b85f680, 0x7fb1c84ad738, 0xc004798790, 0xc010ee9c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fb1c84ad738, 0xc004798790, 0xc010ee9c00)
net/http.HandlerFunc.ServeHTTP(0xc00bcbccc0, 0x7fb1c84ad738, 0xc004798790, 0xc010ee9c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fb1c84ad738, 0xc004798790, 0xc010ee9b00)
net/http.HandlerFunc.ServeHTTP(0xc00fc94780, 0x7fb1c84ad738, 0xc004798790, 0xc010ee9b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc010eaaa80, 0xc004ab2220, 0x7374040, 0xc004798790, 0xc010ee9b00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53094]
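At this point etcd reports ok and only the rbac/bootstrap-roles, scheduling/bootstrap-system-priority-classes, and ca-registration post-start hooks remain, so the harness keeps polling until /healthz finally returns 200. A minimal sketch of such a wait loop follows; the base URL and timeout are assumptions, not values from this log, and the verbose query parameter is the one kube-apiserver's healthz handler uses to emit the per-check listing shown above:

package main

import (
	"fmt"
	"io/ioutil"
	"net/http"
	"time"
)

// waitForHealthz polls GET <base>/healthz until it returns 200 or the
// deadline passes, printing the verbose per-check body on each failure.
func waitForHealthz(base string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := http.Get(base + "/healthz?verbose=true")
		if err == nil {
			body, _ := ioutil.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("not ready (%d):\n%s\n", resp.StatusCode, body)
		}
		time.Sleep(100 * time.Millisecond)
	}
	return fmt.Errorf("apiserver did not become healthy within %v", timeout)
}

func main() {
	// Hypothetical local test apiserver address; not taken from the log.
	if err := waitForHealthz("http://127.0.0.1:8080", 30*time.Second); err != nil {
		fmt.Println(err)
	}
}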
I0513 18:40:24.974438  107570 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0513 18:40:24.974473  107570 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0513 18:40:24.974483  107570 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0513 18:40:24.974490  107570 healthz.go:184] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0513 18:40:24.974672  107570 wrap.go:47] GET /healthz: (1.660331ms) 500
goroutine 30095 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc010cccc40, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc010cccc40, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00b907260, 0x1f4)
net/http.Error(0x7fb1c84ad738, 0xc0096356b0, 0xc012d18580, 0x160, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fb1c84ad738, 0xc0096356b0, 0xc012d2bd00)
net/http.HandlerFunc.ServeHTTP(0xc00bf31380, 0x7fb1c84ad738, 0xc0096356b0, 0xc012d2bd00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0128beb80, 0x7fb1c84ad738, 0xc0096356b0, 0xc012d2bd00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00b29b2d0, 0x7fb1c84ad738, 0xc0096356b0, 0xc012d2bd00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x4356850, 0xe, 0xc00d294ab0, 0xc00b29b2d0, 0x7fb1c84ad738, 0xc0096356b0, 0xc012d2bd00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fb1c84ad738, 0xc0096356b0, 0xc012d2bd00)
net/http.HandlerFunc.ServeHTTP(0xc00bcbcc80, 0x7fb1c84ad738, 0xc0096356b0, 0xc012d2bd00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fb1c84ad738, 0xc0096356b0, 0xc012d2bd00)
net/http.HandlerFunc.ServeHTTP(0xc00b85f680, 0x7fb1c84ad738, 0xc0096356b0, 0xc012d2bd00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fb1c84ad738, 0xc0096356b0, 0xc012d2bd00)
net/http.HandlerFunc.ServeHTTP(0xc00bcbccc0, 0x7fb1c84ad738, 0xc0096356b0, 0xc012d2bd00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fb1c84ad738, 0xc0096356b0, 0xc012d2bc00)
net/http.HandlerFunc.ServeHTTP(0xc00fc94780, 0x7fb1c84ad738, 0xc0096356b0, 0xc012d2bc00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00f703ec0, 0xc004ab2220, 0x7374040, 0xc0096356b0, 0xc012d2bc00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:53094]
I0513 18:40:25.045067  107570 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0513 18:40:25.045103  107570 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0513 18:40:25.045113  107570 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0513 18:40:25.045121  107570 healthz.go:184] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0513 18:40:25.045283  107570 wrap.go:47] GET /healthz: (1.392952ms) 500
goroutine 30031 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc012e9c230, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc012e9c230, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00c0fc4c0, 0x1f4)
net/http.Error(0x7fb1c84ad738, 0xc0047987c0, 0xc010d45b80, 0x160, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fb1c84ad738, 0xc0047987c0, 0xc012efa300)
net/http.HandlerFunc.ServeHTTP(0xc00bf31380, 0x7fb1c84ad738, 0xc0047987c0, 0xc012efa300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0128beb80, 0x7fb1c84ad738, 0xc0047987c0, 0xc012efa300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00b29b2d0, 0x7fb1c84ad738, 0xc0047987c0, 0xc012efa300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x4356850, 0xe, 0xc00d294ab0, 0xc00b29b2d0, 0x7fb1c84ad738, 0xc0047987c0, 0xc012efa300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fb1c84ad738, 0xc0047987c0, 0xc012efa300)
net/http.HandlerFunc.ServeHTTP(0xc00bcbcc80, 0x7fb1c84ad738, 0xc0047987c0, 0xc012efa300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fb1c84ad738, 0xc0047987c0, 0xc012efa300)
net/http.HandlerFunc.ServeHTTP(0xc00b85f680, 0x7fb1c84ad738, 0xc0047987c0, 0xc012efa300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fb1c84ad738, 0xc0047987c0, 0xc012efa300)
net/http.HandlerFunc.ServeHTTP(0xc00bcbccc0, 0x7fb1c84ad738, 0xc0047987c0, 0xc012efa300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fb1c84ad738, 0xc0047987c0, 0xc012efa200)
net/http.HandlerFunc.ServeHTTP(0xc00fc94780, 0x7fb1c84ad738, 0xc0047987c0, 0xc012efa200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc010eaae40, 0xc004ab2220, 0x7374040, 0xc0047987c0, 0xc012efa200)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53094]
I0513 18:40:25.074442  107570 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0513 18:40:25.074482  107570 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0513 18:40:25.074494  107570 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0513 18:40:25.074503  107570 healthz.go:184] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0513 18:40:25.074724  107570 wrap.go:47] GET /healthz: (1.695693ms) 500
goroutine 30097 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc010ccce00, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc010ccce00, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00b907760, 0x1f4)
net/http.Error(0x7fb1c84ad738, 0xc0096356d0, 0xc012f10000, 0x160, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fb1c84ad738, 0xc0096356d0, 0xc012f0e200)
net/http.HandlerFunc.ServeHTTP(0xc00bf31380, 0x7fb1c84ad738, 0xc0096356d0, 0xc012f0e200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0128beb80, 0x7fb1c84ad738, 0xc0096356d0, 0xc012f0e200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00b29b2d0, 0x7fb1c84ad738, 0xc0096356d0, 0xc012f0e200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x4356850, 0xe, 0xc00d294ab0, 0xc00b29b2d0, 0x7fb1c84ad738, 0xc0096356d0, 0xc012f0e200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fb1c84ad738, 0xc0096356d0, 0xc012f0e200)
net/http.HandlerFunc.ServeHTTP(0xc00bcbcc80, 0x7fb1c84ad738, 0xc0096356d0, 0xc012f0e200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fb1c84ad738, 0xc0096356d0, 0xc012f0e200)
net/http.HandlerFunc.ServeHTTP(0xc00b85f680, 0x7fb1c84ad738, 0xc0096356d0, 0xc012f0e200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fb1c84ad738, 0xc0096356d0, 0xc012f0e200)
net/http.HandlerFunc.ServeHTTP(0xc00bcbccc0, 0x7fb1c84ad738, 0xc0096356d0, 0xc012f0e200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fb1c84ad738, 0xc0096356d0, 0xc012f0e100)
net/http.HandlerFunc.ServeHTTP(0xc00fc94780, 0x7fb1c84ad738, 0xc0096356d0, 0xc012f0e100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc012ee21e0, 0xc004ab2220, 0x7374040, 0xc0096356d0, 0xc012f0e100)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:53094]
I0513 18:40:25.146980  107570 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0513 18:40:25.147010  107570 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0513 18:40:25.147021  107570 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0513 18:40:25.147029  107570 healthz.go:184] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0513 18:40:25.147197  107570 wrap.go:47] GET /healthz: (1.477113ms) 500
goroutine 30147 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc010cccee0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc010cccee0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00b907c00, 0x1f4)
net/http.Error(0x7fb1c84ad738, 0xc0096356e0, 0xc000111b80, 0x160, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fb1c84ad738, 0xc0096356e0, 0xc012f0e600)
net/http.HandlerFunc.ServeHTTP(0xc00bf31380, 0x7fb1c84ad738, 0xc0096356e0, 0xc012f0e600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0128beb80, 0x7fb1c84ad738, 0xc0096356e0, 0xc012f0e600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00b29b2d0, 0x7fb1c84ad738, 0xc0096356e0, 0xc012f0e600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x4356850, 0xe, 0xc00d294ab0, 0xc00b29b2d0, 0x7fb1c84ad738, 0xc0096356e0, 0xc012f0e600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fb1c84ad738, 0xc0096356e0, 0xc012f0e600)
net/http.HandlerFunc.ServeHTTP(0xc00bcbcc80, 0x7fb1c84ad738, 0xc0096356e0, 0xc012f0e600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fb1c84ad738, 0xc0096356e0, 0xc012f0e600)
net/http.HandlerFunc.ServeHTTP(0xc00b85f680, 0x7fb1c84ad738, 0xc0096356e0, 0xc012f0e600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fb1c84ad738, 0xc0096356e0, 0xc012f0e600)
net/http.HandlerFunc.ServeHTTP(0xc00bcbccc0, 0x7fb1c84ad738, 0xc0096356e0, 0xc012f0e600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fb1c84ad738, 0xc0096356e0, 0xc012f0e500)
net/http.HandlerFunc.ServeHTTP(0xc00fc94780, 0x7fb1c84ad738, 0xc0096356e0, 0xc012f0e500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc012ee24e0, 0xc004ab2220, 0x7374040, 0xc0096356e0, 0xc012f0e500)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53094]
I0513 18:40:25.173437  107570 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.356348ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53256]
I0513 18:40:25.173439  107570 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.541601ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53090]
I0513 18:40:25.173610  107570 wrap.go:47] GET /apis/scheduling.k8s.io/v1beta1/priorityclasses/system-node-critical: (1.725763ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53094]
I0513 18:40:25.174760  107570 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0513 18:40:25.174780  107570 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0513 18:40:25.175012  107570 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0513 18:40:25.175022  107570 healthz.go:184] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0513 18:40:25.176719  107570 wrap.go:47] GET /healthz: (3.437971ms) 500
goroutine 30186 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc012f26230, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc012f26230, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00be3b320, 0x1f4)
net/http.Error(0x7fb1c84ad738, 0xc0074dbe80, 0xc002e66f20, 0x160, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fb1c84ad738, 0xc0074dbe80, 0xc012f2ac00)
net/http.HandlerFunc.ServeHTTP(0xc00bf31380, 0x7fb1c84ad738, 0xc0074dbe80, 0xc012f2ac00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0128beb80, 0x7fb1c84ad738, 0xc0074dbe80, 0xc012f2ac00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00b29b2d0, 0x7fb1c84ad738, 0xc0074dbe80, 0xc012f2ac00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x4356850, 0xe, 0xc00d294ab0, 0xc00b29b2d0, 0x7fb1c84ad738, 0xc0074dbe80, 0xc012f2ac00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fb1c84ad738, 0xc0074dbe80, 0xc012f2ac00)
net/http.HandlerFunc.ServeHTTP(0xc00bcbcc80, 0x7fb1c84ad738, 0xc0074dbe80, 0xc012f2ac00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fb1c84ad738, 0xc0074dbe80, 0xc012f2ac00)
net/http.HandlerFunc.ServeHTTP(0xc00b85f680, 0x7fb1c84ad738, 0xc0074dbe80, 0xc012f2ac00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fb1c84ad738, 0xc0074dbe80, 0xc012f2ac00)
net/http.HandlerFunc.ServeHTTP(0xc00bcbccc0, 0x7fb1c84ad738, 0xc0074dbe80, 0xc012f2ac00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fb1c84ad738, 0xc0074dbe80, 0xc012f2ab00)
net/http.HandlerFunc.ServeHTTP(0xc00fc94780, 0x7fb1c84ad738, 0xc0074dbe80, 0xc012f2ab00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc012ea8c00, 0xc004ab2220, 0x7374040, 0xc0074dbe80, 0xc012f2ab00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:53258]
I0513 18:40:25.177226  107570 wrap.go:47] GET /api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication: (3.229149ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53090]
I0513 18:40:25.178253  107570 wrap.go:47] POST /apis/scheduling.k8s.io/v1beta1/priorityclasses: (3.55549ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53256]
I0513 18:40:25.178453  107570 storage_scheduling.go:119] created PriorityClass system-node-critical with value 2000001000
I0513 18:40:25.179350  107570 wrap.go:47] GET /apis/scheduling.k8s.io/v1beta1/priorityclasses/system-cluster-critical: (744.095µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53256]
I0513 18:40:25.179958  107570 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.721514ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53260]
I0513 18:40:25.180171  107570 wrap.go:47] POST /api/v1/namespaces/kube-system/configmaps: (2.15334ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53090]
I0513 18:40:25.181922  107570 wrap.go:47] POST /apis/scheduling.k8s.io/v1beta1/priorityclasses: (2.241338ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53256]
I0513 18:40:25.182105  107570 storage_scheduling.go:119] created PriorityClass system-cluster-critical with value 2000000000
I0513 18:40:25.182119  107570 storage_scheduling.go:128] all system priority classes are created successfully or already exist.
I0513 18:40:25.182953  107570 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-admin: (1.177062ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53260]
I0513 18:40:25.184164  107570 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/admin: (824.409µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53256]
I0513 18:40:25.185611  107570 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-edit: (922.394µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53256]
I0513 18:40:25.187108  107570 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/edit: (1.019254ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53256]
I0513 18:40:25.188361  107570 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-view: (843.909µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53256]
I0513 18:40:25.189575  107570 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/view: (753.82µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53256]
I0513 18:40:25.190947  107570 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:discovery: (929.785µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53256]
I0513 18:40:25.192162  107570 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/cluster-admin: (884.551µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53256]
I0513 18:40:25.194626  107570 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.992285ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53256]
I0513 18:40:25.194841  107570 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/cluster-admin
I0513 18:40:25.195868  107570 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:discovery: (793.273µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53256]
I0513 18:40:25.197761  107570 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.516754ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53256]
I0513 18:40:25.197944  107570 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:discovery
I0513 18:40:25.198959  107570 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:basic-user: (823.391µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53256]
I0513 18:40:25.200629  107570 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.269825ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53256]
I0513 18:40:25.200865  107570 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:basic-user
I0513 18:40:25.202796  107570 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:public-info-viewer: (1.772283ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53256]
I0513 18:40:25.204967  107570 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.766932ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53256]
I0513 18:40:25.205253  107570 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:public-info-viewer
I0513 18:40:25.207834  107570 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/admin: (2.339571ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53256]
I0513 18:40:25.210006  107570 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.706279ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53256]
I0513 18:40:25.210261  107570 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/admin
I0513 18:40:25.211675  107570 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/edit: (1.134281ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53256]
I0513 18:40:25.213484  107570 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.309407ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53256]
I0513 18:40:25.213642  107570 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/edit
I0513 18:40:25.215075  107570 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/view: (1.161195ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53256]
I0513 18:40:25.217174  107570 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.688928ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53256]
I0513 18:40:25.217366  107570 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/view
I0513 18:40:25.218515  107570 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-admin: (928.4µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53256]
I0513 18:40:25.220400  107570 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.405838ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53256]
I0513 18:40:25.220683  107570 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-admin
I0513 18:40:25.221712  107570 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-edit: (825.153µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53256]
I0513 18:40:25.224456  107570 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.309215ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53256]
I0513 18:40:25.224747  107570 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-edit
I0513 18:40:25.225795  107570 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-view: (820.388µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53256]
I0513 18:40:25.227683  107570 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.436761ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53256]
I0513 18:40:25.228091  107570 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-view
I0513 18:40:25.229530  107570 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:heapster: (1.236888ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53256]
I0513 18:40:25.232603  107570 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.448396ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53256]
I0513 18:40:25.232943  107570 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:heapster
I0513 18:40:25.234098  107570 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node: (942.862µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53256]
I0513 18:40:25.236697  107570 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.988217ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53256]
I0513 18:40:25.237256  107570 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:node
I0513 18:40:25.238435  107570 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-problem-detector: (955.156µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53256]
I0513 18:40:25.241661  107570 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.720338ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53256]
I0513 18:40:25.241903  107570 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:node-problem-detector
I0513 18:40:25.242993  107570 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-proxier: (860.255µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53256]
I0513 18:40:25.244965  107570 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0513 18:40:25.245026  107570 healthz.go:184] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0513 18:40:25.245106  107570 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.656008ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53256]
I0513 18:40:25.245181  107570 wrap.go:47] GET /healthz: (1.298848ms) 500
goroutine 30280 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc01306d500, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc01306d500, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc010899f40, 0x1f4)
net/http.Error(0x7fb1c84ad738, 0xc0044369e0, 0xc001de1e00, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fb1c84ad738, 0xc0044369e0, 0xc0130e6f00)
net/http.HandlerFunc.ServeHTTP(0xc00bf31380, 0x7fb1c84ad738, 0xc0044369e0, 0xc0130e6f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0128beb80, 0x7fb1c84ad738, 0xc0044369e0, 0xc0130e6f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00b29b2d0, 0x7fb1c84ad738, 0xc0044369e0, 0xc0130e6f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x4356850, 0xe, 0xc00d294ab0, 0xc00b29b2d0, 0x7fb1c84ad738, 0xc0044369e0, 0xc0130e6f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fb1c84ad738, 0xc0044369e0, 0xc0130e6f00)
net/http.HandlerFunc.ServeHTTP(0xc00bcbcc80, 0x7fb1c84ad738, 0xc0044369e0, 0xc0130e6f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fb1c84ad738, 0xc0044369e0, 0xc0130e6f00)
net/http.HandlerFunc.ServeHTTP(0xc00b85f680, 0x7fb1c84ad738, 0xc0044369e0, 0xc0130e6f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fb1c84ad738, 0xc0044369e0, 0xc0130e6f00)
net/http.HandlerFunc.ServeHTTP(0xc00bcbccc0, 0x7fb1c84ad738, 0xc0044369e0, 0xc0130e6f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fb1c84ad738, 0xc0044369e0, 0xc0130e6e00)
net/http.HandlerFunc.ServeHTTP(0xc00fc94780, 0x7fb1c84ad738, 0xc0044369e0, 0xc0130e6e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc0130ceb40, 0xc004ab2220, 0x7374040, 0xc0044369e0, 0xc0130e6e00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53258]
I0513 18:40:25.245398  107570 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:node-proxier
I0513 18:40:25.247501  107570 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kubelet-api-admin: (1.9223ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53256]
I0513 18:40:25.249919  107570 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.98434ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53256]
I0513 18:40:25.250145  107570 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:kubelet-api-admin
I0513 18:40:25.251249  107570 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-bootstrapper: (849.532µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53256]
I0513 18:40:25.253114  107570 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.49438ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53256]
I0513 18:40:25.253323  107570 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:node-bootstrapper
I0513 18:40:25.254346  107570 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:auth-delegator: (814.347µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53256]
I0513 18:40:25.256170  107570 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.443653ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53256]
I0513 18:40:25.256424  107570 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:auth-delegator
I0513 18:40:25.257581  107570 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-aggregator: (883.771µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53256]
I0513 18:40:25.259519  107570 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.397738ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53256]
I0513 18:40:25.259723  107570 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:kube-aggregator
I0513 18:40:25.260732  107570 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-controller-manager: (779.375µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53256]
I0513 18:40:25.262415  107570 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.266599ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53256]
I0513 18:40:25.262808  107570 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:kube-controller-manager
I0513 18:40:25.263847  107570 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-scheduler: (807.481µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53256]
I0513 18:40:25.265844  107570 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.565233ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53256]
I0513 18:40:25.266179  107570 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:kube-scheduler
I0513 18:40:25.267317  107570 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-dns: (952.849µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53256]
I0513 18:40:25.269085  107570 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.339911ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53256]
I0513 18:40:25.269362  107570 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:kube-dns
I0513 18:40:25.270267  107570 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:persistent-volume-provisioner: (710.454µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53256]
I0513 18:40:25.272220  107570 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.550307ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53256]
I0513 18:40:25.272551  107570 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:persistent-volume-provisioner
I0513 18:40:25.274167  107570 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0513 18:40:25.274232  107570 healthz.go:184] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0513 18:40:25.274284  107570 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:csi-external-attacher: (1.520434ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53256]
I0513 18:40:25.274434  107570 wrap.go:47] GET /healthz: (1.285008ms) 500
goroutine 30287 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0131723f0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0131723f0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0109e0d40, 0x1f4)
net/http.Error(0x7fb1c84ad738, 0xc004436d30, 0xc013144500, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fb1c84ad738, 0xc004436d30, 0xc013177200)
net/http.HandlerFunc.ServeHTTP(0xc00bf31380, 0x7fb1c84ad738, 0xc004436d30, 0xc013177200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0128beb80, 0x7fb1c84ad738, 0xc004436d30, 0xc013177200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00b29b2d0, 0x7fb1c84ad738, 0xc004436d30, 0xc013177200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x4356850, 0xe, 0xc00d294ab0, 0xc00b29b2d0, 0x7fb1c84ad738, 0xc004436d30, 0xc013177200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fb1c84ad738, 0xc004436d30, 0xc013177200)
net/http.HandlerFunc.ServeHTTP(0xc00bcbcc80, 0x7fb1c84ad738, 0xc004436d30, 0xc013177200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fb1c84ad738, 0xc004436d30, 0xc013177200)
net/http.HandlerFunc.ServeHTTP(0xc00b85f680, 0x7fb1c84ad738, 0xc004436d30, 0xc013177200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fb1c84ad738, 0xc004436d30, 0xc013177200)
net/http.HandlerFunc.ServeHTTP(0xc00bcbccc0, 0x7fb1c84ad738, 0xc004436d30, 0xc013177200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fb1c84ad738, 0xc004436d30, 0xc013177100)
net/http.HandlerFunc.ServeHTTP(0xc00fc94780, 0x7fb1c84ad738, 0xc004436d30, 0xc013177100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc0130cfbc0, 0xc004ab2220, 0x7374040, 0xc004436d30, 0xc013177100)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:53258]
I0513 18:40:25.276440  107570 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.770888ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53258]
I0513 18:40:25.276698  107570 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:csi-external-attacher
I0513 18:40:25.278492  107570 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:certificates.k8s.io:certificatesigningrequests:nodeclient: (930.157µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53258]
I0513 18:40:25.281273  107570 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.259624ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53258]
I0513 18:40:25.281563  107570 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:certificates.k8s.io:certificatesigningrequests:nodeclient
I0513 18:40:25.299867  107570 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:certificates.k8s.io:certificatesigningrequests:selfnodeclient: (18.043324ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53258]
I0513 18:40:25.303279  107570 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.148288ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53258]
I0513 18:40:25.303616  107570 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:certificates.k8s.io:certificatesigningrequests:selfnodeclient
I0513 18:40:25.304999  107570 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:volume-scheduler: (1.101529ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53258]
I0513 18:40:25.307150  107570 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.754247ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53258]
I0513 18:40:25.307342  107570 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:volume-scheduler
I0513 18:40:25.308328  107570 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:csi-external-provisioner: (838.536µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53258]
I0513 18:40:25.310217  107570 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.589477ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53258]
I0513 18:40:25.310397  107570 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:csi-external-provisioner
I0513 18:40:25.311330  107570 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:attachdetach-controller: (791.04µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53258]
I0513 18:40:25.313130  107570 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.357596ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53258]
I0513 18:40:25.313558  107570 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:attachdetach-controller
I0513 18:40:25.314495  107570 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:clusterrole-aggregation-controller: (715.043µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53258]
I0513 18:40:25.316553  107570 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.347364ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53258]
I0513 18:40:25.316723  107570 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:clusterrole-aggregation-controller
I0513 18:40:25.317584  107570 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:cronjob-controller: (733.398µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53258]
I0513 18:40:25.319215  107570 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.242926ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53258]
I0513 18:40:25.319453  107570 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:cronjob-controller
I0513 18:40:25.321253  107570 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:daemon-set-controller: (662.561µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53258]
I0513 18:40:25.323140  107570 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.410294ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53258]
I0513 18:40:25.323483  107570 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:daemon-set-controller
I0513 18:40:25.324488  107570 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:deployment-controller: (737.136µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53258]
I0513 18:40:25.326171  107570 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.283873ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53258]
I0513 18:40:25.326420  107570 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:deployment-controller
I0513 18:40:25.327411  107570 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:disruption-controller: (805.931µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53258]
I0513 18:40:25.329581  107570 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.753546ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53258]
I0513 18:40:25.330173  107570 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:disruption-controller
I0513 18:40:25.331125  107570 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:endpoint-controller: (740.818µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53258]
I0513 18:40:25.335712  107570 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (3.893501ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53258]
I0513 18:40:25.336213  107570 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:endpoint-controller
I0513 18:40:25.337323  107570 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:expand-controller: (798.88µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53258]
I0513 18:40:25.339804  107570 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.003503ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53258]
I0513 18:40:25.340257  107570 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:expand-controller
I0513 18:40:25.341420  107570 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:generic-garbage-collector: (829.936µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53258]
I0513 18:40:25.343186  107570 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.403855ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53258]
I0513 18:40:25.345163  107570 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0513 18:40:25.345246  107570 healthz.go:184] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0513 18:40:25.345484  107570 wrap.go:47] GET /healthz: (1.699953ms) 500
goroutine 30145 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc012f62af0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc012f62af0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0105874a0, 0x1f4)
net/http.Error(0x7fb1c84ad738, 0xc00931c838, 0xc00d700b40, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fb1c84ad738, 0xc00931c838, 0xc012f7ab00)
net/http.HandlerFunc.ServeHTTP(0xc00bf31380, 0x7fb1c84ad738, 0xc00931c838, 0xc012f7ab00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0128beb80, 0x7fb1c84ad738, 0xc00931c838, 0xc012f7ab00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00b29b2d0, 0x7fb1c84ad738, 0xc00931c838, 0xc012f7ab00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x4356850, 0xe, 0xc00d294ab0, 0xc00b29b2d0, 0x7fb1c84ad738, 0xc00931c838, 0xc012f7ab00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fb1c84ad738, 0xc00931c838, 0xc012f7ab00)
net/http.HandlerFunc.ServeHTTP(0xc00bcbcc80, 0x7fb1c84ad738, 0xc00931c838, 0xc012f7ab00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fb1c84ad738, 0xc00931c838, 0xc012f7ab00)
net/http.HandlerFunc.ServeHTTP(0xc00b85f680, 0x7fb1c84ad738, 0xc00931c838, 0xc012f7ab00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fb1c84ad738, 0xc00931c838, 0xc012f7ab00)
net/http.HandlerFunc.ServeHTTP(0xc00bcbccc0, 0x7fb1c84ad738, 0xc00931c838, 0xc012f7ab00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fb1c84ad738, 0xc00931c838, 0xc012f7aa00)
net/http.HandlerFunc.ServeHTTP(0xc00fc94780, 0x7fb1c84ad738, 0xc00931c838, 0xc012f7aa00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc012d43c80, 0xc004ab2220, 0x7374040, 0xc00931c838, 0xc012f7aa00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53256]
I0513 18:40:25.345768  107570 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:generic-garbage-collector
I0513 18:40:25.346618  107570 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:horizontal-pod-autoscaler: (645.695µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53256]
I0513 18:40:25.348462  107570 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.383513ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53256]
I0513 18:40:25.348857  107570 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:horizontal-pod-autoscaler
I0513 18:40:25.349943  107570 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:job-controller: (854.689µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53256]
I0513 18:40:25.353251  107570 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.853282ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53256]
I0513 18:40:25.353518  107570 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:job-controller
I0513 18:40:25.354639  107570 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:namespace-controller: (894.024µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53256]
I0513 18:40:25.356523  107570 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.347627ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53256]
I0513 18:40:25.356969  107570 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:namespace-controller
I0513 18:40:25.358205  107570 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:node-controller: (971.73µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53256]
I0513 18:40:25.360192  107570 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.42659ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53256]
I0513 18:40:25.360426  107570 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:node-controller
I0513 18:40:25.361769  107570 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:persistent-volume-binder: (1.080032ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53256]
I0513 18:40:25.363937  107570 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.721358ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53256]
I0513 18:40:25.364284  107570 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:persistent-volume-binder
I0513 18:40:25.365870  107570 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pod-garbage-collector: (1.168828ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53256]
I0513 18:40:25.367907  107570 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.570095ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53256]
I0513 18:40:25.368134  107570 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:pod-garbage-collector
I0513 18:40:25.369192  107570 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:replicaset-controller: (808.821µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53256]
I0513 18:40:25.371037  107570 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.434138ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53256]
I0513 18:40:25.371386  107570 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:replicaset-controller
I0513 18:40:25.372568  107570 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:replication-controller: (979.77µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53256]
I0513 18:40:25.373682  107570 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0513 18:40:25.373706  107570 healthz.go:184] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0513 18:40:25.373880  107570 wrap.go:47] GET /healthz: (999.163µs) 500
goroutine 30311 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc013374bd0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc013374bd0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc010cf5760, 0x1f4)
net/http.Error(0x7fb1c84ad738, 0xc010bae9e8, 0xc004438b40, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fb1c84ad738, 0xc010bae9e8, 0xc013377600)
net/http.HandlerFunc.ServeHTTP(0xc00bf31380, 0x7fb1c84ad738, 0xc010bae9e8, 0xc013377600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0128beb80, 0x7fb1c84ad738, 0xc010bae9e8, 0xc013377600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00b29b2d0, 0x7fb1c84ad738, 0xc010bae9e8, 0xc013377600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x4356850, 0xe, 0xc00d294ab0, 0xc00b29b2d0, 0x7fb1c84ad738, 0xc010bae9e8, 0xc013377600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fb1c84ad738, 0xc010bae9e8, 0xc013377600)
net/http.HandlerFunc.ServeHTTP(0xc00bcbcc80, 0x7fb1c84ad738, 0xc010bae9e8, 0xc013377600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fb1c84ad738, 0xc010bae9e8, 0xc013377600)
net/http.HandlerFunc.ServeHTTP(0xc00b85f680, 0x7fb1c84ad738, 0xc010bae9e8, 0xc013377600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fb1c84ad738, 0xc010bae9e8, 0xc013377600)
net/http.HandlerFunc.ServeHTTP(0xc00bcbccc0, 0x7fb1c84ad738, 0xc010bae9e8, 0xc013377600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fb1c84ad738, 0xc010bae9e8, 0xc013377500)
net/http.HandlerFunc.ServeHTTP(0xc00fc94780, 0x7fb1c84ad738, 0xc010bae9e8, 0xc013377500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc01343e180, 0xc004ab2220, 0x7374040, 0xc010bae9e8, 0xc013377500)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:53256]
I0513 18:40:25.376781  107570 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (3.820554ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53258]
I0513 18:40:25.377210  107570 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:replication-controller
I0513 18:40:25.378563  107570 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:resourcequota-controller: (1.134684ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53258]
I0513 18:40:25.380686  107570 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.557972ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53258]
I0513 18:40:25.381258  107570 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:resourcequota-controller
I0513 18:40:25.382458  107570 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:route-controller: (939.378µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53258]
I0513 18:40:25.384624  107570 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.656945ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53258]
I0513 18:40:25.384890  107570 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:route-controller
I0513 18:40:25.385982  107570 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:service-account-controller: (898.836µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53258]
I0513 18:40:25.391787  107570 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (5.293015ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53258]
I0513 18:40:25.392155  107570 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:service-account-controller
I0513 18:40:25.393888  107570 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:service-controller: (1.492956ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53258]
I0513 18:40:25.396081  107570 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.7671ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53258]
I0513 18:40:25.396360  107570 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:service-controller
I0513 18:40:25.397632  107570 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:statefulset-controller: (1.056237ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53258]
I0513 18:40:25.400370  107570 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.289269ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53258]
I0513 18:40:25.400604  107570 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:statefulset-controller
I0513 18:40:25.401973  107570 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:ttl-controller: (1.112183ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53258]
I0513 18:40:25.404159  107570 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.768827ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53258]
I0513 18:40:25.404622  107570 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:ttl-controller
I0513 18:40:25.405998  107570 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:certificate-controller: (1.186061ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53258]
I0513 18:40:25.408185  107570 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.797227ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53258]
I0513 18:40:25.408467  107570 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:certificate-controller
I0513 18:40:25.413076  107570 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pvc-protection-controller: (1.057007ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53258]
I0513 18:40:25.435533  107570 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (3.425568ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53258]
I0513 18:40:25.435870  107570 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:pvc-protection-controller
I0513 18:40:25.445618  107570 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0513 18:40:25.445661  107570 healthz.go:184] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0513 18:40:25.445850  107570 wrap.go:47] GET /healthz: (1.454472ms) 500
goroutine 30490 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc01348dce0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc01348dce0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc010eb93a0, 0x1f4)
net/http.Error(0x7fb1c84ad738, 0xc004437a68, 0xc00d701180, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fb1c84ad738, 0xc004437a68, 0xc0134fb000)
net/http.HandlerFunc.ServeHTTP(0xc00bf31380, 0x7fb1c84ad738, 0xc004437a68, 0xc0134fb000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0128beb80, 0x7fb1c84ad738, 0xc004437a68, 0xc0134fb000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00b29b2d0, 0x7fb1c84ad738, 0xc004437a68, 0xc0134fb000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x4356850, 0xe, 0xc00d294ab0, 0xc00b29b2d0, 0x7fb1c84ad738, 0xc004437a68, 0xc0134fb000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fb1c84ad738, 0xc004437a68, 0xc0134fb000)
net/http.HandlerFunc.ServeHTTP(0xc00bcbcc80, 0x7fb1c84ad738, 0xc004437a68, 0xc0134fb000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fb1c84ad738, 0xc004437a68, 0xc0134fb000)
net/http.HandlerFunc.ServeHTTP(0xc00b85f680, 0x7fb1c84ad738, 0xc004437a68, 0xc0134fb000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fb1c84ad738, 0xc004437a68, 0xc0134fb000)
net/http.HandlerFunc.ServeHTTP(0xc00bcbccc0, 0x7fb1c84ad738, 0xc004437a68, 0xc0134fb000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fb1c84ad738, 0xc004437a68, 0xc0134faf00)
net/http.HandlerFunc.ServeHTTP(0xc00fc94780, 0x7fb1c84ad738, 0xc004437a68, 0xc0134faf00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc0134f9560, 0xc004ab2220, 0x7374040, 0xc004437a68, 0xc0134faf00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53258]
I0513 18:40:25.457086  107570 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pv-protection-controller: (4.381136ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53258]
I0513 18:40:25.474345  107570 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0513 18:40:25.474386  107570 healthz.go:184] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0513 18:40:25.474768  107570 wrap.go:47] GET /healthz: (1.802866ms) 500
goroutine 30440 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc013332af0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc013332af0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc010c23f20, 0x1f4)
net/http.Error(0x7fb1c84ad738, 0xc0108f5e00, 0xc013144c80, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fb1c84ad738, 0xc0108f5e00, 0xc0134b2700)
net/http.HandlerFunc.ServeHTTP(0xc00bf31380, 0x7fb1c84ad738, 0xc0108f5e00, 0xc0134b2700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0128beb80, 0x7fb1c84ad738, 0xc0108f5e00, 0xc0134b2700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00b29b2d0, 0x7fb1c84ad738, 0xc0108f5e00, 0xc0134b2700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x4356850, 0xe, 0xc00d294ab0, 0xc00b29b2d0, 0x7fb1c84ad738, 0xc0108f5e00, 0xc0134b2700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fb1c84ad738, 0xc0108f5e00, 0xc0134b2700)
net/http.HandlerFunc.ServeHTTP(0xc00bcbcc80, 0x7fb1c84ad738, 0xc0108f5e00, 0xc0134b2700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fb1c84ad738, 0xc0108f5e00, 0xc0134b2700)
net/http.HandlerFunc.ServeHTTP(0xc00b85f680, 0x7fb1c84ad738, 0xc0108f5e00, 0xc0134b2700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fb1c84ad738, 0xc0108f5e00, 0xc0134b2700)
net/http.HandlerFunc.ServeHTTP(0xc00bcbccc0, 0x7fb1c84ad738, 0xc0108f5e00, 0xc0134b2700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fb1c84ad738, 0xc0108f5e00, 0xc0134b2600)
net/http.HandlerFunc.ServeHTTP(0xc00fc94780, 0x7fb1c84ad738, 0xc0108f5e00, 0xc0134b2600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc0134ae840, 0xc004ab2220, 0x7374040, 0xc0108f5e00, 0xc0134b2600)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:53256]
I0513 18:40:25.474863  107570 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.721247ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53258]
I0513 18:40:25.475104  107570 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:pv-protection-controller
I0513 18:40:25.494298  107570 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/cluster-admin: (2.194989ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53258]
I0513 18:40:25.515475  107570 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.34276ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53258]
I0513 18:40:25.515839  107570 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/cluster-admin
I0513 18:40:25.533774  107570 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:discovery: (1.679588ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53258]
I0513 18:40:25.545211  107570 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0513 18:40:25.545304  107570 healthz.go:184] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0513 18:40:25.545518  107570 wrap.go:47] GET /healthz: (1.511751ms) 500
goroutine 30416 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc013508930, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc013508930, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc010ef0fe0, 0x1f4)
net/http.Error(0x7fb1c84ad738, 0xc00931cdc0, 0xc002fcd540, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fb1c84ad738, 0xc00931cdc0, 0xc013516c00)
net/http.HandlerFunc.ServeHTTP(0xc00bf31380, 0x7fb1c84ad738, 0xc00931cdc0, 0xc013516c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0128beb80, 0x7fb1c84ad738, 0xc00931cdc0, 0xc013516c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00b29b2d0, 0x7fb1c84ad738, 0xc00931cdc0, 0xc013516c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x4356850, 0xe, 0xc00d294ab0, 0xc00b29b2d0, 0x7fb1c84ad738, 0xc00931cdc0, 0xc013516c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fb1c84ad738, 0xc00931cdc0, 0xc013516c00)
net/http.HandlerFunc.ServeHTTP(0xc00bcbcc80, 0x7fb1c84ad738, 0xc00931cdc0, 0xc013516c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fb1c84ad738, 0xc00931cdc0, 0xc013516c00)
net/http.HandlerFunc.ServeHTTP(0xc00b85f680, 0x7fb1c84ad738, 0xc00931cdc0, 0xc013516c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fb1c84ad738, 0xc00931cdc0, 0xc013516c00)
net/http.HandlerFunc.ServeHTTP(0xc00bcbccc0, 0x7fb1c84ad738, 0xc00931cdc0, 0xc013516c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fb1c84ad738, 0xc00931cdc0, 0xc013516b00)
net/http.HandlerFunc.ServeHTTP(0xc00fc94780, 0x7fb1c84ad738, 0xc00931cdc0, 0xc013516b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc01345dbc0, 0xc004ab2220, 0x7374040, 0xc00931cdc0, 0xc013516b00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53258]
I0513 18:40:25.554104  107570 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.108993ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53258]
I0513 18:40:25.554340  107570 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:discovery
I0513 18:40:25.574401  107570 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0513 18:40:25.574438  107570 healthz.go:184] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0513 18:40:25.574557  107570 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:basic-user: (1.678029ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53258]
I0513 18:40:25.574633  107570 wrap.go:47] GET /healthz: (1.658076ms) 500
goroutine 30492 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc01348de30, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc01348de30, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc010eb9600, 0x1f4)
net/http.Error(0x7fb1c84ad738, 0xc004437aa0, 0xc0135c23c0, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fb1c84ad738, 0xc004437aa0, 0xc0134fb800)
net/http.HandlerFunc.ServeHTTP(0xc00bf31380, 0x7fb1c84ad738, 0xc004437aa0, 0xc0134fb800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0128beb80, 0x7fb1c84ad738, 0xc004437aa0, 0xc0134fb800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00b29b2d0, 0x7fb1c84ad738, 0xc004437aa0, 0xc0134fb800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x4356850, 0xe, 0xc00d294ab0, 0xc00b29b2d0, 0x7fb1c84ad738, 0xc004437aa0, 0xc0134fb800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fb1c84ad738, 0xc004437aa0, 0xc0134fb800)
net/http.HandlerFunc.ServeHTTP(0xc00bcbcc80, 0x7fb1c84ad738, 0xc004437aa0, 0xc0134fb800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fb1c84ad738, 0xc004437aa0, 0xc0134fb800)
net/http.HandlerFunc.ServeHTTP(0xc00b85f680, 0x7fb1c84ad738, 0xc004437aa0, 0xc0134fb800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fb1c84ad738, 0xc004437aa0, 0xc0134fb800)
net/http.HandlerFunc.ServeHTTP(0xc00bcbccc0, 0x7fb1c84ad738, 0xc004437aa0, 0xc0134fb800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fb1c84ad738, 0xc004437aa0, 0xc0134fb700)
net/http.HandlerFunc.ServeHTTP(0xc00fc94780, 0x7fb1c84ad738, 0xc004437aa0, 0xc0134fb700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc0134f9980, 0xc004ab2220, 0x7374040, 0xc004437aa0, 0xc0134fb700)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:53256]
I0513 18:40:25.594810  107570 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.582169ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53256]
I0513 18:40:25.595206  107570 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:basic-user
I0513 18:40:25.613857  107570 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:public-info-viewer: (1.718368ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53256]
I0513 18:40:25.634483  107570 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.352437ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53256]
I0513 18:40:25.634718  107570 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:public-info-viewer
I0513 18:40:25.645239  107570 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0513 18:40:25.645275  107570 healthz.go:184] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0513 18:40:25.645480  107570 wrap.go:47] GET /healthz: (1.469048ms) 500
goroutine 30499 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0135f4bd0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0135f4bd0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc011025380, 0x1f4)
net/http.Error(0x7fb1c84ad738, 0xc004437c58, 0xc0135503c0, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fb1c84ad738, 0xc004437c58, 0xc013618c00)
net/http.HandlerFunc.ServeHTTP(0xc00bf31380, 0x7fb1c84ad738, 0xc004437c58, 0xc013618c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0128beb80, 0x7fb1c84ad738, 0xc004437c58, 0xc013618c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00b29b2d0, 0x7fb1c84ad738, 0xc004437c58, 0xc013618c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x4356850, 0xe, 0xc00d294ab0, 0xc00b29b2d0, 0x7fb1c84ad738, 0xc004437c58, 0xc013618c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fb1c84ad738, 0xc004437c58, 0xc013618c00)
net/http.HandlerFunc.ServeHTTP(0xc00bcbcc80, 0x7fb1c84ad738, 0xc004437c58, 0xc013618c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fb1c84ad738, 0xc004437c58, 0xc013618c00)
net/http.HandlerFunc.ServeHTTP(0xc00b85f680, 0x7fb1c84ad738, 0xc004437c58, 0xc013618c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fb1c84ad738, 0xc004437c58, 0xc013618c00)
net/http.HandlerFunc.ServeHTTP(0xc00bcbccc0, 0x7fb1c84ad738, 0xc004437c58, 0xc013618c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fb1c84ad738, 0xc004437c58, 0xc013618b00)
net/http.HandlerFunc.ServeHTTP(0xc00fc94780, 0x7fb1c84ad738, 0xc004437c58, 0xc013618b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc013614960, 0xc004ab2220, 0x7374040, 0xc004437c58, 0xc013618b00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53256]
I0513 18:40:25.653959  107570 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:node-proxier: (1.933341ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53256]
I0513 18:40:25.674549  107570 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0513 18:40:25.674585  107570 healthz.go:184] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0513 18:40:25.674750  107570 wrap.go:47] GET /healthz: (1.54935ms) 500
goroutine 30530 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc013508bd0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc013508bd0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc010ef1b20, 0x1f4)
net/http.Error(0x7fb1c84ad738, 0xc00931ce78, 0xc0062b1540, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fb1c84ad738, 0xc00931ce78, 0xc013517a00)
net/http.HandlerFunc.ServeHTTP(0xc00bf31380, 0x7fb1c84ad738, 0xc00931ce78, 0xc013517a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0128beb80, 0x7fb1c84ad738, 0xc00931ce78, 0xc013517a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00b29b2d0, 0x7fb1c84ad738, 0xc00931ce78, 0xc013517a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x4356850, 0xe, 0xc00d294ab0, 0xc00b29b2d0, 0x7fb1c84ad738, 0xc00931ce78, 0xc013517a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fb1c84ad738, 0xc00931ce78, 0xc013517a00)
net/http.HandlerFunc.ServeHTTP(0xc00bcbcc80, 0x7fb1c84ad738, 0xc00931ce78, 0xc013517a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fb1c84ad738, 0xc00931ce78, 0xc013517a00)
net/http.HandlerFunc.ServeHTTP(0xc00b85f680, 0x7fb1c84ad738, 0xc00931ce78, 0xc013517a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fb1c84ad738, 0xc00931ce78, 0xc013517a00)
net/http.HandlerFunc.ServeHTTP(0xc00bcbccc0, 0x7fb1c84ad738, 0xc00931ce78, 0xc013517a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fb1c84ad738, 0xc00931ce78, 0xc013517900)
net/http.HandlerFunc.ServeHTTP(0xc00fc94780, 0x7fb1c84ad738, 0xc00931ce78, 0xc013517900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc0136105a0, 0xc004ab2220, 0x7374040, 0xc00931ce78, 0xc013517900)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:53258]
I0513 18:40:25.675207  107570 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.927127ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53256]
I0513 18:40:25.675425  107570 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:node-proxier
I0513 18:40:25.693489  107570 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-controller-manager: (1.46583ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53256]
I0513 18:40:25.715383  107570 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.227719ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53256]
I0513 18:40:25.716020  107570 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-controller-manager
I0513 18:40:25.733529  107570 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-dns: (1.448629ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53256]
I0513 18:40:25.745358  107570 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0513 18:40:25.745410  107570 healthz.go:184] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0513 18:40:25.745689  107570 wrap.go:47] GET /healthz: (1.644875ms) 500
goroutine 30562 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc013299b20, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc013299b20, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc010fd6600, 0x1f4)
net/http.Error(0x7fb1c84ad738, 0xc0134d0340, 0xc013145540, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fb1c84ad738, 0xc0134d0340, 0xc013698000)
net/http.HandlerFunc.ServeHTTP(0xc00bf31380, 0x7fb1c84ad738, 0xc0134d0340, 0xc013698000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0128beb80, 0x7fb1c84ad738, 0xc0134d0340, 0xc013698000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00b29b2d0, 0x7fb1c84ad738, 0xc0134d0340, 0xc013698000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x4356850, 0xe, 0xc00d294ab0, 0xc00b29b2d0, 0x7fb1c84ad738, 0xc0134d0340, 0xc013698000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fb1c84ad738, 0xc0134d0340, 0xc013698000)
net/http.HandlerFunc.ServeHTTP(0xc00bcbcc80, 0x7fb1c84ad738, 0xc0134d0340, 0xc013698000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fb1c84ad738, 0xc0134d0340, 0xc013698000)
net/http.HandlerFunc.ServeHTTP(0xc00b85f680, 0x7fb1c84ad738, 0xc0134d0340, 0xc013698000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fb1c84ad738, 0xc0134d0340, 0xc013698000)
net/http.HandlerFunc.ServeHTTP(0xc00bcbccc0, 0x7fb1c84ad738, 0xc0134d0340, 0xc013698000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fb1c84ad738, 0xc0134d0340, 0xc0134d3f00)
net/http.HandlerFunc.ServeHTTP(0xc00fc94780, 0x7fb1c84ad738, 0xc0134d0340, 0xc0134d3f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc0135dc2a0, 0xc004ab2220, 0x7374040, 0xc0134d0340, 0xc0134d3f00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53256]
I0513 18:40:25.754176  107570 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.127948ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53256]
I0513 18:40:25.754445  107570 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-dns
I0513 18:40:25.773509  107570 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-scheduler: (1.412082ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53256]
I0513 18:40:25.773996  107570 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0513 18:40:25.774024  107570 healthz.go:184] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0513 18:40:25.774181  107570 wrap.go:47] GET /healthz: (1.266891ms) 500
goroutine 30554 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc012d130a0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc012d130a0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00bcad940, 0x1f4)
net/http.Error(0x7fb1c84ad738, 0xc0079be630, 0xc0036c7680, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fb1c84ad738, 0xc0079be630, 0xc013687200)
net/http.HandlerFunc.ServeHTTP(0xc00bf31380, 0x7fb1c84ad738, 0xc0079be630, 0xc013687200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0128beb80, 0x7fb1c84ad738, 0xc0079be630, 0xc013687200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00b29b2d0, 0x7fb1c84ad738, 0xc0079be630, 0xc013687200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x4356850, 0xe, 0xc00d294ab0, 0xc00b29b2d0, 0x7fb1c84ad738, 0xc0079be630, 0xc013687200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fb1c84ad738, 0xc0079be630, 0xc013687200)
net/http.HandlerFunc.ServeHTTP(0xc00bcbcc80, 0x7fb1c84ad738, 0xc0079be630, 0xc013687200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fb1c84ad738, 0xc0079be630, 0xc013687200)
net/http.HandlerFunc.ServeHTTP(0xc00b85f680, 0x7fb1c84ad738, 0xc0079be630, 0xc013687200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fb1c84ad738, 0xc0079be630, 0xc013687200)
net/http.HandlerFunc.ServeHTTP(0xc00bcbccc0, 0x7fb1c84ad738, 0xc0079be630, 0xc013687200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fb1c84ad738, 0xc0079be630, 0xc013687100)
net/http.HandlerFunc.ServeHTTP(0xc00fc94780, 0x7fb1c84ad738, 0xc0079be630, 0xc013687100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc013682ba0, 0xc004ab2220, 0x7374040, 0xc0079be630, 0xc013687100)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:53258]
I0513 18:40:25.795302  107570 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.92846ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53258]
I0513 18:40:25.795583  107570 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-scheduler
I0513 18:40:25.813490  107570 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:volume-scheduler: (1.273634ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53258]
I0513 18:40:25.834224  107570 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.073437ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53258]
I0513 18:40:25.834492  107570 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:volume-scheduler
I0513 18:40:25.845026  107570 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0513 18:40:25.845057  107570 healthz.go:184] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0513 18:40:25.845237  107570 wrap.go:47] GET /healthz: (1.320613ms) 500
goroutine 30571 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0136ea3f0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0136ea3f0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc010fd7d00, 0x1f4)
net/http.Error(0x7fb1c84ad738, 0xc0134d0480, 0xc013145a40, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fb1c84ad738, 0xc0134d0480, 0xc013698e00)
net/http.HandlerFunc.ServeHTTP(0xc00bf31380, 0x7fb1c84ad738, 0xc0134d0480, 0xc013698e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0128beb80, 0x7fb1c84ad738, 0xc0134d0480, 0xc013698e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00b29b2d0, 0x7fb1c84ad738, 0xc0134d0480, 0xc013698e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x4356850, 0xe, 0xc00d294ab0, 0xc00b29b2d0, 0x7fb1c84ad738, 0xc0134d0480, 0xc013698e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fb1c84ad738, 0xc0134d0480, 0xc013698e00)
net/http.HandlerFunc.ServeHTTP(0xc00bcbcc80, 0x7fb1c84ad738, 0xc0134d0480, 0xc013698e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fb1c84ad738, 0xc0134d0480, 0xc013698e00)
net/http.HandlerFunc.ServeHTTP(0xc00b85f680, 0x7fb1c84ad738, 0xc0134d0480, 0xc013698e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fb1c84ad738, 0xc0134d0480, 0xc013698e00)
net/http.HandlerFunc.ServeHTTP(0xc00bcbccc0, 0x7fb1c84ad738, 0xc0134d0480, 0xc013698e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fb1c84ad738, 0xc0134d0480, 0xc013698d00)
net/http.HandlerFunc.ServeHTTP(0xc00fc94780, 0x7fb1c84ad738, 0xc0134d0480, 0xc013698d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc0135dc9c0, 0xc004ab2220, 0x7374040, 0xc0134d0480, 0xc013698d00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53258]
I0513 18:40:25.853511  107570 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:node: (1.554218ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53258]
I0513 18:40:25.874319  107570 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0513 18:40:25.874457  107570 healthz.go:184] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0513 18:40:25.874520  107570 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.43664ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53258]
I0513 18:40:25.874708  107570 wrap.go:47] GET /healthz: (1.889536ms) 500
goroutine 30533 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc013509260, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc013509260, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc01105a6a0, 0x1f4)
net/http.Error(0x7fb1c84ad738, 0xc00931cf70, 0xc013550a00, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fb1c84ad738, 0xc00931cf70, 0xc0136fa500)
net/http.HandlerFunc.ServeHTTP(0xc00bf31380, 0x7fb1c84ad738, 0xc00931cf70, 0xc0136fa500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0128beb80, 0x7fb1c84ad738, 0xc00931cf70, 0xc0136fa500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00b29b2d0, 0x7fb1c84ad738, 0xc00931cf70, 0xc0136fa500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x4356850, 0xe, 0xc00d294ab0, 0xc00b29b2d0, 0x7fb1c84ad738, 0xc00931cf70, 0xc0136fa500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fb1c84ad738, 0xc00931cf70, 0xc0136fa500)
net/http.HandlerFunc.ServeHTTP(0xc00bcbcc80, 0x7fb1c84ad738, 0xc00931cf70, 0xc0136fa500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fb1c84ad738, 0xc00931cf70, 0xc0136fa500)
net/http.HandlerFunc.ServeHTTP(0xc00b85f680, 0x7fb1c84ad738, 0xc00931cf70, 0xc0136fa500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fb1c84ad738, 0xc00931cf70, 0xc0136fa500)
net/http.HandlerFunc.ServeHTTP(0xc00bcbccc0, 0x7fb1c84ad738, 0xc00931cf70, 0xc0136fa500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fb1c84ad738, 0xc00931cf70, 0xc0136fa400)
net/http.HandlerFunc.ServeHTTP(0xc00fc94780, 0x7fb1c84ad738, 0xc00931cf70, 0xc0136fa400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc013610ba0, 0xc004ab2220, 0x7374040, 0xc00931cf70, 0xc0136fa400)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:53256]
I0513 18:40:25.875020  107570 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:node
I0513 18:40:25.894222  107570 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:attachdetach-controller: (2.078534ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53256]
I0513 18:40:25.915306  107570 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.775237ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53256]
I0513 18:40:25.915597  107570 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:attachdetach-controller
I0513 18:40:25.933638  107570 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:clusterrole-aggregation-controller: (1.596104ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53256]
I0513 18:40:25.944929  107570 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0513 18:40:25.944964  107570 healthz.go:184] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0513 18:40:25.945142  107570 wrap.go:47] GET /healthz: (1.166879ms) 500
goroutine 30501 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0135f5110, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0135f5110, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc011025f40, 0x1f4)
net/http.Error(0x7fb1c84ad738, 0xc004437d58, 0xc013551040, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fb1c84ad738, 0xc004437d58, 0xc013619600)
net/http.HandlerFunc.ServeHTTP(0xc00bf31380, 0x7fb1c84ad738, 0xc004437d58, 0xc013619600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0128beb80, 0x7fb1c84ad738, 0xc004437d58, 0xc013619600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00b29b2d0, 0x7fb1c84ad738, 0xc004437d58, 0xc013619600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x4356850, 0xe, 0xc00d294ab0, 0xc00b29b2d0, 0x7fb1c84ad738, 0xc004437d58, 0xc013619600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fb1c84ad738, 0xc004437d58, 0xc013619600)
net/http.HandlerFunc.ServeHTTP(0xc00bcbcc80, 0x7fb1c84ad738, 0xc004437d58, 0xc013619600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fb1c84ad738, 0xc004437d58, 0xc013619600)
net/http.HandlerFunc.ServeHTTP(0xc00b85f680, 0x7fb1c84ad738, 0xc004437d58, 0xc013619600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fb1c84ad738, 0xc004437d58, 0xc013619600)
net/http.HandlerFunc.ServeHTTP(0xc00bcbccc0, 0x7fb1c84ad738, 0xc004437d58, 0xc013619600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fb1c84ad738, 0xc004437d58, 0xc013619500)
net/http.HandlerFunc.ServeHTTP(0xc00fc94780, 0x7fb1c84ad738, 0xc004437d58, 0xc013619500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc013614f00, 0xc004ab2220, 0x7374040, 0xc004437d58, 0xc013619500)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53256]
I0513 18:40:25.954262  107570 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.266138ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53256]
I0513 18:40:25.954787  107570 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:clusterrole-aggregation-controller
I0513 18:40:25.973743  107570 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:cronjob-controller: (1.655797ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53256]
I0513 18:40:25.974635  107570 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0513 18:40:25.974672  107570 healthz.go:184] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0513 18:40:25.974839  107570 wrap.go:47] GET /healthz: (1.097026ms) 500
goroutine 30504 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0135f5650, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0135f5650, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0111b4800, 0x1f4)
net/http.Error(0x7fb1c84ad738, 0xc004437e98, 0xc0137b4000, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fb1c84ad738, 0xc004437e98, 0xc0137a8300)
net/http.HandlerFunc.ServeHTTP(0xc00bf31380, 0x7fb1c84ad738, 0xc004437e98, 0xc0137a8300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0128beb80, 0x7fb1c84ad738, 0xc004437e98, 0xc0137a8300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00b29b2d0, 0x7fb1c84ad738, 0xc004437e98, 0xc0137a8300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x4356850, 0xe, 0xc00d294ab0, 0xc00b29b2d0, 0x7fb1c84ad738, 0xc004437e98, 0xc0137a8300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fb1c84ad738, 0xc004437e98, 0xc0137a8300)
net/http.HandlerFunc.ServeHTTP(0xc00bcbcc80, 0x7fb1c84ad738, 0xc004437e98, 0xc0137a8300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fb1c84ad738, 0xc004437e98, 0xc0137a8300)
net/http.HandlerFunc.ServeHTTP(0xc00b85f680, 0x7fb1c84ad738, 0xc004437e98, 0xc0137a8300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fb1c84ad738, 0xc004437e98, 0xc0137a8300)
net/http.HandlerFunc.ServeHTTP(0xc00bcbccc0, 0x7fb1c84ad738, 0xc004437e98, 0xc0137a8300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fb1c84ad738, 0xc004437e98, 0xc0137a8200)
net/http.HandlerFunc.ServeHTTP(0xc00fc94780, 0x7fb1c84ad738, 0xc004437e98, 0xc0137a8200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc0136154a0, 0xc004ab2220, 0x7374040, 0xc004437e98, 0xc0137a8200)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:53258]
I0513 18:40:25.994353  107570 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.964969ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53258]
I0513 18:40:25.994719  107570 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:cronjob-controller
I0513 18:40:26.013839  107570 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:daemon-set-controller: (1.82684ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53258]
I0513 18:40:26.038644  107570 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (6.597543ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53258]
I0513 18:40:26.038971  107570 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:daemon-set-controller
I0513 18:40:26.044807  107570 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0513 18:40:26.044900  107570 healthz.go:184] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0513 18:40:26.045161  107570 wrap.go:47] GET /healthz: (1.355373ms) 500
goroutine 30647 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc013509b20, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc013509b20, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc01105bd80, 0x1f4)
net/http.Error(0x7fb1c84ad738, 0xc00931d118, 0xc00d701900, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fb1c84ad738, 0xc00931d118, 0xc0136fbe00)
net/http.HandlerFunc.ServeHTTP(0xc00bf31380, 0x7fb1c84ad738, 0xc00931d118, 0xc0136fbe00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0128beb80, 0x7fb1c84ad738, 0xc00931d118, 0xc0136fbe00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00b29b2d0, 0x7fb1c84ad738, 0xc00931d118, 0xc0136fbe00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x4356850, 0xe, 0xc00d294ab0, 0xc00b29b2d0, 0x7fb1c84ad738, 0xc00931d118, 0xc0136fbe00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fb1c84ad738, 0xc00931d118, 0xc0136fbe00)
net/http.HandlerFunc.ServeHTTP(0xc00bcbcc80, 0x7fb1c84ad738, 0xc00931d118, 0xc0136fbe00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fb1c84ad738, 0xc00931d118, 0xc0136fbe00)
net/http.HandlerFunc.ServeHTTP(0xc00b85f680, 0x7fb1c84ad738, 0xc00931d118, 0xc0136fbe00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fb1c84ad738, 0xc00931d118, 0xc0136fbe00)
net/http.HandlerFunc.ServeHTTP(0xc00bcbccc0, 0x7fb1c84ad738, 0xc00931d118, 0xc0136fbe00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fb1c84ad738, 0xc00931d118, 0xc0136fbd00)
net/http.HandlerFunc.ServeHTTP(0xc00fc94780, 0x7fb1c84ad738, 0xc00931d118, 0xc0136fbd00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc013611740, 0xc004ab2220, 0x7374040, 0xc00931d118, 0xc0136fbd00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53258]
I0513 18:40:26.054461  107570 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:deployment-controller: (2.20502ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53258]
I0513 18:40:26.074348  107570 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.010059ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53258]
I0513 18:40:26.074539  107570 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0513 18:40:26.074576  107570 healthz.go:184] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0513 18:40:26.074604  107570 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:deployment-controller
I0513 18:40:26.074927  107570 wrap.go:47] GET /healthz: (2.046956ms) 500
goroutine 30641 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0137aa540, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0137aa540, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc01116b8c0, 0x1f4)
net/http.Error(0x7fb1c84ad738, 0xc0079be948, 0xc004439040, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fb1c84ad738, 0xc0079be948, 0xc0137d0d00)
net/http.HandlerFunc.ServeHTTP(0xc00bf31380, 0x7fb1c84ad738, 0xc0079be948, 0xc0137d0d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0128beb80, 0x7fb1c84ad738, 0xc0079be948, 0xc0137d0d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00b29b2d0, 0x7fb1c84ad738, 0xc0079be948, 0xc0137d0d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x4356850, 0xe, 0xc00d294ab0, 0xc00b29b2d0, 0x7fb1c84ad738, 0xc0079be948, 0xc0137d0d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fb1c84ad738, 0xc0079be948, 0xc0137d0d00)
net/http.HandlerFunc.ServeHTTP(0xc00bcbcc80, 0x7fb1c84ad738, 0xc0079be948, 0xc0137d0d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fb1c84ad738, 0xc0079be948, 0xc0137d0d00)
net/http.HandlerFunc.ServeHTTP(0xc00b85f680, 0x7fb1c84ad738, 0xc0079be948, 0xc0137d0d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fb1c84ad738, 0xc0079be948, 0xc0137d0d00)
net/http.HandlerFunc.ServeHTTP(0xc00bcbccc0, 0x7fb1c84ad738, 0xc0079be948, 0xc0137d0d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fb1c84ad738, 0xc0079be948, 0xc0137d0c00)
net/http.HandlerFunc.ServeHTTP(0xc00fc94780, 0x7fb1c84ad738, 0xc0079be948, 0xc0137d0c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc0137b2960, 0xc004ab2220, 0x7374040, 0xc0079be948, 0xc0137d0c00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:53256]
I0513 18:40:26.093363  107570 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:disruption-controller: (1.31786ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53256]
I0513 18:40:26.113753  107570 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.707231ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53256]
I0513 18:40:26.114045  107570 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:disruption-controller
I0513 18:40:26.133403  107570 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:endpoint-controller: (1.284786ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53256]
I0513 18:40:26.144719  107570 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0513 18:40:26.144752  107570 healthz.go:184] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0513 18:40:26.144963  107570 wrap.go:47] GET /healthz: (1.053442ms) 500
goroutine 30659 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0133ff5e0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0133ff5e0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc010f41660, 0x1f4)
net/http.Error(0x7fb1c84ad738, 0xc00fcde868, 0xc0137b4500, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fb1c84ad738, 0xc00fcde868, 0xc01365e400)
net/http.HandlerFunc.ServeHTTP(0xc00bf31380, 0x7fb1c84ad738, 0xc00fcde868, 0xc01365e400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0128beb80, 0x7fb1c84ad738, 0xc00fcde868, 0xc01365e400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00b29b2d0, 0x7fb1c84ad738, 0xc00fcde868, 0xc01365e400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x4356850, 0xe, 0xc00d294ab0, 0xc00b29b2d0, 0x7fb1c84ad738, 0xc00fcde868, 0xc01365e400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fb1c84ad738, 0xc00fcde868, 0xc01365e400)
net/http.HandlerFunc.ServeHTTP(0xc00bcbcc80, 0x7fb1c84ad738, 0xc00fcde868, 0xc01365e400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fb1c84ad738, 0xc00fcde868, 0xc01365e400)
net/http.HandlerFunc.ServeHTTP(0xc00b85f680, 0x7fb1c84ad738, 0xc00fcde868, 0xc01365e400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fb1c84ad738, 0xc00fcde868, 0xc01365e400)
net/http.HandlerFunc.ServeHTTP(0xc00bcbccc0, 0x7fb1c84ad738, 0xc00fcde868, 0xc01365e400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fb1c84ad738, 0xc00fcde868, 0xc01365e300)
net/http.HandlerFunc.ServeHTTP(0xc00fc94780, 0x7fb1c84ad738, 0xc00fcde868, 0xc01365e300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc01353b440, 0xc004ab2220, 0x7374040, 0xc00fcde868, 0xc01365e300)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53256]
I0513 18:40:26.154031  107570 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.07003ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53256]
I0513 18:40:26.154480  107570 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:endpoint-controller
I0513 18:40:26.174072  107570 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0513 18:40:26.174130  107570 healthz.go:184] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0513 18:40:26.174294  107570 wrap.go:47] GET /healthz: (1.21632ms) 500
goroutine 30678 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0137aacb0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0137aacb0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc011248880, 0x1f4)
net/http.Error(0x7fb1c84ad738, 0xc0079bea80, 0xc013886280, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fb1c84ad738, 0xc0079bea80, 0xc0137d1a00)
net/http.HandlerFunc.ServeHTTP(0xc00bf31380, 0x7fb1c84ad738, 0xc0079bea80, 0xc0137d1a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0128beb80, 0x7fb1c84ad738, 0xc0079bea80, 0xc0137d1a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00b29b2d0, 0x7fb1c84ad738, 0xc0079bea80, 0xc0137d1a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x4356850, 0xe, 0xc00d294ab0, 0xc00b29b2d0, 0x7fb1c84ad738, 0xc0079bea80, 0xc0137d1a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fb1c84ad738, 0xc0079bea80, 0xc0137d1a00)
net/http.HandlerFunc.ServeHTTP(0xc00bcbcc80, 0x7fb1c84ad738, 0xc0079bea80, 0xc0137d1a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fb1c84ad738, 0xc0079bea80, 0xc0137d1a00)
net/http.HandlerFunc.ServeHTTP(0xc00b85f680, 0x7fb1c84ad738, 0xc0079bea80, 0xc0137d1a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fb1c84ad738, 0xc0079bea80, 0xc0137d1a00)
net/http.HandlerFunc.ServeHTTP(0xc00bcbccc0, 0x7fb1c84ad738, 0xc0079bea80, 0xc0137d1a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fb1c84ad738, 0xc0079bea80, 0xc0137d1900)
net/http.HandlerFunc.ServeHTTP(0xc00fc94780, 0x7fb1c84ad738, 0xc0079bea80, 0xc0137d1900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc0137b3320, 0xc004ab2220, 0x7374040, 0xc0079bea80, 0xc0137d1900)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:53258]
I0513 18:40:26.174714  107570 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:expand-controller: (1.650801ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53256]
I0513 18:40:26.194342  107570 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.2636ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53256]
I0513 18:40:26.194576  107570 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:expand-controller
I0513 18:40:26.213142  107570 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:generic-garbage-collector: (1.189171ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53256]
I0513 18:40:26.238927  107570 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.846671ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53256]
I0513 18:40:26.239149  107570 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:generic-garbage-collector
I0513 18:40:26.245295  107570 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0513 18:40:26.245329  107570 healthz.go:184] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0513 18:40:26.245491  107570 wrap.go:47] GET /healthz: (1.197256ms) 500
goroutine 30690 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc013834150, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc013834150, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0112c2200, 0x1f4)
net/http.Error(0x7fb1c84ad738, 0xc0137ca130, 0xc013551680, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fb1c84ad738, 0xc0137ca130, 0xc0137a9400)
net/http.HandlerFunc.ServeHTTP(0xc00bf31380, 0x7fb1c84ad738, 0xc0137ca130, 0xc0137a9400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0128beb80, 0x7fb1c84ad738, 0xc0137ca130, 0xc0137a9400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00b29b2d0, 0x7fb1c84ad738, 0xc0137ca130, 0xc0137a9400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x4356850, 0xe, 0xc00d294ab0, 0xc00b29b2d0, 0x7fb1c84ad738, 0xc0137ca130, 0xc0137a9400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fb1c84ad738, 0xc0137ca130, 0xc0137a9400)
net/http.HandlerFunc.ServeHTTP(0xc00bcbcc80, 0x7fb1c84ad738, 0xc0137ca130, 0xc0137a9400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fb1c84ad738, 0xc0137ca130, 0xc0137a9400)
net/http.HandlerFunc.ServeHTTP(0xc00b85f680, 0x7fb1c84ad738, 0xc0137ca130, 0xc0137a9400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fb1c84ad738, 0xc0137ca130, 0xc0137a9400)
net/http.HandlerFunc.ServeHTTP(0xc00bcbccc0, 0x7fb1c84ad738, 0xc0137ca130, 0xc0137a9400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fb1c84ad738, 0xc0137ca130, 0xc0137a9300)
net/http.HandlerFunc.ServeHTTP(0xc00fc94780, 0x7fb1c84ad738, 0xc0137ca130, 0xc0137a9300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc013615f20, 0xc004ab2220, 0x7374040, 0xc0137ca130, 0xc0137a9300)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53256]
I0513 18:40:26.253553  107570 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:horizontal-pod-autoscaler: (1.15255ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53256]
I0513 18:40:26.274176  107570 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0513 18:40:26.274238  107570 healthz.go:184] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0513 18:40:26.274425  107570 wrap.go:47] GET /healthz: (1.452588ms) 500
goroutine 30661 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0133ffb90, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0133ffb90, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc011276120, 0x1f4)
net/http.Error(0x7fb1c84ad738, 0xc00fcde940, 0xc0135c28c0, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fb1c84ad738, 0xc00fcde940, 0xc01365e800)
net/http.HandlerFunc.ServeHTTP(0xc00bf31380, 0x7fb1c84ad738, 0xc00fcde940, 0xc01365e800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0128beb80, 0x7fb1c84ad738, 0xc00fcde940, 0xc01365e800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00b29b2d0, 0x7fb1c84ad738, 0xc00fcde940, 0xc01365e800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x4356850, 0xe, 0xc00d294ab0, 0xc00b29b2d0, 0x7fb1c84ad738, 0xc00fcde940, 0xc01365e800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fb1c84ad738, 0xc00fcde940, 0xc01365e800)
net/http.HandlerFunc.ServeHTTP(0xc00bcbcc80, 0x7fb1c84ad738, 0xc00fcde940, 0xc01365e800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fb1c84ad738, 0xc00fcde940, 0xc01365e800)
net/http.HandlerFunc.ServeHTTP(0xc00b85f680, 0x7fb1c84ad738, 0xc00fcde940, 0xc01365e800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fb1c84ad738, 0xc00fcde940, 0xc01365e800)
net/http.HandlerFunc.ServeHTTP(0xc00bcbccc0, 0x7fb1c84ad738, 0xc00fcde940, 0xc01365e800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fb1c84ad738, 0xc00fcde940, 0xc01365e700)
net/http.HandlerFunc.ServeHTTP(0xc00fc94780, 0x7fb1c84ad738, 0xc00fcde940, 0xc01365e700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc01353b800, 0xc004ab2220, 0x7374040, 0xc00fcde940, 0xc01365e700)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:53258]
I0513 18:40:26.274692  107570 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.456909ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53256]
I0513 18:40:26.274938  107570 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:horizontal-pod-autoscaler
I0513 18:40:26.295368  107570 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:job-controller: (1.906225ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53256]
I0513 18:40:26.313898  107570 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.905367ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53256]
I0513 18:40:26.314138  107570 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:job-controller
I0513 18:40:26.334888  107570 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:namespace-controller: (2.860013ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53256]
I0513 18:40:26.344858  107570 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0513 18:40:26.344891  107570 healthz.go:184] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0513 18:40:26.345082  107570 wrap.go:47] GET /healthz: (1.173704ms) 500
goroutine 30663 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0133ffe30, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0133ffe30, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0112765a0, 0x1f4)
net/http.Error(0x7fb1c84ad738, 0xc00fcde990, 0xc013886a00, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fb1c84ad738, 0xc00fcde990, 0xc01365ed00)
net/http.HandlerFunc.ServeHTTP(0xc00bf31380, 0x7fb1c84ad738, 0xc00fcde990, 0xc01365ed00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0128beb80, 0x7fb1c84ad738, 0xc00fcde990, 0xc01365ed00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00b29b2d0, 0x7fb1c84ad738, 0xc00fcde990, 0xc01365ed00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x4356850, 0xe, 0xc00d294ab0, 0xc00b29b2d0, 0x7fb1c84ad738, 0xc00fcde990, 0xc01365ed00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fb1c84ad738, 0xc00fcde990, 0xc01365ed00)
net/http.HandlerFunc.ServeHTTP(0xc00bcbcc80, 0x7fb1c84ad738, 0xc00fcde990, 0xc01365ed00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fb1c84ad738, 0xc00fcde990, 0xc01365ed00)
net/http.HandlerFunc.ServeHTTP(0xc00b85f680, 0x7fb1c84ad738, 0xc00fcde990, 0xc01365ed00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fb1c84ad738, 0xc00fcde990, 0xc01365ed00)
net/http.HandlerFunc.ServeHTTP(0xc00bcbccc0, 0x7fb1c84ad738, 0xc00fcde990, 0xc01365ed00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fb1c84ad738, 0xc00fcde990, 0xc01365ec00)
net/http.HandlerFunc.ServeHTTP(0xc00fc94780, 0x7fb1c84ad738, 0xc00fcde990, 0xc01365ec00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc01353bb00, 0xc004ab2220, 0x7374040, 0xc00fcde990, 0xc01365ec00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53256]
I0513 18:40:26.353798  107570 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.794502ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53256]
I0513 18:40:26.354122  107570 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:namespace-controller
I0513 18:40:26.373187  107570 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:node-controller: (1.196364ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53256]
I0513 18:40:26.373729  107570 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0513 18:40:26.373756  107570 healthz.go:184] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0513 18:40:26.373969  107570 wrap.go:47] GET /healthz: (1.09218ms) 500
goroutine 30591 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0136abdc0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0136abdc0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0112c8e40, 0x1f4)
net/http.Error(0x7fb1c84ad738, 0xc013658670, 0xc0135c3040, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fb1c84ad738, 0xc013658670, 0xc0138e6d00)
net/http.HandlerFunc.ServeHTTP(0xc00bf31380, 0x7fb1c84ad738, 0xc013658670, 0xc0138e6d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0128beb80, 0x7fb1c84ad738, 0xc013658670, 0xc0138e6d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00b29b2d0, 0x7fb1c84ad738, 0xc013658670, 0xc0138e6d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x4356850, 0xe, 0xc00d294ab0, 0xc00b29b2d0, 0x7fb1c84ad738, 0xc013658670, 0xc0138e6d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fb1c84ad738, 0xc013658670, 0xc0138e6d00)
net/http.HandlerFunc.ServeHTTP(0xc00bcbcc80, 0x7fb1c84ad738, 0xc013658670, 0xc0138e6d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fb1c84ad738, 0xc013658670, 0xc0138e6d00)
net/http.HandlerFunc.ServeHTTP(0xc00b85f680, 0x7fb1c84ad738, 0xc013658670, 0xc0138e6d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fb1c84ad738, 0xc013658670, 0xc0138e6d00)
net/http.HandlerFunc.ServeHTTP(0xc00bcbccc0, 0x7fb1c84ad738, 0xc013658670, 0xc0138e6d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fb1c84ad738, 0xc013658670, 0xc0138e6c00)
net/http.HandlerFunc.ServeHTTP(0xc00fc94780, 0x7fb1c84ad738, 0xc013658670, 0xc0138e6c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc0138a6ea0, 0xc004ab2220, 0x7374040, 0xc013658670, 0xc0138e6c00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:53258]
I0513 18:40:26.393906  107570 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.76765ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53258]
I0513 18:40:26.394146  107570 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:node-controller
I0513 18:40:26.413494  107570 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:persistent-volume-binder: (1.402687ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53258]
I0513 18:40:26.434569  107570 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.39004ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53258]
I0513 18:40:26.434890  107570 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:persistent-volume-binder
I0513 18:40:26.444928  107570 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0513 18:40:26.444971  107570 healthz.go:184] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0513 18:40:26.445142  107570 wrap.go:47] GET /healthz: (1.233504ms) 500
goroutine 30711 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0139662a0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0139662a0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc011303ea0, 0x1f4)
net/http.Error(0x7fb1c84ad738, 0xc010baee70, 0xc0135c3540, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fb1c84ad738, 0xc010baee70, 0xc013855700)
net/http.HandlerFunc.ServeHTTP(0xc00bf31380, 0x7fb1c84ad738, 0xc010baee70, 0xc013855700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0128beb80, 0x7fb1c84ad738, 0xc010baee70, 0xc013855700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00b29b2d0, 0x7fb1c84ad738, 0xc010baee70, 0xc013855700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x4356850, 0xe, 0xc00d294ab0, 0xc00b29b2d0, 0x7fb1c84ad738, 0xc010baee70, 0xc013855700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fb1c84ad738, 0xc010baee70, 0xc013855700)
net/http.HandlerFunc.ServeHTTP(0xc00bcbcc80, 0x7fb1c84ad738, 0xc010baee70, 0xc013855700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fb1c84ad738, 0xc010baee70, 0xc013855700)
net/http.HandlerFunc.ServeHTTP(0xc00b85f680, 0x7fb1c84ad738, 0xc010baee70, 0xc013855700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fb1c84ad738, 0xc010baee70, 0xc013855700)
net/http.HandlerFunc.ServeHTTP(0xc00bcbccc0, 0x7fb1c84ad738, 0xc010baee70, 0xc013855700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fb1c84ad738, 0xc010baee70, 0xc013855600)
net/http.HandlerFunc.ServeHTTP(0xc00fc94780, 0x7fb1c84ad738, 0xc010baee70, 0xc013855600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc01394a9c0, 0xc004ab2220, 0x7374040, 0xc010baee70, 0xc013855600)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53258]
I0513 18:40:26.454453  107570 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pod-garbage-collector: (1.258982ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53258]
I0513 18:40:26.473963  107570 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0513 18:40:26.473998  107570 healthz.go:184] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0513 18:40:26.474153  107570 wrap.go:47] GET /healthz: (1.235704ms) 500
goroutine 30713 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc013966380, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc013966380, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0113da560, 0x1f4)
net/http.Error(0x7fb1c84ad738, 0xc010baeea8, 0xc013886f00, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fb1c84ad738, 0xc010baeea8, 0xc013855d00)
net/http.HandlerFunc.ServeHTTP(0xc00bf31380, 0x7fb1c84ad738, 0xc010baeea8, 0xc013855d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0128beb80, 0x7fb1c84ad738, 0xc010baeea8, 0xc013855d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00b29b2d0, 0x7fb1c84ad738, 0xc010baeea8, 0xc013855d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x4356850, 0xe, 0xc00d294ab0, 0xc00b29b2d0, 0x7fb1c84ad738, 0xc010baeea8, 0xc013855d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fb1c84ad738, 0xc010baeea8, 0xc013855d00)
net/http.HandlerFunc.ServeHTTP(0xc00bcbcc80, 0x7fb1c84ad738, 0xc010baeea8, 0xc013855d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fb1c84ad738, 0xc010baeea8, 0xc013855d00)
net/http.HandlerFunc.ServeHTTP(0xc00b85f680, 0x7fb1c84ad738, 0xc010baeea8, 0xc013855d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fb1c84ad738, 0xc010baeea8, 0xc013855d00)
net/http.HandlerFunc.ServeHTTP(0xc00bcbccc0, 0x7fb1c84ad738, 0xc010baeea8, 0xc013855d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fb1c84ad738, 0xc010baeea8, 0xc013855c00)
net/http.HandlerFunc.ServeHTTP(0xc00fc94780, 0x7fb1c84ad738, 0xc010baeea8, 0xc013855c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc01394afc0, 0xc004ab2220, 0x7374040, 0xc010baeea8, 0xc013855c00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:53256]
I0513 18:40:26.479643  107570 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (7.598822ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53258]
I0513 18:40:26.479945  107570 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pod-garbage-collector
I0513 18:40:26.493167  107570 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:replicaset-controller: (1.174724ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53258]
I0513 18:40:26.514251  107570 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.171135ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53258]
I0513 18:40:26.514486  107570 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:replicaset-controller
I0513 18:40:26.533451  107570 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:replication-controller: (1.361358ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53258]
I0513 18:40:26.545052  107570 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0513 18:40:26.545089  107570 healthz.go:184] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0513 18:40:26.545270  107570 wrap.go:47] GET /healthz: (1.300115ms) 500
goroutine 30657 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0138bc9a0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0138bc9a0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc01132c1e0, 0x1f4)
net/http.Error(0x7fb1c84ad738, 0xc00931d388, 0xc0139ea140, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fb1c84ad738, 0xc00931d388, 0xc013899f00)
net/http.HandlerFunc.ServeHTTP(0xc00bf31380, 0x7fb1c84ad738, 0xc00931d388, 0xc013899f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0128beb80, 0x7fb1c84ad738, 0xc00931d388, 0xc013899f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00b29b2d0, 0x7fb1c84ad738, 0xc00931d388, 0xc013899f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x4356850, 0xe, 0xc00d294ab0, 0xc00b29b2d0, 0x7fb1c84ad738, 0xc00931d388, 0xc013899f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fb1c84ad738, 0xc00931d388, 0xc013899f00)
net/http.HandlerFunc.ServeHTTP(0xc00bcbcc80, 0x7fb1c84ad738, 0xc00931d388, 0xc013899f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fb1c84ad738, 0xc00931d388, 0xc013899f00)
net/http.HandlerFunc.ServeHTTP(0xc00b85f680, 0x7fb1c84ad738, 0xc00931d388, 0xc013899f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fb1c84ad738, 0xc00931d388, 0xc013899f00)
net/http.HandlerFunc.ServeHTTP(0xc00bcbccc0, 0x7fb1c84ad738, 0xc00931d388, 0xc013899f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fb1c84ad738, 0xc00931d388, 0xc013899e00)
net/http.HandlerFunc.ServeHTTP(0xc00fc94780, 0x7fb1c84ad738, 0xc00931d388, 0xc013899e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc0138c6960, 0xc004ab2220, 0x7374040, 0xc00931d388, 0xc013899e00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53258]
I0513 18:40:26.554629  107570 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.639162ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53258]
I0513 18:40:26.555079  107570 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:replication-controller
I0513 18:40:26.573355  107570 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:resourcequota-controller: (1.289389ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53258]
I0513 18:40:26.573739  107570 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0513 18:40:26.573792  107570 healthz.go:184] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0513 18:40:26.573998  107570 wrap.go:47] GET /healthz: (1.119455ms) 500
goroutine 30755 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc01393f1f0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc01393f1f0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc011401a20, 0x1f4)
net/http.Error(0x7fb1c84ad738, 0xc00fcded28, 0xc0139ea780, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fb1c84ad738, 0xc00fcded28, 0xc013a1e200)
net/http.HandlerFunc.ServeHTTP(0xc00bf31380, 0x7fb1c84ad738, 0xc00fcded28, 0xc013a1e200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0128beb80, 0x7fb1c84ad738, 0xc00fcded28, 0xc013a1e200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00b29b2d0, 0x7fb1c84ad738, 0xc00fcded28, 0xc013a1e200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x4356850, 0xe, 0xc00d294ab0, 0xc00b29b2d0, 0x7fb1c84ad738, 0xc00fcded28, 0xc013a1e200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fb1c84ad738, 0xc00fcded28, 0xc013a1e200)
net/http.HandlerFunc.ServeHTTP(0xc00bcbcc80, 0x7fb1c84ad738, 0xc00fcded28, 0xc013a1e200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fb1c84ad738, 0xc00fcded28, 0xc013a1e200)
net/http.HandlerFunc.ServeHTTP(0xc00b85f680, 0x7fb1c84ad738, 0xc00fcded28, 0xc013a1e200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fb1c84ad738, 0xc00fcded28, 0xc013a1e200)
net/http.HandlerFunc.ServeHTTP(0xc00bcbccc0, 0x7fb1c84ad738, 0xc00fcded28, 0xc013a1e200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fb1c84ad738, 0xc00fcded28, 0xc013a1e100)
net/http.HandlerFunc.ServeHTTP(0xc00fc94780, 0x7fb1c84ad738, 0xc00fcded28, 0xc013a1e100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc013947140, 0xc004ab2220, 0x7374040, 0xc00fcded28, 0xc013a1e100)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:53256]
I0513 18:40:26.594861  107570 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.754127ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53256]
I0513 18:40:26.595150  107570 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:resourcequota-controller
I0513 18:40:26.613308  107570 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:route-controller: (1.249038ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53256]
I0513 18:40:26.634111  107570 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.01652ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53256]
I0513 18:40:26.634364  107570 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:route-controller
I0513 18:40:26.645186  107570 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0513 18:40:26.645218  107570 healthz.go:184] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0513 18:40:26.646801  107570 wrap.go:47] GET /healthz: (1.483261ms) 500
goroutine 30774 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0138bd420, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0138bd420, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc01132d7c0, 0x1f4)
net/http.Error(0x7fb1c84ad738, 0xc00931d4f0, 0xc0137b4b40, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fb1c84ad738, 0xc00931d4f0, 0xc0139e4d00)
net/http.HandlerFunc.ServeHTTP(0xc00bf31380, 0x7fb1c84ad738, 0xc00931d4f0, 0xc0139e4d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0128beb80, 0x7fb1c84ad738, 0xc00931d4f0, 0xc0139e4d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00b29b2d0, 0x7fb1c84ad738, 0xc00931d4f0, 0xc0139e4d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x4356850, 0xe, 0xc00d294ab0, 0xc00b29b2d0, 0x7fb1c84ad738, 0xc00931d4f0, 0xc0139e4d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fb1c84ad738, 0xc00931d4f0, 0xc0139e4d00)
net/http.HandlerFunc.ServeHTTP(0xc00bcbcc80, 0x7fb1c84ad738, 0xc00931d4f0, 0xc0139e4d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fb1c84ad738, 0xc00931d4f0, 0xc0139e4d00)
net/http.HandlerFunc.ServeHTTP(0xc00b85f680, 0x7fb1c84ad738, 0xc00931d4f0, 0xc0139e4d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fb1c84ad738, 0xc00931d4f0, 0xc0139e4d00)
net/http.HandlerFunc.ServeHTTP(0xc00bcbccc0, 0x7fb1c84ad738, 0xc00931d4f0, 0xc0139e4d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fb1c84ad738, 0xc00931d4f0, 0xc0139e4c00)
net/http.HandlerFunc.ServeHTTP(0xc00fc94780, 0x7fb1c84ad738, 0xc00931d4f0, 0xc0139e4c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc0138c7020, 0xc004ab2220, 0x7374040, 0xc00931d4f0, 0xc0139e4c00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53256]
I0513 18:40:26.652946  107570 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:service-account-controller: (1.041026ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53256]
I0513 18:40:26.674023  107570 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.971832ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53256]
I0513 18:40:26.674214  107570 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0513 18:40:26.674244  107570 healthz.go:184] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0513 18:40:26.675895  107570 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:service-account-controller
I0513 18:40:26.676159  107570 wrap.go:47] GET /healthz: (3.325444ms) 500
goroutine 30700 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc013835110, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc013835110, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0114051c0, 0x1f4)
net/http.Error(0x7fb1c84ad738, 0xc0137ca3e0, 0xc013a7a000, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fb1c84ad738, 0xc0137ca3e0, 0xc013a01600)
net/http.HandlerFunc.ServeHTTP(0xc00bf31380, 0x7fb1c84ad738, 0xc0137ca3e0, 0xc013a01600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0128beb80, 0x7fb1c84ad738, 0xc0137ca3e0, 0xc013a01600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00b29b2d0, 0x7fb1c84ad738, 0xc0137ca3e0, 0xc013a01600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x4356850, 0xe, 0xc00d294ab0, 0xc00b29b2d0, 0x7fb1c84ad738, 0xc0137ca3e0, 0xc013a01600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fb1c84ad738, 0xc0137ca3e0, 0xc013a01600)
net/http.HandlerFunc.ServeHTTP(0xc00bcbcc80, 0x7fb1c84ad738, 0xc0137ca3e0, 0xc013a01600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fb1c84ad738, 0xc0137ca3e0, 0xc013a01600)
net/http.HandlerFunc.ServeHTTP(0xc00b85f680, 0x7fb1c84ad738, 0xc0137ca3e0, 0xc013a01600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fb1c84ad738, 0xc0137ca3e0, 0xc013a01600)
net/http.HandlerFunc.ServeHTTP(0xc00bcbccc0, 0x7fb1c84ad738, 0xc0137ca3e0, 0xc013a01600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fb1c84ad738, 0xc0137ca3e0, 0xc013a01500)
net/http.HandlerFunc.ServeHTTP(0xc00fc94780, 0x7fb1c84ad738, 0xc0137ca3e0, 0xc013a01500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc0138d56e0, 0xc004ab2220, 0x7374040, 0xc0137ca3e0, 0xc013a01500)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:53258]
I0513 18:40:26.693693  107570 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:service-controller: (1.624668ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53258]
I0513 18:40:26.713969  107570 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.925508ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53258]
I0513 18:40:26.714225  107570 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:service-controller
I0513 18:40:26.733163  107570 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:statefulset-controller: (1.113925ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53258]
I0513 18:40:26.744770  107570 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0513 18:40:26.744838  107570 healthz.go:184] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0513 18:40:26.745048  107570 wrap.go:47] GET /healthz: (1.116793ms) 500
goroutine 30728 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0139ca5b0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0139ca5b0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc011468a60, 0x1f4)
net/http.Error(0x7fb1c84ad738, 0xc0079bee20, 0xc013887400, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fb1c84ad738, 0xc0079bee20, 0xc0139c5400)
net/http.HandlerFunc.ServeHTTP(0xc00bf31380, 0x7fb1c84ad738, 0xc0079bee20, 0xc0139c5400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0128beb80, 0x7fb1c84ad738, 0xc0079bee20, 0xc0139c5400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00b29b2d0, 0x7fb1c84ad738, 0xc0079bee20, 0xc0139c5400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x4356850, 0xe, 0xc00d294ab0, 0xc00b29b2d0, 0x7fb1c84ad738, 0xc0079bee20, 0xc0139c5400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fb1c84ad738, 0xc0079bee20, 0xc0139c5400)
net/http.HandlerFunc.ServeHTTP(0xc00bcbcc80, 0x7fb1c84ad738, 0xc0079bee20, 0xc0139c5400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fb1c84ad738, 0xc0079bee20, 0xc0139c5400)
net/http.HandlerFunc.ServeHTTP(0xc00b85f680, 0x7fb1c84ad738, 0xc0079bee20, 0xc0139c5400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fb1c84ad738, 0xc0079bee20, 0xc0139c5400)
net/http.HandlerFunc.ServeHTTP(0xc00bcbccc0, 0x7fb1c84ad738, 0xc0079bee20, 0xc0139c5400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fb1c84ad738, 0xc0079bee20, 0xc0139c5300)
net/http.HandlerFunc.ServeHTTP(0xc00fc94780, 0x7fb1c84ad738, 0xc0079bee20, 0xc0139c5300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc0138b9e60, 0xc004ab2220, 0x7374040, 0xc0079bee20, 0xc0139c5300)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53258]
I0513 18:40:26.753709  107570 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.634821ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53258]
I0513 18:40:26.753992  107570 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:statefulset-controller
I0513 18:40:26.773191  107570 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:ttl-controller: (1.184868ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53258]
I0513 18:40:26.773799  107570 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0513 18:40:26.773887  107570 healthz.go:184] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0513 18:40:26.774080  107570 wrap.go:47] GET /healthz: (1.151402ms) 500
goroutine 30786 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc013835650, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc013835650, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc011524220, 0x1f4)
net/http.Error(0x7fb1c84ad738, 0xc0137ca4c0, 0xc0139eadc0, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fb1c84ad738, 0xc0137ca4c0, 0xc013ae4700)
net/http.HandlerFunc.ServeHTTP(0xc00bf31380, 0x7fb1c84ad738, 0xc0137ca4c0, 0xc013ae4700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0128beb80, 0x7fb1c84ad738, 0xc0137ca4c0, 0xc013ae4700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00b29b2d0, 0x7fb1c84ad738, 0xc0137ca4c0, 0xc013ae4700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x4356850, 0xe, 0xc00d294ab0, 0xc00b29b2d0, 0x7fb1c84ad738, 0xc0137ca4c0, 0xc013ae4700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fb1c84ad738, 0xc0137ca4c0, 0xc013ae4700)
net/http.HandlerFunc.ServeHTTP(0xc00bcbcc80, 0x7fb1c84ad738, 0xc0137ca4c0, 0xc013ae4700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fb1c84ad738, 0xc0137ca4c0, 0xc013ae4700)
net/http.HandlerFunc.ServeHTTP(0xc00b85f680, 0x7fb1c84ad738, 0xc0137ca4c0, 0xc013ae4700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fb1c84ad738, 0xc0137ca4c0, 0xc013ae4700)
net/http.HandlerFunc.ServeHTTP(0xc00bcbccc0, 0x7fb1c84ad738, 0xc0137ca4c0, 0xc013ae4700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fb1c84ad738, 0xc0137ca4c0, 0xc013ae4600)
net/http.HandlerFunc.ServeHTTP(0xc00fc94780, 0x7fb1c84ad738, 0xc0137ca4c0, 0xc013ae4600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc013aac5a0, 0xc004ab2220, 0x7374040, 0xc0137ca4c0, 0xc013ae4600)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:53256]
I0513 18:40:26.794220  107570 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.170162ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53256]
I0513 18:40:26.794533  107570 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:ttl-controller
I0513 18:40:26.813583  107570 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:certificate-controller: (1.511831ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53256]
I0513 18:40:26.834297  107570 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.198989ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53256]
I0513 18:40:26.834548  107570 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:certificate-controller
I0513 18:40:26.845040  107570 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0513 18:40:26.845088  107570 healthz.go:184] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0513 18:40:26.845269  107570 wrap.go:47] GET /healthz: (1.364401ms) 500
goroutine 30764 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc013b18380, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc013b18380, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0115446c0, 0x1f4)
net/http.Error(0x7fb1c84ad738, 0xc00fcdefb0, 0xc0135c3b80, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fb1c84ad738, 0xc00fcdefb0, 0xc013b38000)
net/http.HandlerFunc.ServeHTTP(0xc00bf31380, 0x7fb1c84ad738, 0xc00fcdefb0, 0xc013b38000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0128beb80, 0x7fb1c84ad738, 0xc00fcdefb0, 0xc013b38000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00b29b2d0, 0x7fb1c84ad738, 0xc00fcdefb0, 0xc013b38000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x4356850, 0xe, 0xc00d294ab0, 0xc00b29b2d0, 0x7fb1c84ad738, 0xc00fcdefb0, 0xc013b38000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fb1c84ad738, 0xc00fcdefb0, 0xc013b38000)
net/http.HandlerFunc.ServeHTTP(0xc00bcbcc80, 0x7fb1c84ad738, 0xc00fcdefb0, 0xc013b38000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fb1c84ad738, 0xc00fcdefb0, 0xc013b38000)
net/http.HandlerFunc.ServeHTTP(0xc00b85f680, 0x7fb1c84ad738, 0xc00fcdefb0, 0xc013b38000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fb1c84ad738, 0xc00fcdefb0, 0xc013b38000)
net/http.HandlerFunc.ServeHTTP(0xc00bcbccc0, 0x7fb1c84ad738, 0xc00fcdefb0, 0xc013b38000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fb1c84ad738, 0xc00fcdefb0, 0xc013a1ff00)
net/http.HandlerFunc.ServeHTTP(0xc00fc94780, 0x7fb1c84ad738, 0xc00fcdefb0, 0xc013a1ff00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc013af63c0, 0xc004ab2220, 0x7374040, 0xc00fcdefb0, 0xc013a1ff00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53256]
I0513 18:40:26.853418  107570 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pvc-protection-controller: (1.432114ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53256]
I0513 18:40:26.874018  107570 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0513 18:40:26.874053  107570 healthz.go:184] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0513 18:40:26.874200  107570 wrap.go:47] GET /healthz: (1.303447ms) 500
goroutine 30745 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc013979420, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc013979420, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc01158a000, 0x1f4)
net/http.Error(0x7fb1c84ad738, 0xc0136589e8, 0xc013a7a500, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fb1c84ad738, 0xc0136589e8, 0xc01399d800)
net/http.HandlerFunc.ServeHTTP(0xc00bf31380, 0x7fb1c84ad738, 0xc0136589e8, 0xc01399d800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0128beb80, 0x7fb1c84ad738, 0xc0136589e8, 0xc01399d800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00b29b2d0, 0x7fb1c84ad738, 0xc0136589e8, 0xc01399d800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x4356850, 0xe, 0xc00d294ab0, 0xc00b29b2d0, 0x7fb1c84ad738, 0xc0136589e8, 0xc01399d800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fb1c84ad738, 0xc0136589e8, 0xc01399d800)
net/http.HandlerFunc.ServeHTTP(0xc00bcbcc80, 0x7fb1c84ad738, 0xc0136589e8, 0xc01399d800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fb1c84ad738, 0xc0136589e8, 0xc01399d800)
net/http.HandlerFunc.ServeHTTP(0xc00b85f680, 0x7fb1c84ad738, 0xc0136589e8, 0xc01399d800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fb1c84ad738, 0xc0136589e8, 0xc01399d800)
net/http.HandlerFunc.ServeHTTP(0xc00bcbccc0, 0x7fb1c84ad738, 0xc0136589e8, 0xc01399d800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fb1c84ad738, 0xc0136589e8, 0xc01399d700)
net/http.HandlerFunc.ServeHTTP(0xc00fc94780, 0x7fb1c84ad738, 0xc0136589e8, 0xc01399d700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc013aa2960, 0xc004ab2220, 0x7374040, 0xc0136589e8, 0xc01399d700)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:53258]
I0513 18:40:26.874670  107570 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.571027ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53256]
I0513 18:40:26.875004  107570 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pvc-protection-controller
I0513 18:40:26.893178  107570 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pv-protection-controller: (1.157205ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53256]
I0513 18:40:26.914319  107570 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.253254ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53256]
I0513 18:40:26.914630  107570 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pv-protection-controller
I0513 18:40:26.933115  107570 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/roles/system:controller:bootstrap-signer: (1.142334ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53256]
I0513 18:40:26.935089  107570 wrap.go:47] GET /api/v1/namespaces/kube-public: (1.35013ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53256]
I0513 18:40:26.944844  107570 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0513 18:40:26.944874  107570 healthz.go:184] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0513 18:40:26.945032  107570 wrap.go:47] GET /healthz: (1.179487ms) 500
goroutine 30720 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc013966a80, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc013966a80, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0113dbf60, 0x1f4)
net/http.Error(0x7fb1c84ad738, 0xc010baf008, 0xc013887a40, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fb1c84ad738, 0xc010baf008, 0xc013b53100)
net/http.HandlerFunc.ServeHTTP(0xc00bf31380, 0x7fb1c84ad738, 0xc010baf008, 0xc013b53100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0128beb80, 0x7fb1c84ad738, 0xc010baf008, 0xc013b53100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00b29b2d0, 0x7fb1c84ad738, 0xc010baf008, 0xc013b53100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x4356850, 0xe, 0xc00d294ab0, 0xc00b29b2d0, 0x7fb1c84ad738, 0xc010baf008, 0xc013b53100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fb1c84ad738, 0xc010baf008, 0xc013b53100)
net/http.HandlerFunc.ServeHTTP(0xc00bcbcc80, 0x7fb1c84ad738, 0xc010baf008, 0xc013b53100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fb1c84ad738, 0xc010baf008, 0xc013b53100)
net/http.HandlerFunc.ServeHTTP(0xc00b85f680, 0x7fb1c84ad738, 0xc010baf008, 0xc013b53100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fb1c84ad738, 0xc010baf008, 0xc013b53100)
net/http.HandlerFunc.ServeHTTP(0xc00bcbccc0, 0x7fb1c84ad738, 0xc010baf008, 0xc013b53100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fb1c84ad738, 0xc010baf008, 0xc013b53000)
net/http.HandlerFunc.ServeHTTP(0xc00fc94780, 0x7fb1c84ad738, 0xc010baf008, 0xc013b53000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc01394b9e0, 0xc004ab2220, 0x7374040, 0xc010baf008, 0xc013b53000)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53256]
I0513 18:40:26.953965  107570 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/roles: (2.049844ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53256]
I0513 18:40:26.954309  107570 storage_rbac.go:254] created role.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-public
I0513 18:40:26.973280  107570 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/extension-apiserver-authentication-reader: (1.275896ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53256]
I0513 18:40:26.973755  107570 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0513 18:40:26.973783  107570 healthz.go:184] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0513 18:40:26.973967  107570 wrap.go:47] GET /healthz: (1.033696ms) 500
goroutine 30796 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc013b9c1c0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc013b9c1c0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0115ac680, 0x1f4)
net/http.Error(0x7fb1c84ad738, 0xc0137ca6a8, 0xc0139eb2c0, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fb1c84ad738, 0xc0137ca6a8, 0xc013b96700)
net/http.HandlerFunc.ServeHTTP(0xc00bf31380, 0x7fb1c84ad738, 0xc0137ca6a8, 0xc013b96700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0128beb80, 0x7fb1c84ad738, 0xc0137ca6a8, 0xc013b96700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00b29b2d0, 0x7fb1c84ad738, 0xc0137ca6a8, 0xc013b96700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x4356850, 0xe, 0xc00d294ab0, 0xc00b29b2d0, 0x7fb1c84ad738, 0xc0137ca6a8, 0xc013b96700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fb1c84ad738, 0xc0137ca6a8, 0xc013b96700)
net/http.HandlerFunc.ServeHTTP(0xc00bcbcc80, 0x7fb1c84ad738, 0xc0137ca6a8, 0xc013b96700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fb1c84ad738, 0xc0137ca6a8, 0xc013b96700)
net/http.HandlerFunc.ServeHTTP(0xc00b85f680, 0x7fb1c84ad738, 0xc0137ca6a8, 0xc013b96700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fb1c84ad738, 0xc0137ca6a8, 0xc013b96700)
net/http.HandlerFunc.ServeHTTP(0xc00bcbccc0, 0x7fb1c84ad738, 0xc0137ca6a8, 0xc013b96700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fb1c84ad738, 0xc0137ca6a8, 0xc013b96600)
net/http.HandlerFunc.ServeHTTP(0xc00fc94780, 0x7fb1c84ad738, 0xc0137ca6a8, 0xc013b96600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc013aad980, 0xc004ab2220, 0x7374040, 0xc0137ca6a8, 0xc013b96600)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:53258]
I0513 18:40:26.975051  107570 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.114126ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53256]
I0513 18:40:26.994091  107570 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (1.934221ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53256]
I0513 18:40:26.994371  107570 storage_rbac.go:254] created role.rbac.authorization.k8s.io/extension-apiserver-authentication-reader in kube-system
I0513 18:40:27.013643  107570 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:bootstrap-signer: (1.556466ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53256]
I0513 18:40:27.015478  107570 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.336385ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53256]
I0513 18:40:27.035947  107570 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.08342ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53256]
I0513 18:40:27.036255  107570 storage_rbac.go:254] created role.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-system
I0513 18:40:27.044897  107570 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0513 18:40:27.044930  107570 healthz.go:184] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0513 18:40:27.046851  107570 wrap.go:47] GET /healthz: (2.935002ms) 500
goroutine 30812 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0136eb960, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0136eb960, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc011629700, 0x1f4)
net/http.Error(0x7fb1c84ad738, 0xc0134d0850, 0xc013c26000, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fb1c84ad738, 0xc0134d0850, 0xc013b7fe00)
net/http.HandlerFunc.ServeHTTP(0xc00bf31380, 0x7fb1c84ad738, 0xc0134d0850, 0xc013b7fe00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0128beb80, 0x7fb1c84ad738, 0xc0134d0850, 0xc013b7fe00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00b29b2d0, 0x7fb1c84ad738, 0xc0134d0850, 0xc013b7fe00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x4356850, 0xe, 0xc00d294ab0, 0xc00b29b2d0, 0x7fb1c84ad738, 0xc0134d0850, 0xc013b7fe00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fb1c84ad738, 0xc0134d0850, 0xc013b7fe00)
net/http.HandlerFunc.ServeHTTP(0xc00bcbcc80, 0x7fb1c84ad738, 0xc0134d0850, 0xc013b7fe00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fb1c84ad738, 0xc0134d0850, 0xc013b7fe00)
net/http.HandlerFunc.ServeHTTP(0xc00b85f680, 0x7fb1c84ad738, 0xc0134d0850, 0xc013b7fe00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fb1c84ad738, 0xc0134d0850, 0xc013b7fe00)
net/http.HandlerFunc.ServeHTTP(0xc00bcbccc0, 0x7fb1c84ad738, 0xc0134d0850, 0xc013b7fe00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fb1c84ad738, 0xc0134d0850, 0xc013b7fd00)
net/http.HandlerFunc.ServeHTTP(0xc00fc94780, 0x7fb1c84ad738, 0xc0134d0850, 0xc013b7fd00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc013bca780, 0xc004ab2220, 0x7374040, 0xc0134d0850, 0xc013b7fd00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53256]
I0513 18:40:27.053181  107570 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:cloud-provider: (1.231804ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53256]
I0513 18:40:27.055146  107570 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.417781ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53256]
I0513 18:40:27.073768  107570 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (1.815489ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53256]
I0513 18:40:27.073806  107570 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0513 18:40:27.073873  107570 healthz.go:184] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0513 18:40:27.074053  107570 storage_rbac.go:254] created role.rbac.authorization.k8s.io/system:controller:cloud-provider in kube-system
I0513 18:40:27.074058  107570 wrap.go:47] GET /healthz: (1.241453ms) 500
goroutine 30820 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0139cbab0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0139cbab0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc01156bf20, 0x1f4)
net/http.Error(0x7fb1c84ad738, 0xc0079bf208, 0xc013c6e140, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fb1c84ad738, 0xc0079bf208, 0xc013c56600)
net/http.HandlerFunc.ServeHTTP(0xc00bf31380, 0x7fb1c84ad738, 0xc0079bf208, 0xc013c56600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0128beb80, 0x7fb1c84ad738, 0xc0079bf208, 0xc013c56600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00b29b2d0, 0x7fb1c84ad738, 0xc0079bf208, 0xc013c56600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x4356850, 0xe, 0xc00d294ab0, 0xc00b29b2d0, 0x7fb1c84ad738, 0xc0079bf208, 0xc013c56600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fb1c84ad738, 0xc0079bf208, 0xc013c56600)
net/http.HandlerFunc.ServeHTTP(0xc00bcbcc80, 0x7fb1c84ad738, 0xc0079bf208, 0xc013c56600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fb1c84ad738, 0xc0079bf208, 0xc013c56600)
net/http.HandlerFunc.ServeHTTP(0xc00b85f680, 0x7fb1c84ad738, 0xc0079bf208, 0xc013c56600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fb1c84ad738, 0xc0079bf208, 0xc013c56600)
net/http.HandlerFunc.ServeHTTP(0xc00bcbccc0, 0x7fb1c84ad738, 0xc0079bf208, 0xc013c56600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fb1c84ad738, 0xc0079bf208, 0xc013c56500)
net/http.HandlerFunc.ServeHTTP(0xc00fc94780, 0x7fb1c84ad738, 0xc0079bf208, 0xc013c56500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc013ac1380, 0xc004ab2220, 0x7374040, 0xc0079bf208, 0xc013c56500)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:53258]
I0513 18:40:27.093186  107570 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:token-cleaner: (1.139519ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53258]
I0513 18:40:27.095069  107570 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.30481ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53258]
I0513 18:40:27.115219  107570 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (3.292591ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53258]
I0513 18:40:27.115537  107570 storage_rbac.go:254] created role.rbac.authorization.k8s.io/system:controller:token-cleaner in kube-system
I0513 18:40:27.133327  107570 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system::leader-locking-kube-controller-manager: (1.293902ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53258]
I0513 18:40:27.135093  107570 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.212369ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53258]
I0513 18:40:27.144573  107570 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0513 18:40:27.144602  107570 healthz.go:184] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0513 18:40:27.144852  107570 wrap.go:47] GET /healthz: (1.0157ms) 500
goroutine 30783 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc013c3c930, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc013c3c930, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc011b4e000, 0x1f4)
net/http.Error(0x7fb1c84ad738, 0xc00931d880, 0xc010a228c0, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fb1c84ad738, 0xc00931d880, 0xc013c13b00)
net/http.HandlerFunc.ServeHTTP(0xc00bf31380, 0x7fb1c84ad738, 0xc00931d880, 0xc013c13b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0128beb80, 0x7fb1c84ad738, 0xc00931d880, 0xc013c13b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00b29b2d0, 0x7fb1c84ad738, 0xc00931d880, 0xc013c13b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x4356850, 0xe, 0xc00d294ab0, 0xc00b29b2d0, 0x7fb1c84ad738, 0xc00931d880, 0xc013c13b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fb1c84ad738, 0xc00931d880, 0xc013c13b00)
net/http.HandlerFunc.ServeHTTP(0xc00bcbcc80, 0x7fb1c84ad738, 0xc00931d880, 0xc013c13b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fb1c84ad738, 0xc00931d880, 0xc013c13b00)
net/http.HandlerFunc.ServeHTTP(0xc00b85f680, 0x7fb1c84ad738, 0xc00931d880, 0xc013c13b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fb1c84ad738, 0xc00931d880, 0xc013c13b00)
net/http.HandlerFunc.ServeHTTP(0xc00bcbccc0, 0x7fb1c84ad738, 0xc00931d880, 0xc013c13b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fb1c84ad738, 0xc00931d880, 0xc013c13a00)
net/http.HandlerFunc.ServeHTTP(0xc00fc94780, 0x7fb1c84ad738, 0xc00931d880, 0xc013c13a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc013c3ed80, 0xc004ab2220, 0x7374040, 0xc00931d880, 0xc013c13a00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53258]
I0513 18:40:27.153773  107570 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (1.834916ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53258]
I0513 18:40:27.154005  107570 storage_rbac.go:254] created role.rbac.authorization.k8s.io/system::leader-locking-kube-controller-manager in kube-system
I0513 18:40:27.173246  107570 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system::leader-locking-kube-scheduler: (1.2264ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53258]
I0513 18:40:27.173762  107570 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0513 18:40:27.173785  107570 healthz.go:184] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0513 18:40:27.175537  107570 wrap.go:47] GET /healthz: (2.645769ms) 500
goroutine 30847 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc013cee070, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc013cee070, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc011a5b7a0, 0x1f4)
net/http.Error(0x7fb1c84ad738, 0xc010baf328, 0xc010a22dc0, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fb1c84ad738, 0xc010baf328, 0xc013cf8300)
net/http.HandlerFunc.ServeHTTP(0xc00bf31380, 0x7fb1c84ad738, 0xc010baf328, 0xc013cf8300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0128beb80, 0x7fb1c84ad738, 0xc010baf328, 0xc013cf8300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00b29b2d0, 0x7fb1c84ad738, 0xc010baf328, 0xc013cf8300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x4356850, 0xe, 0xc00d294ab0, 0xc00b29b2d0, 0x7fb1c84ad738, 0xc010baf328, 0xc013cf8300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fb1c84ad738, 0xc010baf328, 0xc013cf8300)
net/http.HandlerFunc.ServeHTTP(0xc00bcbcc80, 0x7fb1c84ad738, 0xc010baf328, 0xc013cf8300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fb1c84ad738, 0xc010baf328, 0xc013cf8300)
net/http.HandlerFunc.ServeHTTP(0xc00b85f680, 0x7fb1c84ad738, 0xc010baf328, 0xc013cf8300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fb1c84ad738, 0xc010baf328, 0xc013cf8300)
net/http.HandlerFunc.ServeHTTP(0xc00bcbccc0, 0x7fb1c84ad738, 0xc010baf328, 0xc013cf8300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fb1c84ad738, 0xc010baf328, 0xc013cf8200)
net/http.HandlerFunc.ServeHTTP(0xc00fc94780, 0x7fb1c84ad738, 0xc010baf328, 0xc013cf8200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc013c0b680, 0xc004ab2220, 0x7374040, 0xc010baf328, 0xc013cf8200)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:53256]
I0513 18:40:27.176569  107570 wrap.go:47] GET /api/v1/namespaces/kube-system: (2.832618ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53258]
I0513 18:40:27.194960  107570 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.826263ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53258]
I0513 18:40:27.195228  107570 storage_rbac.go:254] created role.rbac.authorization.k8s.io/system::leader-locking-kube-scheduler in kube-system
I0513 18:40:27.213319  107570 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/rolebindings/system:controller:bootstrap-signer: (1.304952ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53258]
I0513 18:40:27.215075  107570 wrap.go:47] GET /api/v1/namespaces/kube-public: (1.296665ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53258]
I0513 18:40:27.234998  107570 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/rolebindings: (2.548793ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53258]
I0513 18:40:27.235637  107570 storage_rbac.go:284] created rolebinding.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-public
I0513 18:40:27.244797  107570 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0513 18:40:27.244858  107570 healthz.go:184] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0513 18:40:27.245009  107570 wrap.go:47] GET /healthz: (1.125236ms) 500
goroutine 30883 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc013d223f0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc013d223f0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0116e3aa0, 0x1f4)
net/http.Error(0x7fb1c84ad738, 0xc0134d09f8, 0xc013c6e780, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fb1c84ad738, 0xc0134d09f8, 0xc013c45700)
net/http.HandlerFunc.ServeHTTP(0xc00bf31380, 0x7fb1c84ad738, 0xc0134d09f8, 0xc013c45700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0128beb80, 0x7fb1c84ad738, 0xc0134d09f8, 0xc013c45700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00b29b2d0, 0x7fb1c84ad738, 0xc0134d09f8, 0xc013c45700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x4356850, 0xe, 0xc00d294ab0, 0xc00b29b2d0, 0x7fb1c84ad738, 0xc0134d09f8, 0xc013c45700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fb1c84ad738, 0xc0134d09f8, 0xc013c45700)
net/http.HandlerFunc.ServeHTTP(0xc00bcbcc80, 0x7fb1c84ad738, 0xc0134d09f8, 0xc013c45700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fb1c84ad738, 0xc0134d09f8, 0xc013c45700)
net/http.HandlerFunc.ServeHTTP(0xc00b85f680, 0x7fb1c84ad738, 0xc0134d09f8, 0xc013c45700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fb1c84ad738, 0xc0134d09f8, 0xc013c45700)
net/http.HandlerFunc.ServeHTTP(0xc00bcbccc0, 0x7fb1c84ad738, 0xc0134d09f8, 0xc013c45700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fb1c84ad738, 0xc0134d09f8, 0xc013c45600)
net/http.HandlerFunc.ServeHTTP(0xc00fc94780, 0x7fb1c84ad738, 0xc0134d09f8, 0xc013c45600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc013bcb620, 0xc004ab2220, 0x7374040, 0xc0134d09f8, 0xc013c45600)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53258]
I0513 18:40:27.253191  107570 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::extension-apiserver-authentication-reader: (1.199702ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53258]
I0513 18:40:27.254900  107570 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.21845ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53258]
I0513 18:40:27.274442  107570 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0513 18:40:27.274496  107570 healthz.go:184] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0513 18:40:27.274612  107570 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.552659ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53258]
I0513 18:40:27.274694  107570 wrap.go:47] GET /healthz: (1.581135ms) 500
goroutine 30829 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc013d2e8c0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc013d2e8c0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc011c376e0, 0x1f4)
net/http.Error(0x7fb1c84ad738, 0xc0079bf3f8, 0xc010a23540, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fb1c84ad738, 0xc0079bf3f8, 0xc013d8a000)
net/http.HandlerFunc.ServeHTTP(0xc00bf31380, 0x7fb1c84ad738, 0xc0079bf3f8, 0xc013d8a000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0128beb80, 0x7fb1c84ad738, 0xc0079bf3f8, 0xc013d8a000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00b29b2d0, 0x7fb1c84ad738, 0xc0079bf3f8, 0xc013d8a000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x4356850, 0xe, 0xc00d294ab0, 0xc00b29b2d0, 0x7fb1c84ad738, 0xc0079bf3f8, 0xc013d8a000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fb1c84ad738, 0xc0079bf3f8, 0xc013d8a000)
net/http.HandlerFunc.ServeHTTP(0xc00bcbcc80, 0x7fb1c84ad738, 0xc0079bf3f8, 0xc013d8a000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fb1c84ad738, 0xc0079bf3f8, 0xc013d8a000)
net/http.HandlerFunc.ServeHTTP(0xc00b85f680, 0x7fb1c84ad738, 0xc0079bf3f8, 0xc013d8a000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fb1c84ad738, 0xc0079bf3f8, 0xc013d8a000)
net/http.HandlerFunc.ServeHTTP(0xc00bcbccc0, 0x7fb1c84ad738, 0xc0079bf3f8, 0xc013d8a000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fb1c84ad738, 0xc0079bf3f8, 0xc013c57f00)
net/http.HandlerFunc.ServeHTTP(0xc00fc94780, 0x7fb1c84ad738, 0xc0079bf3f8, 0xc013c57f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc013d80300, 0xc004ab2220, 0x7374040, 0xc0079bf3f8, 0xc013c57f00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:53256]
I0513 18:40:27.274906  107570 storage_rbac.go:284] created rolebinding.rbac.authorization.k8s.io/system::extension-apiserver-authentication-reader in kube-system
I0513 18:40:27.293511  107570 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::leader-locking-kube-controller-manager: (1.477537ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53256]
I0513 18:40:27.296075  107570 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.819879ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53256]
I0513 18:40:27.314145  107570 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.067875ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53256]
I0513 18:40:27.314399  107570 storage_rbac.go:284] created rolebinding.rbac.authorization.k8s.io/system::leader-locking-kube-controller-manager in kube-system
I0513 18:40:27.335288  107570 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::leader-locking-kube-scheduler: (3.170175ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53256]
I0513 18:40:27.337330  107570 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.542002ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53256]
I0513 18:40:27.344687  107570 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0513 18:40:27.344850  107570 healthz.go:184] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0513 18:40:27.345010  107570 wrap.go:47] GET /healthz: (1.165141ms) 500
goroutine 30889 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc013d231f0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc013d231f0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc011d356e0, 0x1f4)
net/http.Error(0x7fb1c84ad738, 0xc0134d0b00, 0xc013c6ec80, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fb1c84ad738, 0xc0134d0b00, 0xc013d68a00)
net/http.HandlerFunc.ServeHTTP(0xc00bf31380, 0x7fb1c84ad738, 0xc0134d0b00, 0xc013d68a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0128beb80, 0x7fb1c84ad738, 0xc0134d0b00, 0xc013d68a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00b29b2d0, 0x7fb1c84ad738, 0xc0134d0b00, 0xc013d68a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x4356850, 0xe, 0xc00d294ab0, 0xc00b29b2d0, 0x7fb1c84ad738, 0xc0134d0b00, 0xc013d68a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fb1c84ad738, 0xc0134d0b00, 0xc013d68a00)
net/http.HandlerFunc.ServeHTTP(0xc00bcbcc80, 0x7fb1c84ad738, 0xc0134d0b00, 0xc013d68a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fb1c84ad738, 0xc0134d0b00, 0xc013d68a00)
net/http.HandlerFunc.ServeHTTP(0xc00b85f680, 0x7fb1c84ad738, 0xc0134d0b00, 0xc013d68a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fb1c84ad738, 0xc0134d0b00, 0xc013d68a00)
net/http.HandlerFunc.ServeHTTP(0xc00bcbccc0, 0x7fb1c84ad738, 0xc0134d0b00, 0xc013d68a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fb1c84ad738, 0xc0134d0b00, 0xc013d68900)
net/http.HandlerFunc.ServeHTTP(0xc00fc94780, 0x7fb1c84ad738, 0xc0134d0b00, 0xc013d68900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc013bcbec0, 0xc004ab2220, 0x7374040, 0xc0134d0b00, 0xc013d68900)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53256]
I0513 18:40:27.354118  107570 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.193943ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53256]
I0513 18:40:27.354323  107570 storage_rbac.go:284] created rolebinding.rbac.authorization.k8s.io/system::leader-locking-kube-scheduler in kube-system
I0513 18:40:27.373412  107570 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:bootstrap-signer: (1.415435ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53256]
I0513 18:40:27.373792  107570 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0513 18:40:27.373828  107570 healthz.go:184] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0513 18:40:27.373974  107570 wrap.go:47] GET /healthz: (1.043112ms) 500
goroutine 30903 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc013b9d5e0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc013b9d5e0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc011bdb520, 0x1f4)
net/http.Error(0x7fb1c84ad738, 0xc0137ca988, 0xc013c6f180, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fb1c84ad738, 0xc0137ca988, 0xc013dbe500)
net/http.HandlerFunc.ServeHTTP(0xc00bf31380, 0x7fb1c84ad738, 0xc0137ca988, 0xc013dbe500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0128beb80, 0x7fb1c84ad738, 0xc0137ca988, 0xc013dbe500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00b29b2d0, 0x7fb1c84ad738, 0xc0137ca988, 0xc013dbe500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x4356850, 0xe, 0xc00d294ab0, 0xc00b29b2d0, 0x7fb1c84ad738, 0xc0137ca988, 0xc013dbe500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fb1c84ad738, 0xc0137ca988, 0xc013dbe500)
net/http.HandlerFunc.ServeHTTP(0xc00bcbcc80, 0x7fb1c84ad738, 0xc0137ca988, 0xc013dbe500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fb1c84ad738, 0xc0137ca988, 0xc013dbe500)
net/http.HandlerFunc.ServeHTTP(0xc00b85f680, 0x7fb1c84ad738, 0xc0137ca988, 0xc013dbe500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fb1c84ad738, 0xc0137ca988, 0xc013dbe500)
net/http.HandlerFunc.ServeHTTP(0xc00bcbccc0, 0x7fb1c84ad738, 0xc0137ca988, 0xc013dbe500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fb1c84ad738, 0xc0137ca988, 0xc013dbe400)
net/http.HandlerFunc.ServeHTTP(0xc00fc94780, 0x7fb1c84ad738, 0xc0137ca988, 0xc013dbe400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc013cb0b40, 0xc004ab2220, 0x7374040, 0xc0137ca988, 0xc013dbe400)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:53258]
I0513 18:40:27.375506  107570 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.324783ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53256]
I0513 18:40:27.413315  107570 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (21.04346ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53256]
I0513 18:40:27.413698  107570 storage_rbac.go:284] created rolebinding.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-system
I0513 18:40:27.415235  107570 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:cloud-provider: (1.259611ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53256]
I0513 18:40:27.417191  107570 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.459327ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53256]
I0513 18:40:27.434160  107570 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.104638ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53256]
I0513 18:40:27.434384  107570 storage_rbac.go:284] created rolebinding.rbac.authorization.k8s.io/system:controller:cloud-provider in kube-system
I0513 18:40:27.444877  107570 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0513 18:40:27.444914  107570 healthz.go:184] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0513 18:40:27.445081  107570 wrap.go:47] GET /healthz: (1.131981ms) 500
goroutine 30930 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc013305dc0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc013305dc0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc011ce7c20, 0x1f4)
net/http.Error(0x7fb1c84ad738, 0xc0047994a8, 0xc010a23b80, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fb1c84ad738, 0xc0047994a8, 0xc013e36800)
net/http.HandlerFunc.ServeHTTP(0xc00bf31380, 0x7fb1c84ad738, 0xc0047994a8, 0xc013e36800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0128beb80, 0x7fb1c84ad738, 0xc0047994a8, 0xc013e36800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00b29b2d0, 0x7fb1c84ad738, 0xc0047994a8, 0xc013e36800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x4356850, 0xe, 0xc00d294ab0, 0xc00b29b2d0, 0x7fb1c84ad738, 0xc0047994a8, 0xc013e36800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fb1c84ad738, 0xc0047994a8, 0xc013e36800)
net/http.HandlerFunc.ServeHTTP(0xc00bcbcc80, 0x7fb1c84ad738, 0xc0047994a8, 0xc013e36800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fb1c84ad738, 0xc0047994a8, 0xc013e36800)
net/http.HandlerFunc.ServeHTTP(0xc00b85f680, 0x7fb1c84ad738, 0xc0047994a8, 0xc013e36800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fb1c84ad738, 0xc0047994a8, 0xc013e36800)
net/http.HandlerFunc.ServeHTTP(0xc00bcbccc0, 0x7fb1c84ad738, 0xc0047994a8, 0xc013e36800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fb1c84ad738, 0xc0047994a8, 0xc013e36700)
net/http.HandlerFunc.ServeHTTP(0xc00fc94780, 0x7fb1c84ad738, 0xc0047994a8, 0xc013e36700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc013343b00, 0xc004ab2220, 0x7374040, 0xc0047994a8, 0xc013e36700)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53256]
I0513 18:40:27.453373  107570 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:token-cleaner: (1.405911ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53256]
I0513 18:40:27.455004  107570 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.197453ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53256]
I0513 18:40:27.475415  107570 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0513 18:40:27.475454  107570 healthz.go:184] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0513 18:40:27.475643  107570 wrap.go:47] GET /healthz: (2.742028ms) 500
goroutine 30920 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc013d2fa40, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc013d2fa40, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc011d8dd20, 0x1f4)
net/http.Error(0x7fb1c84ad738, 0xc0079bf5f0, 0xc0139eb900, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fb1c84ad738, 0xc0079bf5f0, 0xc013d8be00)
net/http.HandlerFunc.ServeHTTP(0xc00bf31380, 0x7fb1c84ad738, 0xc0079bf5f0, 0xc013d8be00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0128beb80, 0x7fb1c84ad738, 0xc0079bf5f0, 0xc013d8be00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00b29b2d0, 0x7fb1c84ad738, 0xc0079bf5f0, 0xc013d8be00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x4356850, 0xe, 0xc00d294ab0, 0xc00b29b2d0, 0x7fb1c84ad738, 0xc0079bf5f0, 0xc013d8be00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fb1c84ad738, 0xc0079bf5f0, 0xc013d8be00)
net/http.HandlerFunc.ServeHTTP(0xc00bcbcc80, 0x7fb1c84ad738, 0xc0079bf5f0, 0xc013d8be00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fb1c84ad738, 0xc0079bf5f0, 0xc013d8be00)
net/http.HandlerFunc.ServeHTTP(0xc00b85f680, 0x7fb1c84ad738, 0xc0079bf5f0, 0xc013d8be00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fb1c84ad738, 0xc0079bf5f0, 0xc013d8be00)
net/http.HandlerFunc.ServeHTTP(0xc00bcbccc0, 0x7fb1c84ad738, 0xc0079bf5f0, 0xc013d8be00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fb1c84ad738, 0xc0079bf5f0, 0xc013d8bd00)
net/http.HandlerFunc.ServeHTTP(0xc00fc94780, 0x7fb1c84ad738, 0xc0079bf5f0, 0xc013d8bd00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc013d81560, 0xc004ab2220, 0x7374040, 0xc0079bf5f0, 0xc013d8bd00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:53258]
I0513 18:40:27.476156  107570 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (4.083346ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53256]
I0513 18:40:27.476446  107570 storage_rbac.go:284] created rolebinding.rbac.authorization.k8s.io/system:controller:token-cleaner in kube-system
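The storage_rbac lines above record the apiserver's rbac/bootstrap-roles post-start hook reconciling default policy: each cycle is a GET that 404s for a missing role or rolebinding, followed by a POST that creates it (201). A minimal sketch of the same GET-then-POST pattern with client-go's RbacV1 client; the role name, rules, and the insecure test address are assumptions, and a recent client-go whose calls take a context is assumed (the test itself uses its own loopback client, not this code).

package main

import (
	"context"
	"fmt"

	rbacv1 "k8s.io/api/rbac/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	// Assumed insecure test apiserver address, for illustration only.
	cfg := &rest.Config{Host: "http://127.0.0.1:8080"}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Illustrative role; the real bootstrap policy ships with the apiserver.
	role := &rbacv1.Role{
		ObjectMeta: metav1.ObjectMeta{Name: "example-reader", Namespace: "kube-system"},
		Rules: []rbacv1.PolicyRule{{
			APIGroups: []string{""},
			Resources: []string{"configmaps"},
			Verbs:     []string{"get", "list"},
		}},
	}

	// GET then POST, mirroring the 404/201 pairs logged above.
	if _, err := client.RbacV1().Roles(role.Namespace).Get(context.TODO(), role.Name, metav1.GetOptions{}); err != nil {
		if _, err := client.RbacV1().Roles(role.Namespace).Create(context.TODO(), role, metav1.CreateOptions{}); err != nil {
			panic(err)
		}
		fmt.Println("created role", role.Name)
	}
}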
I0513 18:40:27.545199  107570 wrap.go:47] GET /healthz: (1.176021ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53256]
I0513 18:40:27.546951  107570 wrap.go:47] GET /api/v1/namespaces/default: (1.304234ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53256]
I0513 18:40:27.549150  107570 wrap.go:47] POST /api/v1/namespaces: (1.752974ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53256]
I0513 18:40:27.550691  107570 wrap.go:47] GET /api/v1/namespaces/default/services/kubernetes: (1.064411ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53256]
I0513 18:40:27.554797  107570 wrap.go:47] POST /api/v1/namespaces/default/services: (3.6493ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53256]
I0513 18:40:27.556223  107570 wrap.go:47] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.009119ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53256]
I0513 18:40:27.558325  107570 wrap.go:47] POST /api/v1/namespaces/default/endpoints: (1.699655ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53256]
I0513 18:40:27.574068  107570 wrap.go:47] GET /healthz: (1.085697ms) 200 [Go-http-client/1.1 127.0.0.1:53256]
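Until this point every GET /healthz returned 500 because the rbac/bootstrap-roles post-start hook had not finished; the two 200 responses above mark the apiserver becoming ready. A minimal sketch, assuming a plain HTTP /healthz endpoint and a hypothetical helper name (waitForHealthz) with an assumed base URL and timeout, of how a harness might poll for that transition:

package main

import (
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls /healthz until it returns 200 OK or the deadline
// expires. Hypothetical helper; not code from the test in this log.
func waitForHealthz(baseURL string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := http.Get(baseURL + "/healthz")
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				// All post-start hooks (e.g. rbac/bootstrap-roles) have finished.
				return nil
			}
		}
		time.Sleep(100 * time.Millisecond) // retry interval is an arbitrary choice
	}
	return fmt.Errorf("apiserver not healthy within %v", timeout)
}

func main() {
	if err := waitForHealthz("http://127.0.0.1:8080", 30*time.Second); err != nil {
		panic(err)
	}
	fmt.Println("healthz ok")
}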
W0513 18:40:27.574864  107570 mutation_detector.go:48] Mutation detector is enabled, this will result in memory leakage.
W0513 18:40:27.574934  107570 mutation_detector.go:48] Mutation detector is enabled, this will result in memory leakage.
W0513 18:40:27.574957  107570 mutation_detector.go:48] Mutation detector is enabled, this will result in memory leakage.
W0513 18:40:27.574972  107570 mutation_detector.go:48] Mutation detector is enabled, this will result in memory leakage.
W0513 18:40:27.574987  107570 mutation_detector.go:48] Mutation detector is enabled, this will result in memory leakage.
W0513 18:40:27.575001  107570 mutation_detector.go:48] Mutation detector is enabled, this will result in memory leakage.
W0513 18:40:27.575016  107570 mutation_detector.go:48] Mutation detector is enabled, this will result in memory leakage.
W0513 18:40:27.575037  107570 mutation_detector.go:48] Mutation detector is enabled, this will result in memory leakage.
W0513 18:40:27.575052  107570 mutation_detector.go:48] Mutation detector is enabled, this will result in memory leakage.
W0513 18:40:27.575095  107570 mutation_detector.go:48] Mutation detector is enabled, this will result in memory leakage.
I0513 18:40:27.575190  107570 factory.go:337] Creating scheduler from algorithm provider 'DefaultProvider'
I0513 18:40:27.575207  107570 factory.go:418] Creating scheduler with fit predicates 'map[CheckNodeCondition:{} CheckNodeDiskPressure:{} CheckNodeMemoryPressure:{} CheckNodePIDPressure:{} CheckVolumeBinding:{} GeneralPredicates:{} MatchInterPodAffinity:{} MaxAzureDiskVolumeCount:{} MaxCSIVolumeCountPred:{} MaxEBSVolumeCount:{} MaxGCEPDVolumeCount:{} NoDiskConflict:{} NoVolumeZoneConflict:{} PodToleratesNodeTaints:{}]' and priority functions 'map[BalancedResourceAllocation:{} ImageLocalityPriority:{} InterPodAffinityPriority:{} LeastRequestedPriority:{} NodeAffinityPriority:{} NodePreferAvoidPodsPriority:{} SelectorSpreadPriority:{} TaintTolerationPriority:{}]'
I0513 18:40:27.575397  107570 controller_utils.go:1029] Waiting for caches to sync for scheduler controller
I0513 18:40:27.575618  107570 reflector.go:122] Starting reflector *v1.Pod (12h0m0s) from k8s.io/kubernetes/test/integration/scheduler/util.go:209
I0513 18:40:27.575635  107570 reflector.go:160] Listing and watching *v1.Pod from k8s.io/kubernetes/test/integration/scheduler/util.go:209
I0513 18:40:27.576723  107570 wrap.go:47] GET /api/v1/pods?fieldSelector=status.phase%21%3DFailed%2Cstatus.phase%21%3DSucceeded&limit=500&resourceVersion=0: (672.757µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53256]
I0513 18:40:27.577497  107570 get.go:250] Starting watch for /api/v1/pods, rv=24290 labels= fields=status.phase!=Failed,status.phase!=Succeeded timeout=6m40s
I0513 18:40:27.675583  107570 shared_informer.go:175] caches populated
I0513 18:40:27.675640  107570 controller_utils.go:1036] Caches are synced for scheduler controller
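The reflector started from util.go:209 above lists and then watches pods with the field selector status.phase!=Failed,status.phase!=Succeeded, i.e. only non-terminal pods, before the scheduler cache reports synced. A minimal sketch of the equivalent one-off List call with client-go; the rest.Config host is an assumption and a recent client-go whose List takes a context is assumed.

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	// Assumed insecure test apiserver address, for illustration only.
	cfg := &rest.Config{Host: "http://127.0.0.1:8080"}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Same selector as the scheduler's pod reflector: skip terminal pods.
	opts := metav1.ListOptions{
		FieldSelector: "status.phase!=Failed,status.phase!=Succeeded",
	}
	pods, err := client.CoreV1().Pods(metav1.NamespaceAll).List(context.TODO(), opts)
	if err != nil {
		panic(err)
	}
	fmt.Printf("non-terminal pods: %d (resourceVersion=%s)\n", len(pods.Items), pods.ResourceVersion)
}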
I0513 18:40:27.676062  107570 reflector.go:122] Starting reflector *v1.StorageClass (1s) from k8s.io/client-go/informers/factory.go:133
I0513 18:40:27.676094  107570 reflector.go:160] Listing and watching *v1.StorageClass from k8s.io/client-go/informers/factory.go:133
I0513 18:40:27.676503  107570 reflector.go:122] Starting reflector *v1.PersistentVolume (1s) from k8s.io/client-go/informers/factory.go:133
I0513 18:40:27.676522  107570 reflector.go:160] Listing and watching *v1.PersistentVolume from k8s.io/client-go/informers/factory.go:133
I0513 18:40:27.677005  107570 reflector.go:122] Starting reflector *v1.ReplicationController (1s) from k8s.io/client-go/informers/factory.go:133
I0513 18:40:27.677032  107570 reflector.go:160] Listing and watching *v1.ReplicationController from k8s.io/client-go/informers/factory.go:133
I0513 18:40:27.677083  107570 wrap.go:47] GET /apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0: (673.544µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53258]
I0513 18:40:27.677426  107570 reflector.go:122] Starting reflector *v1.ReplicaSet (1s) from k8s.io/client-go/informers/factory.go:133
I0513 18:40:27.677453  107570 reflector.go:160] Listing and watching *v1.ReplicaSet from k8s.io/client-go/informers/factory.go:133
I0513 18:40:27.677995  107570 get.go:250] Starting watch for /apis/storage.k8s.io/v1/storageclasses, rv=24291 labels= fields= timeout=8m48s
I0513 18:40:27.678132  107570 reflector.go:122] Starting reflector *v1.PersistentVolumeClaim (1s) from k8s.io/client-go/informers/factory.go:133
I0513 18:40:27.678147  107570 reflector.go:160] Listing and watching *v1.PersistentVolumeClaim from k8s.io/client-go/informers/factory.go:133
I0513 18:40:27.678166  107570 wrap.go:47] GET /api/v1/replicationcontrollers?limit=500&resourceVersion=0: (666.384µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53316]
I0513 18:40:27.678182  107570 wrap.go:47] GET /api/v1/persistentvolumes?limit=500&resourceVersion=0: (475.728µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53258]
I0513 18:40:27.678410  107570 reflector.go:122] Starting reflector *v1.Service (1s) from k8s.io/client-go/informers/factory.go:133
I0513 18:40:27.678423  107570 reflector.go:160] Listing and watching *v1.Service from k8s.io/client-go/informers/factory.go:133
I0513 18:40:27.678431  107570 wrap.go:47] GET /apis/apps/v1/replicasets?limit=500&resourceVersion=0: (457.241µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53318]
I0513 18:40:27.678765  107570 reflector.go:122] Starting reflector *v1.StatefulSet (1s) from k8s.io/client-go/informers/factory.go:133
I0513 18:40:27.678777  107570 reflector.go:160] Listing and watching *v1.StatefulSet from k8s.io/client-go/informers/factory.go:133
I0513 18:40:27.678850  107570 get.go:250] Starting watch for /api/v1/persistentvolumes, rv=24290 labels= fields= timeout=7m32s
I0513 18:40:27.679157  107570 reflector.go:122] Starting reflector *v1beta1.PodDisruptionBudget (1s) from k8s.io/client-go/informers/factory.go:133
I0513 18:40:27.679170  107570 reflector.go:160] Listing and watching *v1beta1.PodDisruptionBudget from k8s.io/client-go/informers/factory.go:133
I0513 18:40:27.679429  107570 wrap.go:47] GET /api/v1/services?limit=500&resourceVersion=0: (652.408µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53320]
I0513 18:40:27.679693  107570 get.go:250] Starting watch for /apis/apps/v1/replicasets, rv=24291 labels= fields= timeout=7m47s
I0513 18:40:27.680403  107570 wrap.go:47] GET /apis/policy/v1beta1/poddisruptionbudgets?limit=500&resourceVersion=0: (347.486µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53328]
I0513 18:40:27.680746  107570 get.go:250] Starting watch for /api/v1/replicationcontrollers, rv=24290 labels= fields= timeout=8m29s
I0513 18:40:27.681063  107570 wrap.go:47] GET /apis/apps/v1/statefulsets?limit=500&resourceVersion=0: (1.900173ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53324]
I0513 18:40:27.683063  107570 get.go:250] Starting watch for /api/v1/services, rv=24498 labels= fields= timeout=5m43s
I0513 18:40:27.683427  107570 get.go:250] Starting watch for /apis/policy/v1beta1/poddisruptionbudgets, rv=24290 labels= fields= timeout=7m50s
I0513 18:40:27.683657  107570 get.go:250] Starting watch for /apis/apps/v1/statefulsets, rv=24291 labels= fields= timeout=8m54s
I0513 18:40:27.683662  107570 wrap.go:47] GET /api/v1/persistentvolumeclaims?limit=500&resourceVersion=0: (333.467µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53324]
I0513 18:40:27.683975  107570 reflector.go:122] Starting reflector *v1.Node (1s) from k8s.io/client-go/informers/factory.go:133
I0513 18:40:27.683997  107570 reflector.go:160] Listing and watching *v1.Node from k8s.io/client-go/informers/factory.go:133
I0513 18:40:27.684786  107570 get.go:250] Starting watch for /api/v1/persistentvolumeclaims, rv=24290 labels= fields= timeout=9m33s
I0513 18:40:27.684885  107570 wrap.go:47] GET /api/v1/nodes?limit=500&resourceVersion=0: (587.134µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53326]
I0513 18:40:27.685480  107570 get.go:250] Starting watch for /api/v1/nodes, rv=24290 labels= fields= timeout=6m34s
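The "Starting reflector" / "Listing and watching" lines and the watch starts above come from a shared informer factory being spun up for every resource the scheduler consumes, followed by a wait until each cache is populated. A minimal client-go sketch of the same pattern, assuming a reachable cluster and using an illustrative kubeconfig path and resync period (the integration test builds its clientset against a local test apiserver instead):

```go
package main

import (
	"fmt"
	"time"

	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/cache"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Illustrative client construction from the default kubeconfig location.
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)

	// One shared factory; each Informer() call registers a reflector that
	// lists and then watches its resource, as in the log lines above.
	factory := informers.NewSharedInformerFactory(client, time.Second)
	podsSynced := factory.Core().V1().Pods().Informer().HasSynced
	nodesSynced := factory.Core().V1().Nodes().Informer().HasSynced
	pvcsSynced := factory.Core().V1().PersistentVolumeClaims().Informer().HasSynced

	stopCh := make(chan struct{})
	defer close(stopCh)
	factory.Start(stopCh)

	// Block until every registered informer has populated its cache, the
	// equivalent of the "caches populated" / "Caches are synced" lines.
	if !cache.WaitForCacheSync(stopCh, podsSynced, nodesSynced, pvcsSynced) {
		panic("timed out waiting for caches to sync")
	}
	fmt.Println("caches synced")
}
```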
I0513 18:40:27.775942  107570 shared_informer.go:175] caches populated
I0513 18:40:27.876165  107570 shared_informer.go:175] caches populated
I0513 18:40:27.977155  107570 shared_informer.go:175] caches populated
I0513 18:40:28.077358  107570 shared_informer.go:175] caches populated
I0513 18:40:28.177629  107570 shared_informer.go:175] caches populated
I0513 18:40:28.277919  107570 shared_informer.go:175] caches populated
I0513 18:40:28.378122  107570 shared_informer.go:175] caches populated
I0513 18:40:28.482005  107570 shared_informer.go:175] caches populated
I0513 18:40:28.582216  107570 shared_informer.go:175] caches populated
I0513 18:40:28.677754  107570 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0513 18:40:28.678763  107570 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0513 18:40:28.681609  107570 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0513 18:40:28.682418  107570 shared_informer.go:175] caches populated
I0513 18:40:28.684272  107570 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0513 18:40:28.685197  107570 wrap.go:47] POST /api/v1/nodes: (2.078227ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53956]
I0513 18:40:28.685338  107570 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0513 18:40:28.689233  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods: (3.089605ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53956]
I0513 18:40:28.689588  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/rpod-0
I0513 18:40:28.689610  107570 scheduler.go:452] Attempting to schedule pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/rpod-0
I0513 18:40:28.689780  107570 scheduler_binder.go:256] AssumePodVolumes for pod "preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/rpod-0", node "node1"
I0513 18:40:28.689802  107570 scheduler_binder.go:266] AssumePodVolumes for pod "preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/rpod-0", node "node1": all PVCs bound and nothing to do
I0513 18:40:28.689876  107570 factory.go:711] Attempting to bind rpod-0 to node1
I0513 18:40:28.691441  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods: (1.750779ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53956]
I0513 18:40:28.691900  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/rpod-1
I0513 18:40:28.691918  107570 scheduler.go:452] Attempting to schedule pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/rpod-1
I0513 18:40:28.692028  107570 scheduler_binder.go:256] AssumePodVolumes for pod "preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/rpod-1", node "node1"
I0513 18:40:28.692047  107570 scheduler_binder.go:266] AssumePodVolumes for pod "preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/rpod-1", node "node1": all PVCs bound and nothing to do
I0513 18:40:28.692096  107570 factory.go:711] Attempting to bind rpod-1 to node1
I0513 18:40:28.692156  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/rpod-0/binding: (1.817826ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53958]
I0513 18:40:28.692354  107570 scheduler.go:570] pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/rpod-0 is bound successfully on node node1, 1 nodes evaluated, 1 nodes were found feasible
I0513 18:40:28.694009  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/rpod-1/binding: (1.641974ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53956]
I0513 18:40:28.694161  107570 scheduler.go:570] pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/rpod-1 is bound successfully on node node1, 1 nodes evaluated, 1 nodes were found feasible
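With rpod-0 and rpod-1 bound, node1 is full; the test is about to create a higher-priority preemptor pod plus a batch of additional pods (the preemption_test.go lines that follow). A hedged sketch of how such a preemptor can be expressed with the core v1 types; the priority value, image, namespace, and resource figures are illustrative assumptions, not the values the test uses:

```go
package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	highPriority := int32(1000) // illustrative; must exceed the victims' priority

	preemptor := &v1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name:      "preemptor-pod",
			Namespace: "preemption-race-example", // illustrative namespace
		},
		Spec: v1.PodSpec{
			Priority: &highPriority,
			Containers: []v1.Container{{
				Name:  "pause",
				Image: "k8s.gcr.io/pause:3.1",
				Resources: v1.ResourceRequirements{
					Requests: v1.ResourceList{
						// Large enough that the only way to fit is to evict
						// lower-priority pods already on the node.
						v1.ResourceCPU:    resource.MustParse("400m"),
						v1.ResourceMemory: resource.MustParse("400Mi"),
					},
				},
			}},
		},
	}
	fmt.Printf("%s requests cpu=%s memory=%s priority=%d\n",
		preemptor.Name,
		preemptor.Spec.Containers[0].Resources.Requests.Cpu(),
		preemptor.Spec.Containers[0].Resources.Requests.Memory(),
		*preemptor.Spec.Priority)
}
```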
I0513 18:40:28.694287  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events: (1.486877ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53958]
I0513 18:40:28.702085  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events: (2.04834ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53958]
I0513 18:40:28.794358  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/rpod-0: (1.939983ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53958]
I0513 18:40:28.897344  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/rpod-1: (1.867377ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53958]
I0513 18:40:28.897763  107570 preemption_test.go:561] Creating the preemptor pod...
I0513 18:40:28.900248  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods: (2.108345ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53958]
I0513 18:40:28.900497  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/preemptor-pod
I0513 18:40:28.900526  107570 scheduler.go:452] Attempting to schedule pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/preemptor-pod
I0513 18:40:28.900546  107570 preemption_test.go:567] Creating additional pods...
I0513 18:40:28.900656  107570 factory.go:649] Unable to schedule preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 18:40:28.900712  107570 factory.go:720] Updating pod condition for preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0513 18:40:28.903294  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events: (1.959854ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53966]
I0513 18:40:28.904781  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/preemptor-pod: (2.345663ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53964]
I0513 18:40:28.904888  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods: (4.061235ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53958]
I0513 18:40:28.905155  107570 wrap.go:47] PUT /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/preemptor-pod/status: (4.006388ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53956]
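Each "Updating pod condition ... (PodScheduled==False, Reason=Unschedulable)" entry corresponds to a status write like the PUT above, recording why the pod could not be placed. A small sketch of constructing that condition with the core v1 types; only the condition's type, status, and reason are taken from the log, and the message text here is an assumption:

```go
package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// The condition recorded when no node fits the pod; the Message mirrors
	// the "no fit" log line but is illustrative here.
	cond := v1.PodCondition{
		Type:               v1.PodScheduled,
		Status:             v1.ConditionFalse,
		Reason:             "Unschedulable",
		Message:            "0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.",
		LastTransitionTime: metav1.Now(),
	}
	fmt.Printf("%s=%s (%s)\n", cond.Type, cond.Status, cond.Reason)
}
```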
I0513 18:40:28.907135  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/preemptor-pod: (1.541812ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53956]
I0513 18:40:28.907153  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods: (1.762601ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53964]
I0513 18:40:28.907469  107570 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
E0513 18:40:28.907584  107570 utils.go:79] pod.Status.StartTime is nil for pod rpod-1. Should not reach here.
E0513 18:40:28.907598  107570 utils.go:79] pod.Status.StartTime is nil for pod rpod-0. Should not reach here.
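The two E-level lines come from a test utility that reads pod.Status.StartTime on the already-running pods while evaluating them as preemption victims; with no kubelet in this integration test, StartTime is never populated. Since the field is a *metav1.Time, consumers have to guard against nil; a minimal sketch of such a guard, where the fallback to the creation timestamp is an assumption rather than what utils.go actually does:

```go
package main

import (
	"fmt"
	"time"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// podStartTime returns the pod's reported start time, falling back to the
// creation timestamp when the kubelet has not set Status.StartTime, as is the
// case for rpod-0/rpod-1 here, which are bound but never actually run.
func podStartTime(pod *v1.Pod) time.Time {
	if pod.Status.StartTime != nil {
		return pod.Status.StartTime.Time
	}
	return pod.CreationTimestamp.Time
}

func main() {
	pod := &v1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name:              "rpod-0",
			CreationTimestamp: metav1.Now(),
		},
		// Status.StartTime deliberately left nil.
	}
	fmt.Println(podStartTime(pod))
}
```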
I0513 18:40:28.909383  107570 wrap.go:47] PUT /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/preemptor-pod/status: (1.457141ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53966]
I0513 18:40:28.909403  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods: (1.809905ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53964]
I0513 18:40:28.911187  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods: (1.379489ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53966]
I0513 18:40:28.914632  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods: (3.046138ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53966]
I0513 18:40:28.914699  107570 wrap.go:47] DELETE /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/rpod-0: (4.88164ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53964]
I0513 18:40:28.915022  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-0
I0513 18:40:28.915041  107570 scheduler.go:452] Attempting to schedule pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-0
I0513 18:40:28.915183  107570 factory.go:649] Unable to schedule preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-0: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 18:40:28.915217  107570 factory.go:720] Updating pod condition for preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-0 to (PodScheduled==False, Reason=Unschedulable)
I0513 18:40:28.917376  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods: (2.291848ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53964]
I0513 18:40:28.917839  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-0: (971.747µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53970]
I0513 18:40:28.917883  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events: (2.666272ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53966]
I0513 18:40:28.918435  107570 wrap.go:47] PUT /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-0/status: (1.6351ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53968]
I0513 18:40:28.919287  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods: (1.329874ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53964]
I0513 18:40:28.921040  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events: (2.671334ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53966]
I0513 18:40:28.921558  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-0: (2.69183ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53968]
I0513 18:40:28.921885  107570 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 18:40:28.922062  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-1
I0513 18:40:28.922078  107570 scheduler.go:452] Attempting to schedule pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-1
I0513 18:40:28.922197  107570 factory.go:649] Unable to schedule preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-1: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 18:40:28.922244  107570 factory.go:720] Updating pod condition for preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-1 to (PodScheduled==False, Reason=Unschedulable)
I0513 18:40:28.922781  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods: (1.586924ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53964]
I0513 18:40:28.923456  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-1: (958.563µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53966]
I0513 18:40:28.924578  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events: (1.514117ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53972]
I0513 18:40:28.925028  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods: (1.770271ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53964]
I0513 18:40:28.925382  107570 wrap.go:47] PUT /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-1/status: (2.424465ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53970]
I0513 18:40:28.927063  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods: (1.525044ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53972]
I0513 18:40:28.927101  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-1: (1.245077ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53970]
I0513 18:40:28.927377  107570 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 18:40:28.927548  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-2
I0513 18:40:28.927566  107570 scheduler.go:452] Attempting to schedule pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-2
I0513 18:40:28.927667  107570 factory.go:649] Unable to schedule preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-2: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 18:40:28.927709  107570 factory.go:720] Updating pod condition for preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-2 to (PodScheduled==False, Reason=Unschedulable)
I0513 18:40:28.928980  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods: (1.513038ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53972]
I0513 18:40:28.929490  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-2: (1.149502ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53974]
I0513 18:40:28.929632  107570 wrap.go:47] PUT /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-2/status: (1.645467ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53966]
E0513 18:40:28.929948  107570 factory.go:686] pod is already present in the activeQ
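The "pod is already present in the activeQ" errors that recur through the rest of the run are the race this test exercises: an update for a pod arrives while the pod is still sitting in the scheduler's active queue, and the duplicate add is rejected. A toy sketch of a queue that de-duplicates adds by key; it assumes nothing about the real scheduling_queue beyond that observable behaviour:

```go
package main

import (
	"errors"
	"fmt"
)

// activeQueue is a toy stand-in for the scheduler's active queue: adds are
// keyed by namespace/name, and a second add of the same key is rejected,
// which is what produces the "already present in the activeQ" errors above.
type activeQueue struct {
	order []string
	seen  map[string]bool
}

func newActiveQueue() *activeQueue {
	return &activeQueue{seen: map[string]bool{}}
}

func (q *activeQueue) Add(key string) error {
	if q.seen[key] {
		return errors.New("pod is already present in the activeQ")
	}
	q.seen[key] = true
	q.order = append(q.order, key)
	return nil
}

func (q *activeQueue) Pop() (string, bool) {
	if len(q.order) == 0 {
		return "", false
	}
	key := q.order[0]
	q.order = q.order[1:]
	delete(q.seen, key)
	return key, true
}

func main() {
	q := newActiveQueue()
	fmt.Println(q.Add("ns/ppod-2")) // <nil>
	fmt.Println(q.Add("ns/ppod-2")) // pod is already present in the activeQ
	fmt.Println(q.Pop())            // ns/ppod-2 true
}
```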
I0513 18:40:28.930361  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events: (1.566721ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53976]
I0513 18:40:28.930718  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods: (1.305732ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53972]
I0513 18:40:28.933316  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-2: (2.781103ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53966]
I0513 18:40:28.933605  107570 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 18:40:28.933855  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-3
I0513 18:40:28.933869  107570 scheduler.go:452] Attempting to schedule pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-3
I0513 18:40:28.933973  107570 factory.go:649] Unable to schedule preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-3: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 18:40:28.934007  107570 factory.go:720] Updating pod condition for preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-3 to (PodScheduled==False, Reason=Unschedulable)
I0513 18:40:28.934438  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods: (3.004162ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53976]
I0513 18:40:28.935315  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-3: (1.096591ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53974]
I0513 18:40:28.936126  107570 wrap.go:47] PUT /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-3/status: (1.921391ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53966]
I0513 18:40:28.938039  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events: (3.472446ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53978]
I0513 18:40:28.938915  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-3: (1.025467ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53966]
I0513 18:40:28.939315  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods: (3.059214ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53976]
I0513 18:40:28.939502  107570 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 18:40:28.939708  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-4
I0513 18:40:28.939724  107570 scheduler.go:452] Attempting to schedule pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-4
I0513 18:40:28.939891  107570 factory.go:649] Unable to schedule preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-4: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 18:40:28.941487  107570 factory.go:720] Updating pod condition for preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-4 to (PodScheduled==False, Reason=Unschedulable)
I0513 18:40:28.944421  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-4: (2.803661ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53978]
I0513 18:40:28.944422  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods: (2.745708ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53974]
I0513 18:40:28.944615  107570 wrap.go:47] PUT /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-4/status: (2.647151ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53980]
I0513 18:40:28.946240  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events: (3.709929ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53982]
I0513 18:40:28.947250  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods: (1.675052ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53978]
I0513 18:40:28.948497  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-4: (2.877916ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53974]
I0513 18:40:28.948703  107570 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 18:40:28.949849  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-5
I0513 18:40:28.949868  107570 scheduler.go:452] Attempting to schedule pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-5
I0513 18:40:28.949964  107570 factory.go:649] Unable to schedule preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-5: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 18:40:28.950026  107570 factory.go:720] Updating pod condition for preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-5 to (PodScheduled==False, Reason=Unschedulable)
I0513 18:40:28.950063  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods: (2.305543ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53978]
I0513 18:40:28.952735  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-5: (2.407789ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53974]
I0513 18:40:28.952921  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events: (2.452113ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53982]
I0513 18:40:28.953404  107570 wrap.go:47] PUT /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-5/status: (2.446716ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53978]
I0513 18:40:28.954998  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-5: (1.134744ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53982]
I0513 18:40:28.955349  107570 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 18:40:28.955376  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods: (2.15926ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53984]
I0513 18:40:28.955604  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-6
I0513 18:40:28.955627  107570 scheduler.go:452] Attempting to schedule pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-6
I0513 18:40:28.955778  107570 factory.go:649] Unable to schedule preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-6: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 18:40:28.955850  107570 factory.go:720] Updating pod condition for preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-6 to (PodScheduled==False, Reason=Unschedulable)
I0513 18:40:28.958546  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events: (2.130069ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53988]
I0513 18:40:28.958735  107570 wrap.go:47] PUT /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-6/status: (2.590697ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53974]
I0513 18:40:28.958889  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-6: (2.53265ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53986]
I0513 18:40:28.959019  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods: (3.100734ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53982]
E0513 18:40:28.959558  107570 factory.go:686] pod is already present in the activeQ
I0513 18:40:28.961280  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods: (1.641945ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53982]
I0513 18:40:28.961302  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-6: (1.603439ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53974]
I0513 18:40:28.961907  107570 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 18:40:28.964484  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-7
I0513 18:40:28.964538  107570 scheduler.go:452] Attempting to schedule pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-7
I0513 18:40:28.964697  107570 factory.go:649] Unable to schedule preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-7: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 18:40:28.964771  107570 factory.go:720] Updating pod condition for preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-7 to (PodScheduled==False, Reason=Unschedulable)
I0513 18:40:28.978969  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-7: (12.667331ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53990]
I0513 18:40:28.978973  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods: (16.973969ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53982]
I0513 18:40:28.979003  107570 wrap.go:47] PUT /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-7/status: (12.768319ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53992]
I0513 18:40:28.979145  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events: (13.832658ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53988]
I0513 18:40:28.981643  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods: (1.997822ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53982]
I0513 18:40:28.983017  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-7: (3.112068ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53990]
I0513 18:40:28.983250  107570 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 18:40:28.983452  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-8
I0513 18:40:28.983472  107570 scheduler.go:452] Attempting to schedule pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-8
I0513 18:40:28.983563  107570 factory.go:649] Unable to schedule preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-8: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 18:40:28.983609  107570 factory.go:720] Updating pod condition for preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-8 to (PodScheduled==False, Reason=Unschedulable)
I0513 18:40:28.983786  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods: (1.637851ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53982]
I0513 18:40:28.986719  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-8: (2.609074ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53982]
I0513 18:40:28.986913  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods: (2.569753ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53994]
I0513 18:40:28.986980  107570 wrap.go:47] PUT /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-8/status: (3.009417ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53990]
I0513 18:40:28.986633  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events: (2.247297ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53996]
I0513 18:40:28.988585  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-8: (1.032348ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53996]
I0513 18:40:28.988921  107570 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 18:40:28.989049  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods: (1.707028ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53994]
I0513 18:40:28.989157  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-9
I0513 18:40:28.989190  107570 scheduler.go:452] Attempting to schedule pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-9
I0513 18:40:28.989279  107570 factory.go:649] Unable to schedule preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-9: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 18:40:28.989319  107570 factory.go:720] Updating pod condition for preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-9 to (PodScheduled==False, Reason=Unschedulable)
I0513 18:40:28.990805  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-9: (1.026138ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54002]
I0513 18:40:28.991404  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events: (1.48738ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54004]
I0513 18:40:28.991954  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods: (2.48437ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53996]
I0513 18:40:28.993942  107570 wrap.go:47] PUT /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-9/status: (4.387188ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53982]
I0513 18:40:28.994041  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods: (1.553616ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54004]
I0513 18:40:28.995783  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-9: (1.257483ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54004]
I0513 18:40:28.996079  107570 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 18:40:28.996233  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-10
I0513 18:40:28.996273  107570 scheduler.go:452] Attempting to schedule pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-10
I0513 18:40:28.996446  107570 factory.go:649] Unable to schedule preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-10: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 18:40:28.996498  107570 factory.go:720] Updating pod condition for preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-10 to (PodScheduled==False, Reason=Unschedulable)
I0513 18:40:28.996889  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods: (2.348722ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54002]
I0513 18:40:28.999119  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-10: (2.123143ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54006]
I0513 18:40:29.000199  107570 wrap.go:47] PUT /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-10/status: (3.432932ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54004]
I0513 18:40:29.000261  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events: (3.009726ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54002]
I0513 18:40:29.000199  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods: (2.817033ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54008]
I0513 18:40:29.002490  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-10: (1.168919ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54004]
I0513 18:40:29.002778  107570 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 18:40:29.003096  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-2
I0513 18:40:29.003175  107570 scheduler.go:452] Attempting to schedule pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-2
I0513 18:40:29.003272  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods: (1.707306ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54006]
I0513 18:40:29.003370  107570 factory.go:649] Unable to schedule preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-2: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 18:40:29.004100  107570 factory.go:720] Updating pod condition for preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-2 to (PodScheduled==False, Reason=Unschedulable)
I0513 18:40:29.006525  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-2: (2.373067ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54006]
I0513 18:40:29.008473  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods: (4.132429ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54004]
I0513 18:40:29.008544  107570 wrap.go:47] PATCH /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events/ppod-2.159e52226cfd9853: (2.745435ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54012]
I0513 18:40:29.008870  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-2: (3.996599ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54010]
I0513 18:40:29.010174  107570 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 18:40:29.010467  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-11
I0513 18:40:29.010524  107570 scheduler.go:452] Attempting to schedule pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-11
I0513 18:40:29.010710  107570 factory.go:649] Unable to schedule preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-11: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 18:40:29.010728  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods: (1.600902ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54004]
I0513 18:40:29.010753  107570 factory.go:720] Updating pod condition for preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-11 to (PodScheduled==False, Reason=Unschedulable)
I0513 18:40:29.013619  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events: (1.332389ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54016]
I0513 18:40:29.013623  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-11: (2.546646ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54012]
I0513 18:40:29.014030  107570 wrap.go:47] PUT /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-11/status: (2.948198ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54006]
I0513 18:40:29.014360  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods: (2.155621ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54014]
I0513 18:40:29.015471  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-11: (1.064364ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54012]
I0513 18:40:29.015733  107570 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 18:40:29.015984  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-12
I0513 18:40:29.016005  107570 scheduler.go:452] Attempting to schedule pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-12
I0513 18:40:29.016137  107570 factory.go:649] Unable to schedule preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-12: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 18:40:29.016225  107570 factory.go:720] Updating pod condition for preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-12 to (PodScheduled==False, Reason=Unschedulable)
I0513 18:40:29.016362  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods: (1.466057ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54014]
I0513 18:40:29.018524  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-12: (1.536917ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54014]
I0513 18:40:29.018848  107570 wrap.go:47] PUT /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-12/status: (2.229452ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54012]
I0513 18:40:29.018968  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods: (2.056128ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54016]
I0513 18:40:29.019038  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events: (1.656143ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54018]
I0513 18:40:29.020399  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-12: (1.08002ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54016]
I0513 18:40:29.020629  107570 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 18:40:29.020956  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-13
I0513 18:40:29.020977  107570 scheduler.go:452] Attempting to schedule pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-13
I0513 18:40:29.021088  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods: (1.681558ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54012]
I0513 18:40:29.021098  107570 factory.go:649] Unable to schedule preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-13: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 18:40:29.021172  107570 factory.go:720] Updating pod condition for preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-13 to (PodScheduled==False, Reason=Unschedulable)
I0513 18:40:29.022623  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-13: (1.009277ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54012]
I0513 18:40:29.023001  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods: (1.269596ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54022]
I0513 18:40:29.023172  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events: (1.501284ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54020]
I0513 18:40:29.026037  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods: (2.432755ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54022]
I0513 18:40:29.026762  107570 wrap.go:47] PUT /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-13/status: (5.37646ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54016]
I0513 18:40:29.029600  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-13: (2.04842ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54016]
I0513 18:40:29.029877  107570 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 18:40:29.030069  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-14
I0513 18:40:29.030238  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods: (2.47894ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54022]
I0513 18:40:29.030500  107570 scheduler.go:452] Attempting to schedule pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-14
I0513 18:40:29.030679  107570 factory.go:649] Unable to schedule preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-14: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 18:40:29.030726  107570 factory.go:720] Updating pod condition for preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-14 to (PodScheduled==False, Reason=Unschedulable)
I0513 18:40:29.032259  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods: (1.575445ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54016]
I0513 18:40:29.032303  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-14: (983.21µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54024]
I0513 18:40:29.033353  107570 wrap.go:47] PUT /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-14/status: (2.08239ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54020]
I0513 18:40:29.034794  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events: (1.395827ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54016]
I0513 18:40:29.035050  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-14: (1.329955ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54020]
I0513 18:40:29.035067  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods: (1.906497ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54024]
I0513 18:40:29.035342  107570 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 18:40:29.035531  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-15
I0513 18:40:29.035544  107570 scheduler.go:452] Attempting to schedule pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-15
I0513 18:40:29.035669  107570 factory.go:649] Unable to schedule preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-15: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 18:40:29.035746  107570 factory.go:720] Updating pod condition for preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-15 to (PodScheduled==False, Reason=Unschedulable)
I0513 18:40:29.037078  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods: (1.515924ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54020]
I0513 18:40:29.038233  107570 wrap.go:47] PUT /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-15/status: (2.219197ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54026]
I0513 18:40:29.039723  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events: (3.594325ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54016]
I0513 18:40:29.042338  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-15: (2.345211ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54028]
I0513 18:40:29.042539  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-15: (2.296924ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54026]
I0513 18:40:29.042657  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods: (1.851076ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54020]
E0513 18:40:29.043038  107570 factory.go:686] pod is already present in the activeQ
I0513 18:40:29.043099  107570 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 18:40:29.043257  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-16
I0513 18:40:29.043278  107570 scheduler.go:452] Attempting to schedule pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-16
I0513 18:40:29.043499  107570 factory.go:649] Unable to schedule preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-16: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 18:40:29.043557  107570 factory.go:720] Updating pod condition for preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-16 to (PodScheduled==False, Reason=Unschedulable)
I0513 18:40:29.045780  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods: (2.416987ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54026]
I0513 18:40:29.047208  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-16: (1.443892ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54030]
I0513 18:40:29.049230  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events: (3.492719ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54032]
I0513 18:40:29.049720  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods: (3.504001ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54026]
I0513 18:40:29.050092  107570 wrap.go:47] PUT /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-16/status: (6.20408ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54028]
I0513 18:40:29.051574  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-16: (1.08795ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54028]
I0513 18:40:29.051600  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods: (1.395725ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54032]
I0513 18:40:29.051936  107570 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 18:40:29.052199  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-17
I0513 18:40:29.052217  107570 scheduler.go:452] Attempting to schedule pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-17
I0513 18:40:29.052316  107570 factory.go:649] Unable to schedule preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-17: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 18:40:29.052442  107570 factory.go:720] Updating pod condition for preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-17 to (PodScheduled==False, Reason=Unschedulable)
I0513 18:40:29.053846  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-17: (1.086445ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54030]
I0513 18:40:29.054016  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods: (1.820945ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54028]
I0513 18:40:29.055903  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events: (1.72397ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54030]
I0513 18:40:29.055952  107570 wrap.go:47] PUT /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-17/status: (2.984852ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54036]
I0513 18:40:29.057093  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods: (2.551353ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54028]
I0513 18:40:29.057637  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-17: (989.884µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54036]
I0513 18:40:29.057905  107570 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 18:40:29.058059  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-18
I0513 18:40:29.058079  107570 scheduler.go:452] Attempting to schedule pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-18
I0513 18:40:29.058224  107570 factory.go:649] Unable to schedule preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-18: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 18:40:29.058274  107570 factory.go:720] Updating pod condition for preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-18 to (PodScheduled==False, Reason=Unschedulable)
I0513 18:40:29.063022  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-18: (3.97081ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54030]
I0513 18:40:29.063020  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods: (5.254487ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54028]
I0513 18:40:29.064154  107570 wrap.go:47] PUT /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-18/status: (5.106574ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54036]
I0513 18:40:29.064274  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events: (5.09992ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54040]
I0513 18:40:29.065515  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods: (1.692158ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54028]
I0513 18:40:29.065752  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-18: (1.120856ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54036]
I0513 18:40:29.066702  107570 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 18:40:29.066946  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-6
I0513 18:40:29.066965  107570 scheduler.go:452] Attempting to schedule pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-6
I0513 18:40:29.067060  107570 factory.go:649] Unable to schedule preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-6: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 18:40:29.067106  107570 factory.go:720] Updating pod condition for preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-6 to (PodScheduled==False, Reason=Unschedulable)
I0513 18:40:29.068526  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-6: (1.124012ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54030]
I0513 18:40:29.068843  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-6: (1.577171ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54028]
I0513 18:40:29.069121  107570 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 18:40:29.069291  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-19
I0513 18:40:29.069322  107570 scheduler.go:452] Attempting to schedule pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-19
I0513 18:40:29.069401  107570 factory.go:649] Unable to schedule preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-19: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 18:40:29.069451  107570 factory.go:720] Updating pod condition for preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-19 to (PodScheduled==False, Reason=Unschedulable)
I0513 18:40:29.072057  107570 wrap.go:47] PATCH /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events/ppod-6.159e52226eaa933c: (2.991565ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54042]
I0513 18:40:29.072867  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-19: (2.940737ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54030]
I0513 18:40:29.073025  107570 wrap.go:47] PUT /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-19/status: (3.160982ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54028]
E0513 18:40:29.073183  107570 factory.go:686] pod is already present in the activeQ
I0513 18:40:29.074084  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events: (1.623426ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54042]
I0513 18:40:29.075988  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-19: (2.487985ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54028]
I0513 18:40:29.076301  107570 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 18:40:29.076494  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-20
I0513 18:40:29.076512  107570 scheduler.go:452] Attempting to schedule pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-20
I0513 18:40:29.076616  107570 factory.go:649] Unable to schedule preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-20: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 18:40:29.076672  107570 factory.go:720] Updating pod condition for preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-20 to (PodScheduled==False, Reason=Unschedulable)
I0513 18:40:29.077969  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-20: (1.004404ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54030]
I0513 18:40:29.079035  107570 wrap.go:47] PUT /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-20/status: (2.070324ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54042]
I0513 18:40:29.079292  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events: (1.954153ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54044]
I0513 18:40:29.080664  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-20: (1.179806ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54042]
I0513 18:40:29.080985  107570 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 18:40:29.081146  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-21
I0513 18:40:29.081177  107570 scheduler.go:452] Attempting to schedule pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-21
I0513 18:40:29.081287  107570 factory.go:649] Unable to schedule preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-21: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 18:40:29.081335  107570 factory.go:720] Updating pod condition for preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-21 to (PodScheduled==False, Reason=Unschedulable)
I0513 18:40:29.083118  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-21: (1.463552ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54030]
I0513 18:40:29.083448  107570 wrap.go:47] PUT /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-21/status: (1.808316ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54044]
I0513 18:40:29.083709  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events: (1.697849ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54046]
I0513 18:40:29.085033  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-21: (1.021902ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54044]
I0513 18:40:29.085283  107570 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 18:40:29.085546  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-22
I0513 18:40:29.085558  107570 scheduler.go:452] Attempting to schedule pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-22
I0513 18:40:29.085709  107570 factory.go:649] Unable to schedule preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-22: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 18:40:29.085748  107570 factory.go:720] Updating pod condition for preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-22 to (PodScheduled==False, Reason=Unschedulable)
I0513 18:40:29.090579  107570 wrap.go:47] PUT /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-22/status: (4.574164ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54044]
I0513 18:40:29.091014  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-22: (4.923431ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54030]
I0513 18:40:29.091136  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events: (3.272058ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54048]
E0513 18:40:29.091308  107570 factory.go:686] pod is already present in the activeQ
I0513 18:40:29.093117  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-22: (1.691646ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54030]
I0513 18:40:29.093365  107570 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 18:40:29.093547  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-23
I0513 18:40:29.093562  107570 scheduler.go:452] Attempting to schedule pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-23
I0513 18:40:29.093667  107570 factory.go:649] Unable to schedule preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-23: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 18:40:29.093717  107570 factory.go:720] Updating pod condition for preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-23 to (PodScheduled==False, Reason=Unschedulable)
I0513 18:40:29.096140  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-23: (2.113802ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54044]
I0513 18:40:29.096633  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events: (2.222783ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54050]
I0513 18:40:29.096812  107570 wrap.go:47] PUT /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-23/status: (2.747385ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54030]
I0513 18:40:29.098330  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-23: (1.0847ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54050]
I0513 18:40:29.098580  107570 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 18:40:29.098746  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-24
I0513 18:40:29.098768  107570 scheduler.go:452] Attempting to schedule pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-24
I0513 18:40:29.098937  107570 factory.go:649] Unable to schedule preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-24: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 18:40:29.098983  107570 factory.go:720] Updating pod condition for preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-24 to (PodScheduled==False, Reason=Unschedulable)
I0513 18:40:29.102107  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-24: (2.853375ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54050]
I0513 18:40:29.102586  107570 wrap.go:47] PUT /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-24/status: (3.239252ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54044]
I0513 18:40:29.102720  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events: (1.300427ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54052]
I0513 18:40:29.105053  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-24: (1.955658ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54044]
I0513 18:40:29.105339  107570 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 18:40:29.105556  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-25
I0513 18:40:29.105572  107570 scheduler.go:452] Attempting to schedule pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-25
I0513 18:40:29.105682  107570 factory.go:649] Unable to schedule preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-25: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 18:40:29.105728  107570 factory.go:720] Updating pod condition for preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-25 to (PodScheduled==False, Reason=Unschedulable)
I0513 18:40:29.107190  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-25: (1.104189ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54050]
I0513 18:40:29.107838  107570 wrap.go:47] PUT /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-25/status: (1.835026ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54052]
I0513 18:40:29.107998  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events: (1.677929ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54054]
I0513 18:40:29.109417  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-25: (1.149154ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54052]
I0513 18:40:29.109741  107570 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 18:40:29.109962  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-26
I0513 18:40:29.109985  107570 scheduler.go:452] Attempting to schedule pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-26
I0513 18:40:29.110113  107570 factory.go:649] Unable to schedule preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-26: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 18:40:29.110163  107570 factory.go:720] Updating pod condition for preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-26 to (PodScheduled==False, Reason=Unschedulable)
I0513 18:40:29.111411  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-26: (999.014µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54050]
I0513 18:40:29.112623  107570 wrap.go:47] PUT /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-26/status: (2.243659ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54052]
I0513 18:40:29.112926  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events: (2.234064ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54056]
I0513 18:40:29.114074  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-26: (971.399µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54052]
I0513 18:40:29.114356  107570 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 18:40:29.114537  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-27
I0513 18:40:29.114553  107570 scheduler.go:452] Attempting to schedule pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-27
I0513 18:40:29.114659  107570 factory.go:649] Unable to schedule preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-27: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 18:40:29.114701  107570 factory.go:720] Updating pod condition for preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-27 to (PodScheduled==False, Reason=Unschedulable)
I0513 18:40:29.116945  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-27: (1.997852ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54050]
I0513 18:40:29.117778  107570 wrap.go:47] PUT /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-27/status: (2.83722ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54056]
I0513 18:40:29.118595  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events: (3.344474ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54058]
I0513 18:40:29.119348  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-27: (1.131152ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54056]
I0513 18:40:29.119611  107570 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 18:40:29.119774  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-28
I0513 18:40:29.119796  107570 scheduler.go:452] Attempting to schedule pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-28
I0513 18:40:29.119920  107570 factory.go:649] Unable to schedule preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-28: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 18:40:29.119964  107570 factory.go:720] Updating pod condition for preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-28 to (PodScheduled==False, Reason=Unschedulable)
I0513 18:40:29.122775  107570 wrap.go:47] PUT /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-28/status: (2.588387ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54058]
I0513 18:40:29.123233  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-28: (2.750196ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54050]
E0513 18:40:29.123517  107570 factory.go:686] pod is already present in the activeQ
I0513 18:40:29.123701  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events: (2.989462ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54060]
I0513 18:40:29.124252  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-28: (1.06708ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54058]
I0513 18:40:29.124538  107570 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 18:40:29.124699  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-29
I0513 18:40:29.124742  107570 scheduler.go:452] Attempting to schedule pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-29
I0513 18:40:29.124868  107570 factory.go:649] Unable to schedule preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-29: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 18:40:29.124913  107570 factory.go:720] Updating pod condition for preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-29 to (PodScheduled==False, Reason=Unschedulable)
I0513 18:40:29.127711  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-29: (2.522933ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54050]
I0513 18:40:29.129311  107570 wrap.go:47] PUT /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-29/status: (4.155384ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54060]
I0513 18:40:29.130049  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events: (2.515238ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54062]
I0513 18:40:29.130751  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-29: (1.037116ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54060]
I0513 18:40:29.131174  107570 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 18:40:29.131323  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-30
I0513 18:40:29.131338  107570 scheduler.go:452] Attempting to schedule pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-30
I0513 18:40:29.131467  107570 factory.go:649] Unable to schedule preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-30: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 18:40:29.131517  107570 factory.go:720] Updating pod condition for preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-30 to (PodScheduled==False, Reason=Unschedulable)
I0513 18:40:29.134317  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-30: (2.539325ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54050]
I0513 18:40:29.135909  107570 wrap.go:47] PUT /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-30/status: (4.087887ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54062]
I0513 18:40:29.135945  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events: (3.790656ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54064]
I0513 18:40:29.137479  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-30: (1.097886ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54062]
I0513 18:40:29.137789  107570 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 18:40:29.138025  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-31
I0513 18:40:29.138070  107570 scheduler.go:452] Attempting to schedule pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-31
I0513 18:40:29.138205  107570 factory.go:649] Unable to schedule preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-31: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 18:40:29.138265  107570 factory.go:720] Updating pod condition for preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-31 to (PodScheduled==False, Reason=Unschedulable)
I0513 18:40:29.139634  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-31: (1.128116ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54062]
I0513 18:40:29.140143  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events: (1.631451ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54050]
I0513 18:40:29.141428  107570 wrap.go:47] PUT /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-31/status: (2.620503ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54066]
I0513 18:40:29.143090  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-31: (1.200745ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54050]
I0513 18:40:29.143358  107570 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 18:40:29.143534  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-32
I0513 18:40:29.143556  107570 scheduler.go:452] Attempting to schedule pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-32
I0513 18:40:29.143698  107570 factory.go:649] Unable to schedule preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-32: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 18:40:29.143749  107570 factory.go:720] Updating pod condition for preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-32 to (PodScheduled==False, Reason=Unschedulable)
I0513 18:40:29.145161  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-32: (1.113194ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54062]
I0513 18:40:29.145913  107570 wrap.go:47] PUT /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-32/status: (1.78909ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54050]
I0513 18:40:29.146798  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events: (2.339062ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54068]
I0513 18:40:29.147974  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-32: (1.093872ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54050]
I0513 18:40:29.148233  107570 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 18:40:29.148390  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-33
I0513 18:40:29.148410  107570 scheduler.go:452] Attempting to schedule pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-33
I0513 18:40:29.148568  107570 factory.go:649] Unable to schedule preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-33: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 18:40:29.148626  107570 factory.go:720] Updating pod condition for preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-33 to (PodScheduled==False, Reason=Unschedulable)
I0513 18:40:29.151021  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-33: (2.010218ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54068]
I0513 18:40:29.151980  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events: (2.955732ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54062]
I0513 18:40:29.153181  107570 wrap.go:47] PUT /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-33/status: (1.860642ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54068]
I0513 18:40:29.154669  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-33: (1.028672ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54062]
I0513 18:40:29.154987  107570 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 18:40:29.155236  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-34
I0513 18:40:29.155256  107570 scheduler.go:452] Attempting to schedule pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-34
I0513 18:40:29.155357  107570 factory.go:649] Unable to schedule preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-34: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 18:40:29.155455  107570 factory.go:720] Updating pod condition for preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-34 to (PodScheduled==False, Reason=Unschedulable)
I0513 18:40:29.157530  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-34: (1.509259ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54070]
I0513 18:40:29.158064  107570 wrap.go:47] PUT /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-34/status: (2.333255ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54062]
I0513 18:40:29.158556  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events: (2.350871ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54072]
I0513 18:40:29.159695  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-34: (1.093678ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54062]
I0513 18:40:29.160079  107570 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 18:40:29.160262  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-35
I0513 18:40:29.160277  107570 scheduler.go:452] Attempting to schedule pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-35
I0513 18:40:29.160366  107570 factory.go:649] Unable to schedule preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-35: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 18:40:29.160412  107570 factory.go:720] Updating pod condition for preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-35 to (PodScheduled==False, Reason=Unschedulable)
I0513 18:40:29.162307  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-35: (1.596326ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54070]
I0513 18:40:29.163313  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events: (2.144029ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54074]
I0513 18:40:29.163475  107570 wrap.go:47] PUT /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-35/status: (2.819199ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54072]
I0513 18:40:29.165271  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-35: (1.253352ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54074]
I0513 18:40:29.165558  107570 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 18:40:29.165790  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-36
I0513 18:40:29.165827  107570 scheduler.go:452] Attempting to schedule pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-36
I0513 18:40:29.165984  107570 factory.go:649] Unable to schedule preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-36: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 18:40:29.166043  107570 factory.go:720] Updating pod condition for preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-36 to (PodScheduled==False, Reason=Unschedulable)
I0513 18:40:29.167657  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/preemptor-pod: (1.177528ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54074]
I0513 18:40:29.168974  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-36: (2.50127ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54070]
I0513 18:40:29.169205  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events: (1.648144ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54078]
I0513 18:40:29.170476  107570 wrap.go:47] PUT /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-36/status: (2.547531ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54076]
I0513 18:40:29.175521  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-36: (4.468419ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54070]
I0513 18:40:29.175973  107570 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 18:40:29.176262  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-37
I0513 18:40:29.176286  107570 scheduler.go:452] Attempting to schedule pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-37
I0513 18:40:29.176405  107570 factory.go:649] Unable to schedule preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-37: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 18:40:29.177352  107570 factory.go:720] Updating pod condition for preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-37 to (PodScheduled==False, Reason=Unschedulable)
I0513 18:40:29.179987  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-37: (2.25688ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54070]
I0513 18:40:29.180230  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events: (2.176544ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54082]
I0513 18:40:29.180617  107570 wrap.go:47] PUT /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-37/status: (2.551037ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54074]
I0513 18:40:29.182333  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-37: (1.171333ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54082]
I0513 18:40:29.182616  107570 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 18:40:29.182861  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-38
I0513 18:40:29.182883  107570 scheduler.go:452] Attempting to schedule pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-38
I0513 18:40:29.183048  107570 factory.go:649] Unable to schedule preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-38: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 18:40:29.183095  107570 factory.go:720] Updating pod condition for preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-38 to (PodScheduled==False, Reason=Unschedulable)
I0513 18:40:29.185185  107570 wrap.go:47] PUT /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-38/status: (1.848889ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54082]
I0513 18:40:29.186310  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events: (2.52135ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54084]
I0513 18:40:29.187571  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-38: (1.116951ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54082]
I0513 18:40:29.187596  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-38: (3.981687ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54070]
I0513 18:40:29.187854  107570 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
E0513 18:40:29.187899  107570 factory.go:686] pod is already present in the activeQ
I0513 18:40:29.188057  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-39
I0513 18:40:29.188073  107570 scheduler.go:452] Attempting to schedule pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-39
I0513 18:40:29.188185  107570 factory.go:649] Unable to schedule preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-39: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 18:40:29.188226  107570 factory.go:720] Updating pod condition for preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-39 to (PodScheduled==False, Reason=Unschedulable)
I0513 18:40:29.189796  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-39: (1.101855ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54086]
I0513 18:40:29.190075  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events: (1.586376ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54084]
I0513 18:40:29.191295  107570 wrap.go:47] PUT /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-39/status: (2.809286ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54082]
I0513 18:40:29.201227  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-39: (8.994756ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54084]
I0513 18:40:29.216936  107570 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 18:40:29.217160  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-40
I0513 18:40:29.217174  107570 scheduler.go:452] Attempting to schedule pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-40
I0513 18:40:29.217333  107570 factory.go:649] Unable to schedule preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-40: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 18:40:29.217386  107570 factory.go:720] Updating pod condition for preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-40 to (PodScheduled==False, Reason=Unschedulable)
I0513 18:40:29.221058  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events: (2.067665ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54088]
I0513 18:40:29.221123  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-40: (3.228169ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54086]
I0513 18:40:29.224242  107570 wrap.go:47] PUT /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-40/status: (4.57949ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54084]
I0513 18:40:29.226090  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-40: (1.149717ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54086]
I0513 18:40:29.226362  107570 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 18:40:29.226563  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-41
I0513 18:40:29.226580  107570 scheduler.go:452] Attempting to schedule pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-41
I0513 18:40:29.226701  107570 factory.go:649] Unable to schedule preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-41: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 18:40:29.226750  107570 factory.go:720] Updating pod condition for preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-41 to (PodScheduled==False, Reason=Unschedulable)
I0513 18:40:29.229292  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events: (1.710469ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54090]
I0513 18:40:29.229803  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-41: (2.414045ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54088]
E0513 18:40:29.230143  107570 factory.go:686] pod is already present in the activeQ
I0513 18:40:29.230249  107570 wrap.go:47] PUT /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-41/status: (2.838116ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54086]
I0513 18:40:29.232054  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-41: (1.259906ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54088]
I0513 18:40:29.232324  107570 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 18:40:29.232604  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-15
I0513 18:40:29.232624  107570 scheduler.go:452] Attempting to schedule pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-15
I0513 18:40:29.232738  107570 factory.go:649] Unable to schedule preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-15: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 18:40:29.232797  107570 factory.go:720] Updating pod condition for preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-15 to (PodScheduled==False, Reason=Unschedulable)
I0513 18:40:29.235533  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-15: (1.050366ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54092]
I0513 18:40:29.235718  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-15: (1.496954ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54088]
I0513 18:40:29.236042  107570 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 18:40:29.236362  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-42
I0513 18:40:29.236411  107570 scheduler.go:452] Attempting to schedule pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-42
I0513 18:40:29.236452  107570 wrap.go:47] PATCH /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events/ppod-15.159e5222736e1eda: (2.178192ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54090]
I0513 18:40:29.236562  107570 factory.go:649] Unable to schedule preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-42: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 18:40:29.236610  107570 factory.go:720] Updating pod condition for preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-42 to (PodScheduled==False, Reason=Unschedulable)
I0513 18:40:29.237762  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-42: (945.368µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54088]
I0513 18:40:29.238782  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events: (1.611593ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54094]
I0513 18:40:29.238864  107570 wrap.go:47] PUT /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-42/status: (1.865184ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54092]
I0513 18:40:29.240243  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-42: (934.076µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54094]
I0513 18:40:29.240553  107570 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 18:40:29.240730  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-43
I0513 18:40:29.240745  107570 scheduler.go:452] Attempting to schedule pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-43
I0513 18:40:29.240865  107570 factory.go:649] Unable to schedule preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-43: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 18:40:29.240904  107570 factory.go:720] Updating pod condition for preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-43 to (PodScheduled==False, Reason=Unschedulable)
I0513 18:40:29.242230  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-43: (1.044107ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54088]
I0513 18:40:29.242795  107570 wrap.go:47] PUT /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-43/status: (1.67516ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54094]
I0513 18:40:29.242968  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events: (1.486427ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54096]
I0513 18:40:29.244103  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-43: (934.339µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54094]
I0513 18:40:29.244411  107570 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 18:40:29.244550  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-44
I0513 18:40:29.244563  107570 scheduler.go:452] Attempting to schedule pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-44
I0513 18:40:29.244656  107570 factory.go:649] Unable to schedule preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-44: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 18:40:29.244704  107570 factory.go:720] Updating pod condition for preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-44 to (PodScheduled==False, Reason=Unschedulable)
I0513 18:40:29.246659  107570 wrap.go:47] PUT /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-44/status: (1.703968ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54096]
I0513 18:40:29.247179  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events: (1.859715ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54098]
I0513 18:40:29.247962  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-44: (2.993907ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54088]
E0513 18:40:29.248233  107570 factory.go:686] pod is already present in the activeQ
I0513 18:40:29.248442  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-44: (1.310869ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54096]
I0513 18:40:29.248717  107570 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 18:40:29.248903  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-45
I0513 18:40:29.248920  107570 scheduler.go:452] Attempting to schedule pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-45
I0513 18:40:29.249009  107570 factory.go:649] Unable to schedule preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-45: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 18:40:29.249049  107570 factory.go:720] Updating pod condition for preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-45 to (PodScheduled==False, Reason=Unschedulable)
I0513 18:40:29.251858  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-45: (2.248004ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54098]
I0513 18:40:29.252039  107570 wrap.go:47] PUT /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-45/status: (2.747486ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54088]
I0513 18:40:29.252260  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events: (2.571806ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54102]
I0513 18:40:29.253473  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-45: (1.004623ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54088]
I0513 18:40:29.253853  107570 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 18:40:29.254011  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-46
I0513 18:40:29.254050  107570 scheduler.go:452] Attempting to schedule pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-46
I0513 18:40:29.254135  107570 factory.go:649] Unable to schedule preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-46: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 18:40:29.254189  107570 factory.go:720] Updating pod condition for preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-46 to (PodScheduled==False, Reason=Unschedulable)
I0513 18:40:29.256549  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-46: (1.016667ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54098]
I0513 18:40:29.256860  107570 wrap.go:47] PUT /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-46/status: (2.451673ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54102]
I0513 18:40:29.258090  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events: (2.233151ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54104]
I0513 18:40:29.259086  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-46: (956.353µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54102]
I0513 18:40:29.259370  107570 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 18:40:29.259557  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-47
I0513 18:40:29.259576  107570 scheduler.go:452] Attempting to schedule pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-47
I0513 18:40:29.259687  107570 factory.go:649] Unable to schedule preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-47: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 18:40:29.259733  107570 factory.go:720] Updating pod condition for preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-47 to (PodScheduled==False, Reason=Unschedulable)
I0513 18:40:29.261057  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-47: (1.03968ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54104]
I0513 18:40:29.261752  107570 wrap.go:47] PUT /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-47/status: (1.771155ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54098]
I0513 18:40:29.261864  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events: (1.540424ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54106]
I0513 18:40:29.263179  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-47: (913.377µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54098]
I0513 18:40:29.263430  107570 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 18:40:29.263615  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-48
I0513 18:40:29.263642  107570 scheduler.go:452] Attempting to schedule pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-48
I0513 18:40:29.263801  107570 factory.go:649] Unable to schedule preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-48: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 18:40:29.263883  107570 factory.go:720] Updating pod condition for preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-48 to (PodScheduled==False, Reason=Unschedulable)
I0513 18:40:29.265057  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-48: (942.168µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54098]
I0513 18:40:29.265621  107570 wrap.go:47] PUT /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-48/status: (1.43086ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54104]
I0513 18:40:29.266949  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events: (2.393434ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54108]
I0513 18:40:29.269060  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-48: (1.816943ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54104]
I0513 18:40:29.269355  107570 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 18:40:29.269502  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-49
I0513 18:40:29.269519  107570 scheduler.go:452] Attempting to schedule pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-49
I0513 18:40:29.269580  107570 factory.go:649] Unable to schedule preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-49: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 18:40:29.269622  107570 factory.go:720] Updating pod condition for preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-49 to (PodScheduled==False, Reason=Unschedulable)
I0513 18:40:29.271001  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/preemptor-pod: (1.212622ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54108]
I0513 18:40:29.271277  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-49: (1.105098ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54110]
I0513 18:40:29.271909  107570 wrap.go:47] PUT /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-49/status: (1.790985ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54098]
I0513 18:40:29.272572  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events: (2.392722ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54112]
I0513 18:40:29.273282  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-49: (952.889µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54110]
I0513 18:40:29.273543  107570 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 18:40:29.273725  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-19
I0513 18:40:29.273739  107570 scheduler.go:452] Attempting to schedule pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-19
I0513 18:40:29.273810  107570 factory.go:649] Unable to schedule preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-19: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 18:40:29.273873  107570 factory.go:720] Updating pod condition for preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-19 to (PodScheduled==False, Reason=Unschedulable)
I0513 18:40:29.275036  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-19: (1.003365ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54112]
I0513 18:40:29.275094  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-19: (931.205µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54108]
I0513 18:40:29.275287  107570 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 18:40:29.275439  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-22
I0513 18:40:29.275457  107570 scheduler.go:452] Attempting to schedule pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-22
I0513 18:40:29.275568  107570 factory.go:649] Unable to schedule preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-22: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 18:40:29.275608  107570 factory.go:720] Updating pod condition for preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-22 to (PodScheduled==False, Reason=Unschedulable)
I0513 18:40:29.276224  107570 wrap.go:47] PATCH /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events/ppod-19.159e522275706b91: (1.624002ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54114]
I0513 18:40:29.276862  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-22: (1.088765ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54112]
I0513 18:40:29.278153  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-22: (1.237869ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54108]
I0513 18:40:29.278472  107570 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 18:40:29.278556  107570 wrap.go:47] PATCH /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events/ppod-22.159e5222766924d7: (1.463303ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54114]
I0513 18:40:29.278595  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-28
I0513 18:40:29.278614  107570 scheduler.go:452] Attempting to schedule pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-28
I0513 18:40:29.278725  107570 factory.go:649] Unable to schedule preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-28: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 18:40:29.278761  107570 factory.go:720] Updating pod condition for preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-28 to (PodScheduled==False, Reason=Unschedulable)
I0513 18:40:29.280069  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-28: (1.077878ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54112]
I0513 18:40:29.280106  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-28: (1.105149ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54108]
I0513 18:40:29.280344  107570 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 18:40:29.280525  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-38
I0513 18:40:29.280543  107570 scheduler.go:452] Attempting to schedule pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-38
I0513 18:40:29.280645  107570 factory.go:649] Unable to schedule preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-38: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 18:40:29.280765  107570 factory.go:720] Updating pod condition for preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-38 to (PodScheduled==False, Reason=Unschedulable)
I0513 18:40:29.281097  107570 wrap.go:47] PATCH /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events/ppod-28.159e5222787337ed: (1.561824ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54118]
I0513 18:40:29.281977  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-38: (918.782µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54112]
I0513 18:40:29.282054  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-38: (1.008949ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54108]
I0513 18:40:29.282299  107570 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 18:40:29.282432  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-41
I0513 18:40:29.282505  107570 scheduler.go:452] Attempting to schedule pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-41
I0513 18:40:29.282728  107570 factory.go:649] Unable to schedule preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-41: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 18:40:29.282775  107570 factory.go:720] Updating pod condition for preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-41 to (PodScheduled==False, Reason=Unschedulable)
I0513 18:40:29.283798  107570 wrap.go:47] PATCH /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events/ppod-38.159e52227c368699: (2.068752ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54118]
I0513 18:40:29.288162  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-41: (4.58457ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54108]
I0513 18:40:29.288367  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-41: (4.799745ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54112]
I0513 18:40:29.288482  107570 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 18:40:29.291111  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-44
I0513 18:40:29.291141  107570 scheduler.go:452] Attempting to schedule pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-44
I0513 18:40:29.291289  107570 factory.go:649] Unable to schedule preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-44: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 18:40:29.291338  107570 factory.go:720] Updating pod condition for preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-44 to (PodScheduled==False, Reason=Unschedulable)
I0513 18:40:29.292438  107570 wrap.go:47] PATCH /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events/ppod-41.159e52227ed0a0d7: (4.183659ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54118]
I0513 18:40:29.292777  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-44: (1.214469ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54112]
I0513 18:40:29.292784  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-44: (1.082959ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54108]
I0513 18:40:29.293075  107570 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 18:40:29.294626  107570 wrap.go:47] PATCH /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events/ppod-44.159e52227fe2aa19: (1.501993ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54112]
I0513 18:40:29.371005  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/preemptor-pod: (1.597045ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54108]
I0513 18:40:29.471545  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/preemptor-pod: (2.183918ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54108]
I0513 18:40:29.570943  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/preemptor-pod: (1.568738ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54108]
I0513 18:40:29.677744  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/preemptor-pod: (6.908839ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54108]
I0513 18:40:29.677889  107570 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0513 18:40:29.681791  107570 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0513 18:40:29.683055  107570 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0513 18:40:29.685178  107570 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0513 18:40:29.686231  107570 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0513 18:40:29.770938  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/preemptor-pod: (1.566279ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54108]
I0513 18:40:29.871054  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/preemptor-pod: (1.650693ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54108]
I0513 18:40:29.970949  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/preemptor-pod: (1.614369ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54108]
I0513 18:40:30.071039  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/preemptor-pod: (1.673609ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54108]
I0513 18:40:30.170738  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/preemptor-pod: (1.457339ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54108]
I0513 18:40:30.270963  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/preemptor-pod: (1.659915ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54108]
I0513 18:40:30.370953  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/preemptor-pod: (1.601161ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54108]
I0513 18:40:30.470764  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/preemptor-pod: (1.447931ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54108]
I0513 18:40:30.570952  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/preemptor-pod: (1.537906ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54108]
I0513 18:40:30.576383  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/preemptor-pod
I0513 18:40:30.576417  107570 scheduler.go:452] Attempting to schedule pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/preemptor-pod
I0513 18:40:30.576642  107570 scheduler_binder.go:256] AssumePodVolumes for pod "preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/preemptor-pod", node "node1"
I0513 18:40:30.576670  107570 scheduler_binder.go:266] AssumePodVolumes for pod "preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/preemptor-pod", node "node1": all PVCs bound and nothing to do
I0513 18:40:30.576729  107570 factory.go:711] Attempting to bind preemptor-pod to node1
I0513 18:40:30.576791  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-0
I0513 18:40:30.576830  107570 scheduler.go:452] Attempting to schedule pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-0
I0513 18:40:30.576965  107570 factory.go:649] Unable to schedule preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-0: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 18:40:30.577034  107570 factory.go:720] Updating pod condition for preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-0 to (PodScheduled==False, Reason=Unschedulable)
I0513 18:40:30.579078  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/preemptor-pod/binding: (2.01151ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54108]
I0513 18:40:30.579081  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-0: (1.596971ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54150]
I0513 18:40:30.579081  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-0: (1.861099ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54118]
I0513 18:40:30.579329  107570 scheduler.go:570] pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/preemptor-pod is bound successfully on node node1, 1 nodes evaluated, 1 nodes were found feasible
I0513 18:40:30.579385  107570 wrap.go:47] PATCH /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events/ppod-0.159e52226c3f148c: (1.570234ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54152]
I0513 18:40:30.579526  107570 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 18:40:30.579688  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-1
I0513 18:40:30.579706  107570 scheduler.go:452] Attempting to schedule pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-1
I0513 18:40:30.579869  107570 factory.go:649] Unable to schedule preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-1: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 18:40:30.579921  107570 factory.go:720] Updating pod condition for preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-1 to (PodScheduled==False, Reason=Unschedulable)
I0513 18:40:30.582316  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-1: (2.212179ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54108]
I0513 18:40:30.582365  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events: (2.57485ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54150]
I0513 18:40:30.582608  107570 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 18:40:30.582788  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-3
I0513 18:40:30.582804  107570 scheduler.go:452] Attempting to schedule pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-3
I0513 18:40:30.582975  107570 factory.go:649] Unable to schedule preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-3: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 18:40:30.582991  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-1: (2.601802ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54154]
I0513 18:40:30.583012  107570 factory.go:720] Updating pod condition for preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-3 to (PodScheduled==False, Reason=Unschedulable)
I0513 18:40:30.584320  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-3: (1.103571ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54108]
I0513 18:40:30.584331  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-3: (1.048125ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54154]
I0513 18:40:30.584981  107570 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 18:40:30.585213  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-4
I0513 18:40:30.585235  107570 scheduler.go:452] Attempting to schedule pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-4
I0513 18:40:30.585380  107570 factory.go:649] Unable to schedule preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-4: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 18:40:30.585422  107570 factory.go:720] Updating pod condition for preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-4 to (PodScheduled==False, Reason=Unschedulable)
I0513 18:40:30.586244  107570 wrap.go:47] PATCH /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events/ppod-1.159e52226caa3f43: (3.258415ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54150]
I0513 18:40:30.587098  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-4: (1.388049ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54154]
I0513 18:40:30.587159  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-4: (1.567988ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54108]
I0513 18:40:30.587410  107570 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 18:40:30.587554  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-5
I0513 18:40:30.587571  107570 scheduler.go:452] Attempting to schedule pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-5
I0513 18:40:30.587664  107570 factory.go:649] Unable to schedule preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-5: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 18:40:30.587708  107570 factory.go:720] Updating pod condition for preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-5 to (PodScheduled==False, Reason=Unschedulable)
I0513 18:40:30.590052  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-5: (998.926µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54154]
I0513 18:40:30.590143  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-5: (1.158787ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54108]
I0513 18:40:30.590406  107570 wrap.go:47] PATCH /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events/ppod-3.159e52226d5dcedc: (3.605594ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54150]
I0513 18:40:30.590557  107570 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 18:40:30.590746  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-7
I0513 18:40:30.590767  107570 scheduler.go:452] Attempting to schedule pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-7
I0513 18:40:30.590877  107570 factory.go:649] Unable to schedule preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-7: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 18:40:30.590916  107570 factory.go:720] Updating pod condition for preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-7 to (PodScheduled==False, Reason=Unschedulable)
I0513 18:40:30.592007  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-7: (950.633µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54154]
I0513 18:40:30.592249  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-7: (1.021066ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54156]
I0513 18:40:30.592296  107570 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 18:40:30.592450  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-8
I0513 18:40:30.592464  107570 scheduler.go:452] Attempting to schedule pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-8
I0513 18:40:30.592575  107570 factory.go:649] Unable to schedule preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-8: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 18:40:30.592606  107570 factory.go:720] Updating pod condition for preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-8 to (PodScheduled==False, Reason=Unschedulable)
I0513 18:40:30.593744  107570 wrap.go:47] PATCH /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events/ppod-4.159e52226db82b4e: (2.736458ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54108]
I0513 18:40:30.595464  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-8: (976.652µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54156]
I0513 18:40:30.596531  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-8: (2.109815ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54154]
I0513 18:40:30.596540  107570 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 18:40:30.596668  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-9
I0513 18:40:30.596689  107570 scheduler.go:452] Attempting to schedule pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-9
I0513 18:40:30.596787  107570 factory.go:649] Unable to schedule preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-9: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 18:40:30.596879  107570 factory.go:720] Updating pod condition for preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-9 to (PodScheduled==False, Reason=Unschedulable)
I0513 18:40:30.597528  107570 wrap.go:47] PATCH /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events/ppod-5.159e52226e51fc0e: (2.575469ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54108]
I0513 18:40:30.598324  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-9: (1.257332ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54154]
I0513 18:40:30.598758  107570 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 18:40:30.599163  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-10
I0513 18:40:30.599225  107570 scheduler.go:452] Attempting to schedule pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-10
I0513 18:40:30.599352  107570 factory.go:649] Unable to schedule preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-10: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 18:40:30.599394  107570 factory.go:720] Updating pod condition for preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-10 to (PodScheduled==False, Reason=Unschedulable)
I0513 18:40:30.602274  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-9: (5.217775ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54156]
I0513 18:40:30.602744  107570 wrap.go:47] PATCH /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events/ppod-7.159e52226f3309b4: (4.687638ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54108]
I0513 18:40:30.603170  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-10: (1.821169ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54158]
I0513 18:40:30.603287  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-10: (3.2722ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54154]
I0513 18:40:30.603645  107570 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 18:40:30.603984  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-2
I0513 18:40:30.604004  107570 scheduler.go:452] Attempting to schedule pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-2
I0513 18:40:30.605343  107570 factory.go:649] Unable to schedule preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-2: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 18:40:30.605457  107570 factory.go:720] Updating pod condition for preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-2 to (PodScheduled==False, Reason=Unschedulable)
I0513 18:40:30.607522  107570 wrap.go:47] PATCH /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events/ppod-8.159e522270529029: (4.066822ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54108]
I0513 18:40:30.610045  107570 wrap.go:47] PATCH /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events/ppod-9.159e522270a9bcd1: (1.788237ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54108]
I0513 18:40:30.610188  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-2: (2.634632ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54156]
I0513 18:40:30.610512  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-2: (4.324898ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54154]
I0513 18:40:30.610789  107570 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 18:40:30.610952  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-11
I0513 18:40:30.610975  107570 scheduler.go:452] Attempting to schedule pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-11
I0513 18:40:30.611081  107570 factory.go:649] Unable to schedule preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-11: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 18:40:30.611130  107570 factory.go:720] Updating pod condition for preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-11 to (PodScheduled==False, Reason=Unschedulable)
I0513 18:40:30.612336  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-11: (982.933µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54156]
I0513 18:40:30.612645  107570 wrap.go:47] PATCH /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events/ppod-10.159e52227117486c: (1.694303ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54108]
I0513 18:40:30.612677  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-11: (1.025544ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54164]
I0513 18:40:30.612792  107570 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 18:40:30.612986  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-12
I0513 18:40:30.613013  107570 scheduler.go:452] Attempting to schedule pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-12
I0513 18:40:30.613133  107570 factory.go:649] Unable to schedule preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-12: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 18:40:30.613184  107570 factory.go:720] Updating pod condition for preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-12 to (PodScheduled==False, Reason=Unschedulable)
I0513 18:40:30.617208  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-12: (2.608914ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54156]
I0513 18:40:30.617291  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-12: (2.497556ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54166]
I0513 18:40:30.617495  107570 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 18:40:30.617668  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-13
I0513 18:40:30.617683  107570 scheduler.go:452] Attempting to schedule pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-13
I0513 18:40:30.617767  107570 factory.go:649] Unable to schedule preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-13: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 18:40:30.617866  107570 factory.go:720] Updating pod condition for preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-13 to (PodScheduled==False, Reason=Unschedulable)
I0513 18:40:30.618404  107570 wrap.go:47] PATCH /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events/ppod-2.159e52226cfd9853: (5.140326ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54108]
I0513 18:40:30.619169  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-13: (972.695µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54156]
I0513 18:40:30.619313  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-13: (1.162239ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54166]
I0513 18:40:30.620525  107570 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 18:40:30.620679  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-14
I0513 18:40:30.620734  107570 scheduler.go:452] Attempting to schedule pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-14
I0513 18:40:30.620877  107570 factory.go:649] Unable to schedule preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-14: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 18:40:30.620917  107570 factory.go:720] Updating pod condition for preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-14 to (PodScheduled==False, Reason=Unschedulable)
I0513 18:40:30.621912  107570 wrap.go:47] PATCH /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events/ppod-11.159e522271f0cceb: (2.829021ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54108]
I0513 18:40:30.622064  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-14: (921.086µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54156]
I0513 18:40:30.622248  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-14: (1.144117ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54166]
I0513 18:40:30.622476  107570 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 18:40:30.622641  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-16
I0513 18:40:30.622668  107570 scheduler.go:452] Attempting to schedule pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-16
I0513 18:40:30.622768  107570 factory.go:649] Unable to schedule preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-16: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 18:40:30.622805  107570 factory.go:720] Updating pod condition for preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-16 to (PodScheduled==False, Reason=Unschedulable)
I0513 18:40:30.624044  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-16: (952.286µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54108]
I0513 18:40:30.624295  107570 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 18:40:30.624421  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-16: (1.149079ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54168]
I0513 18:40:30.624490  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-17
I0513 18:40:30.624509  107570 scheduler.go:452] Attempting to schedule pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-17
I0513 18:40:30.624538  107570 wrap.go:47] PATCH /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events/ppod-12.159e522272442bf7: (2.105516ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54156]
I0513 18:40:30.624684  107570 factory.go:649] Unable to schedule preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-17: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 18:40:30.625353  107570 factory.go:720] Updating pod condition for preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-17 to (PodScheduled==False, Reason=Unschedulable)
I0513 18:40:30.626859  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-17: (1.162055ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54108]
I0513 18:40:30.626990  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-17: (1.353899ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54168]
I0513 18:40:30.628682  107570 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 18:40:30.629810  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-18
I0513 18:40:30.629903  107570 scheduler.go:452] Attempting to schedule pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-18
I0513 18:40:30.630278  107570 factory.go:649] Unable to schedule preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-18: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 18:40:30.630340  107570 factory.go:720] Updating pod condition for preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-18 to (PodScheduled==False, Reason=Unschedulable)
I0513 18:40:30.631006  107570 wrap.go:47] PATCH /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events/ppod-13.159e5222728fac3e: (4.843566ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54170]
I0513 18:40:30.639802  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-18: (7.879637ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54108]
I0513 18:40:30.644852  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-18: (13.37765ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54168]
I0513 18:40:30.645892  107570 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 18:40:30.649578  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-6
I0513 18:40:30.649602  107570 scheduler.go:452] Attempting to schedule pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-6
I0513 18:40:30.649989  107570 factory.go:649] Unable to schedule preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-6: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 18:40:30.650059  107570 factory.go:720] Updating pod condition for preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-6 to (PodScheduled==False, Reason=Unschedulable)
I0513 18:40:30.654594  107570 wrap.go:47] PATCH /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events/ppod-14.159e522273218b83: (16.953419ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54170]
I0513 18:40:30.655224  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-6: (4.462923ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54108]
I0513 18:40:30.656872  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-6: (6.507501ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54168]
I0513 18:40:30.657347  107570 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 18:40:30.657772  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-20
I0513 18:40:30.657890  107570 scheduler.go:452] Attempting to schedule pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-20
I0513 18:40:30.658305  107570 factory.go:649] Unable to schedule preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-20: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 18:40:30.658755  107570 factory.go:720] Updating pod condition for preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-20 to (PodScheduled==False, Reason=Unschedulable)
I0513 18:40:30.661568  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-20: (2.507541ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54170]
I0513 18:40:30.661890  107570 wrap.go:47] PATCH /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events/ppod-16.159e522273e543bf: (4.600297ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54108]
I0513 18:40:30.661910  107570 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 18:40:30.662151  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-21
I0513 18:40:30.662186  107570 scheduler.go:452] Attempting to schedule pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-21
I0513 18:40:30.663827  107570 factory.go:649] Unable to schedule preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-21: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 18:40:30.663929  107570 factory.go:720] Updating pod condition for preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-21 to (PodScheduled==False, Reason=Unschedulable)
I0513 18:40:30.664152  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-20: (3.833706ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54172]
I0513 18:40:30.665725  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-21: (1.374257ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54170]
I0513 18:40:30.667182  107570 wrap.go:47] PATCH /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events/ppod-17.159e5222746c6220: (2.538392ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54172]
I0513 18:40:30.668230  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-21: (4.048795ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54108]
I0513 18:40:30.668768  107570 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 18:40:30.669114  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-23
I0513 18:40:30.669136  107570 scheduler.go:452] Attempting to schedule pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-23
I0513 18:40:30.669339  107570 factory.go:649] Unable to schedule preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-23: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 18:40:30.669417  107570 factory.go:720] Updating pod condition for preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-23 to (PodScheduled==False, Reason=Unschedulable)
I0513 18:40:30.670046  107570 wrap.go:47] PATCH /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events/ppod-18.159e522274c5eb00: (1.983872ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54170]
I0513 18:40:30.672030  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/preemptor-pod: (2.799244ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54108]
I0513 18:40:30.672067  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-23: (2.345196ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54174]
I0513 18:40:30.672319  107570 preemption_test.go:583] Check unschedulable pods still exists and were never scheduled...
I0513 18:40:30.673906  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-23: (2.168628ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54176]
I0513 18:40:30.675512  107570 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 18:40:30.675622  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-0: (1.942309ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54108]
I0513 18:40:30.675936  107570 wrap.go:47] PATCH /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events/ppod-6.159e52226eaa933c: (3.998145ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54170]
I0513 18:40:30.675989  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-24
I0513 18:40:30.676028  107570 scheduler.go:452] Attempting to schedule pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-24
I0513 18:40:30.676130  107570 factory.go:649] Unable to schedule preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-24: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 18:40:30.676196  107570 factory.go:720] Updating pod condition for preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-24 to (PodScheduled==False, Reason=Unschedulable)
I0513 18:40:30.677835  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-24: (1.353252ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54174]
I0513 18:40:30.678267  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-1: (1.891793ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54176]
I0513 18:40:30.678261  107570 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 18:40:30.679594  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-24: (2.701332ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54178]
I0513 18:40:30.679938  107570 wrap.go:47] PATCH /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events/ppod-20.159e522275dea2a1: (2.834967ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54180]
I0513 18:40:30.680841  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-2: (2.032829ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54176]
I0513 18:40:30.682089  107570 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0513 18:40:30.683369  107570 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0513 18:40:30.686116  107570 wrap.go:47] PATCH /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events/ppod-21.159e52227625cbd9: (5.301483ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54178]
I0513 18:40:30.686136  107570 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0513 18:40:30.689332  107570 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0513 18:40:30.689808  107570 wrap.go:47] PATCH /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events/ppod-23.159e522276e2abcf: (3.013564ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54178]
I0513 18:40:30.692207  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-3: (10.87159ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54176]
I0513 18:40:30.696290  107570 wrap.go:47] PATCH /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events/ppod-24.159e52227733159b: (5.765509ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54178]
I0513 18:40:30.697185  107570 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0513 18:40:30.697316  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-4: (3.444176ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54176]
I0513 18:40:30.697505  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-25
I0513 18:40:30.697545  107570 scheduler.go:452] Attempting to schedule pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-25
I0513 18:40:30.697793  107570 factory.go:649] Unable to schedule preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-25: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 18:40:30.697966  107570 factory.go:720] Updating pod condition for preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-25 to (PodScheduled==False, Reason=Unschedulable)
I0513 18:40:30.700174  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-25: (1.79904ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54174]
I0513 18:40:30.700354  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-5: (2.417899ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54178]
I0513 18:40:30.700478  107570 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 18:40:30.701408  107570 wrap.go:47] PATCH /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events/ppod-25.159e52227799f920: (2.178046ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54184]
I0513 18:40:30.703552  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-25: (3.324706ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54182]
I0513 18:40:30.703709  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-6: (2.91098ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54178]
I0513 18:40:30.705287  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-7: (1.112574ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54184]
I0513 18:40:30.708285  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-8: (1.191169ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54184]
I0513 18:40:30.709947  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-26
I0513 18:40:30.709988  107570 scheduler.go:452] Attempting to schedule pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-26
I0513 18:40:30.710135  107570 factory.go:649] Unable to schedule preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-26: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 18:40:30.710205  107570 factory.go:720] Updating pod condition for preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-26 to (PodScheduled==False, Reason=Unschedulable)
I0513 18:40:30.710989  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-9: (2.195111ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54184]
I0513 18:40:30.711698  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-26: (1.196314ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54174]
I0513 18:40:30.712031  107570 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 18:40:30.712519  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-26: (1.646963ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54188]
I0513 18:40:30.712548  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-27
I0513 18:40:30.712564  107570 scheduler.go:452] Attempting to schedule pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-27
I0513 18:40:30.712761  107570 factory.go:649] Unable to schedule preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-27: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 18:40:30.712812  107570 factory.go:720] Updating pod condition for preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-27 to (PodScheduled==False, Reason=Unschedulable)
I0513 18:40:30.713294  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-10: (1.661815ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54192]
I0513 18:40:30.714778  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-27: (1.462418ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54190]
I0513 18:40:30.715063  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-11: (1.207191ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54192]
I0513 18:40:30.715137  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-27: (1.825025ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54174]
I0513 18:40:30.715443  107570 wrap.go:47] PATCH /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events/ppod-26.159e522277ddb34b: (2.86041ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54184]
I0513 18:40:30.715564  107570 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 18:40:30.716757  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-12: (1.178903ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54192]
I0513 18:40:30.717354  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-29
I0513 18:40:30.717378  107570 scheduler.go:452] Attempting to schedule pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-29
I0513 18:40:30.717511  107570 factory.go:649] Unable to schedule preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-29: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 18:40:30.717575  107570 factory.go:720] Updating pod condition for preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-29 to (PodScheduled==False, Reason=Unschedulable)
I0513 18:40:30.718520  107570 wrap.go:47] PATCH /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events/ppod-27.159e52227822fa63: (2.253571ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54184]
I0513 18:40:30.719438  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-13: (1.241058ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54192]
I0513 18:40:30.719728  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-29: (1.963347ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54190]
I0513 18:40:30.720019  107570 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 18:40:30.720375  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-29: (2.345953ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54198]
I0513 18:40:30.721296  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-14: (1.135284ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54190]
I0513 18:40:30.721924  107570 wrap.go:47] PATCH /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events/ppod-29.159e522278bec0f7: (1.846307ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54192]
I0513 18:40:30.721981  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-30
I0513 18:40:30.722017  107570 scheduler.go:452] Attempting to schedule pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-30
I0513 18:40:30.722201  107570 factory.go:649] Unable to schedule preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-30: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 18:40:30.722263  107570 factory.go:720] Updating pod condition for preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-30 to (PodScheduled==False, Reason=Unschedulable)
I0513 18:40:30.723694  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-15: (1.825936ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54198]
I0513 18:40:30.723914  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-30: (1.436298ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54184]
I0513 18:40:30.724213  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-30: (1.39752ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54192]
I0513 18:40:30.724456  107570 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 18:40:30.724636  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-31
I0513 18:40:30.724674  107570 scheduler.go:452] Attempting to schedule pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-31
I0513 18:40:30.724795  107570 factory.go:649] Unable to schedule preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-31: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 18:40:30.724891  107570 factory.go:720] Updating pod condition for preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-31 to (PodScheduled==False, Reason=Unschedulable)
I0513 18:40:30.726698  107570 wrap.go:47] PATCH /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events/ppod-30.159e522279237f00: (3.576183ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54200]
I0513 18:40:30.727861  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-31: (2.632491ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54198]
I0513 18:40:30.728235  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-16: (3.915236ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54184]
I0513 18:40:30.728398  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-31: (3.100141ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54192]
I0513 18:40:30.728735  107570 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 18:40:30.728937  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-32
I0513 18:40:30.728957  107570 scheduler.go:452] Attempting to schedule pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-32
I0513 18:40:30.730575  107570 factory.go:649] Unable to schedule preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-32: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 18:40:30.730628  107570 factory.go:720] Updating pod condition for preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-32 to (PodScheduled==False, Reason=Unschedulable)
I0513 18:40:30.731478  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-17: (2.716275ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54184]
I0513 18:40:30.732348  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-32: (1.18342ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54198]
I0513 18:40:30.732634  107570 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 18:40:30.732801  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-33
I0513 18:40:30.732851  107570 scheduler.go:452] Attempting to schedule pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-33
I0513 18:40:30.732967  107570 factory.go:649] Unable to schedule preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-33: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 18:40:30.733028  107570 factory.go:720] Updating pod condition for preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-33 to (PodScheduled==False, Reason=Unschedulable)
I0513 18:40:30.733109  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-32: (1.887858ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54202]
I0513 18:40:30.735692  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-18: (2.973643ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54184]
I0513 18:40:30.736028  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-33: (1.734311ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54204]
I0513 18:40:30.736147  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-33: (1.939029ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54198]
I0513 18:40:30.736955  107570 wrap.go:47] PATCH /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events/ppod-31.159e5222798a7781: (9.379923ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54200]
I0513 18:40:30.737324  107570 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 18:40:30.737871  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-34
I0513 18:40:30.737896  107570 scheduler.go:452] Attempting to schedule pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-34
I0513 18:40:30.738039  107570 factory.go:649] Unable to schedule preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-34: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 18:40:30.738107  107570 factory.go:720] Updating pod condition for preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-34 to (PodScheduled==False, Reason=Unschedulable)
I0513 18:40:30.738994  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-19: (2.570863ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54184]
I0513 18:40:30.741276  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-34: (2.639005ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54198]
I0513 18:40:30.741281  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-20: (1.723008ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54206]
I0513 18:40:30.741312  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-34: (2.415395ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54202]
I0513 18:40:30.741997  107570 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 18:40:30.742172  107570 wrap.go:47] PATCH /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events/ppod-32.159e522279de15f1: (2.88259ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54184]
I0513 18:40:30.742570  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-35
I0513 18:40:30.742590  107570 scheduler.go:452] Attempting to schedule pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-35
I0513 18:40:30.742678  107570 factory.go:649] Unable to schedule preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-35: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 18:40:30.742722  107570 factory.go:720] Updating pod condition for preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-35 to (PodScheduled==False, Reason=Unschedulable)
I0513 18:40:30.742967  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-21: (916.159µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54206]
I0513 18:40:30.744537  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-35: (1.448976ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54202]
I0513 18:40:30.744564  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-35: (1.393246ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54208]
I0513 18:40:30.744543  107570 wrap.go:47] PATCH /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events/ppod-33.159e52227a285385: (1.727967ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54184]
I0513 18:40:30.744796  107570 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 18:40:30.744942  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-36
I0513 18:40:30.744979  107570 scheduler.go:452] Attempting to schedule pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-36
I0513 18:40:30.745053  107570 factory.go:649] Unable to schedule preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-36: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 18:40:30.745095  107570 factory.go:720] Updating pod condition for preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-36 to (PodScheduled==False, Reason=Unschedulable)
I0513 18:40:30.745276  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-22: (1.82754ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54206]
I0513 18:40:30.748240  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-23: (2.454136ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54206]
I0513 18:40:30.748272  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-36: (2.701327ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54202]
I0513 18:40:30.748405  107570 wrap.go:47] PATCH /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events/ppod-34.159e52227a908327: (2.833199ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54208]
I0513 18:40:30.748507  107570 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 18:40:30.748747  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-37
I0513 18:40:30.749289  107570 scheduler.go:452] Attempting to schedule pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-37
I0513 18:40:30.749308  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-36: (862.445µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54206]
I0513 18:40:30.749464  107570 factory.go:649] Unable to schedule preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-37: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 18:40:30.749504  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-24: (849.433µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54210]
I0513 18:40:30.749508  107570 factory.go:720] Updating pod condition for preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-37 to (PodScheduled==False, Reason=Unschedulable)
I0513 18:40:30.752099  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-37: (997.43µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54212]
I0513 18:40:30.752138  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-25: (909.191µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54206]
I0513 18:40:30.752196  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-37: (1.089891ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54214]
I0513 18:40:30.752449  107570 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 18:40:30.752567  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-39
I0513 18:40:30.752631  107570 scheduler.go:452] Attempting to schedule pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-39
I0513 18:40:30.752754  107570 factory.go:649] Unable to schedule preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-39: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 18:40:30.752879  107570 factory.go:720] Updating pod condition for preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-39 to (PodScheduled==False, Reason=Unschedulable)
I0513 18:40:30.753561  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-26: (1.046632ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54212]
I0513 18:40:30.754116  107570 wrap.go:47] PATCH /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events/ppod-35.159e52227adc67fd: (5.138576ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54202]
I0513 18:40:30.754626  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-39: (1.530821ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54210]
I0513 18:40:30.754787  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-39: (1.424843ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54216]
I0513 18:40:30.755048  107570 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 18:40:30.755082  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-27: (961.532µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54212]
I0513 18:40:30.755172  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-40
I0513 18:40:30.755193  107570 scheduler.go:452] Attempting to schedule pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-40
I0513 18:40:30.755276  107570 factory.go:649] Unable to schedule preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-40: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 18:40:30.755315  107570 factory.go:720] Updating pod condition for preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-40 to (PodScheduled==False, Reason=Unschedulable)
I0513 18:40:30.756410  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-40: (805.414µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54218]
I0513 18:40:30.756767  107570 wrap.go:47] PATCH /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events/ppod-36.159e52227b323441: (1.894798ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54202]
I0513 18:40:30.756861  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-40: (1.400366ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54210]
I0513 18:40:30.757058  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-28: (1.669843ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54216]
I0513 18:40:30.759043  107570 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 18:40:30.759236  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-15
I0513 18:40:30.759259  107570 scheduler.go:452] Attempting to schedule pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-15
I0513 18:40:30.759368  107570 factory.go:649] Unable to schedule preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-15: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 18:40:30.759445  107570 factory.go:720] Updating pod condition for preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-15 to (PodScheduled==False, Reason=Unschedulable)
I0513 18:40:30.760136  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-29: (1.231124ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54202]
I0513 18:40:30.760242  107570 wrap.go:47] PATCH /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events/ppod-37.159e52227bd17abe: (1.342279ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54210]
I0513 18:40:30.761096  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-15: (1.104532ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54220]
I0513 18:40:30.761320  107570 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 18:40:30.761458  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-30: (1.011955ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54210]
I0513 18:40:30.761474  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-15: (1.876956ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54218]
I0513 18:40:30.761461  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-42
I0513 18:40:30.761536  107570 scheduler.go:452] Attempting to schedule pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-42
I0513 18:40:30.761619  107570 factory.go:649] Unable to schedule preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-42: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 18:40:30.761668  107570 factory.go:720] Updating pod condition for preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-42 to (PodScheduled==False, Reason=Unschedulable)
I0513 18:40:30.762382  107570 wrap.go:47] PATCH /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events/ppod-39.159e52227c84d577: (1.686307ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54202]
I0513 18:40:30.762769  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-42: (977.1µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54220]
I0513 18:40:30.762860  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-31: (1.068501ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54210]
I0513 18:40:30.762996  107570 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 18:40:30.763132  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-43
I0513 18:40:30.763154  107570 scheduler.go:452] Attempting to schedule pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-43
I0513 18:40:30.763245  107570 factory.go:649] Unable to schedule preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-43: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 18:40:30.763287  107570 factory.go:720] Updating pod condition for preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-43 to (PodScheduled==False, Reason=Unschedulable)
I0513 18:40:30.764248  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-43: (806.303µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54202]
I0513 18:40:30.764259  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-42: (2.185668ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54222]
I0513 18:40:30.764333  107570 wrap.go:47] PATCH /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events/ppod-40.159e52227e4195f5: (1.34066ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54220]
I0513 18:40:30.764335  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-32: (1.076945ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54210]
I0513 18:40:30.764486  107570 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 18:40:30.764723  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-45
I0513 18:40:30.764744  107570 scheduler.go:452] Attempting to schedule pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-45
I0513 18:40:30.764728  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-43: (800.762µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54224]
I0513 18:40:30.764889  107570 factory.go:649] Unable to schedule preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-45: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 18:40:30.764930  107570 factory.go:720] Updating pod condition for preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-45 to (PodScheduled==False, Reason=Unschedulable)
I0513 18:40:30.767277  107570 wrap.go:47] PATCH /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events/ppod-15.159e5222736e1eda: (2.210853ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54222]
I0513 18:40:30.767452  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-45: (2.180988ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54226]
I0513 18:40:30.767725  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-45: (2.646105ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54224]
I0513 18:40:30.768027  107570 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 18:40:30.768190  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-46
I0513 18:40:30.768208  107570 scheduler.go:452] Attempting to schedule pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-46
I0513 18:40:30.768277  107570 factory.go:649] Unable to schedule preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-46: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 18:40:30.768316  107570 factory.go:720] Updating pod condition for preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-46 to (PodScheduled==False, Reason=Unschedulable)
I0513 18:40:30.769083  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-33: (3.969301ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54202]
I0513 18:40:30.769428  107570 wrap.go:47] PATCH /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events/ppod-42.159e52227f670889: (1.356775ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54222]
I0513 18:40:30.770145  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-46: (1.414034ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54228]
I0513 18:40:30.770160  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-46: (1.686458ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54226]
I0513 18:40:30.770404  107570 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 18:40:30.770570  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-47
I0513 18:40:30.770591  107570 scheduler.go:452] Attempting to schedule pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-47
I0513 18:40:30.770686  107570 factory.go:649] Unable to schedule preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-47: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 18:40:30.770717  107570 factory.go:720] Updating pod condition for preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-47 to (PodScheduled==False, Reason=Unschedulable)
I0513 18:40:30.771326  107570 wrap.go:47] PATCH /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events/ppod-43.159e52227fa8a06c: (1.2974ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54222]
I0513 18:40:30.771670  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-47: (797.489µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54226]
I0513 18:40:30.771770  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-34: (2.191454ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54202]
I0513 18:40:30.771931  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-47: (1.055178ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54228]
I0513 18:40:30.771930  107570 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 18:40:30.772172  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-48
I0513 18:40:30.772194  107570 scheduler.go:452] Attempting to schedule pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-48
I0513 18:40:30.772269  107570 factory.go:649] Unable to schedule preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-48: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 18:40:30.772306  107570 factory.go:720] Updating pod condition for preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-48 to (PodScheduled==False, Reason=Unschedulable)
I0513 18:40:30.773614  107570 wrap.go:47] PATCH /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events/ppod-45.159e52228024da87: (1.647596ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54226]
I0513 18:40:30.773631  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-48: (1.189521ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54222]
I0513 18:40:30.773616  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-35: (1.289663ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54202]
I0513 18:40:30.773753  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-48: (1.110061ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54230]
I0513 18:40:30.773861  107570 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 18:40:30.774103  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-49
I0513 18:40:30.774117  107570 scheduler.go:452] Attempting to schedule pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-49
I0513 18:40:30.774246  107570 factory.go:649] Unable to schedule preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-49: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 18:40:30.774300  107570 factory.go:720] Updating pod condition for preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-49 to (PodScheduled==False, Reason=Unschedulable)
I0513 18:40:30.775715  107570 wrap.go:47] PATCH /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events/ppod-46.159e522280735093: (1.514999ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54222]
I0513 18:40:30.775783  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-49: (1.054228ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54232]
I0513 18:40:30.775848  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-36: (1.852122ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54202]
I0513 18:40:30.775861  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-49: (1.235426ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54234]
I0513 18:40:30.776045  107570 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 18:40:30.776171  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-19
I0513 18:40:30.776193  107570 scheduler.go:452] Attempting to schedule pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-19
I0513 18:40:30.776260  107570 factory.go:649] Unable to schedule preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-19: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 18:40:30.776297  107570 factory.go:720] Updating pod condition for preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-19 to (PodScheduled==False, Reason=Unschedulable)
I0513 18:40:30.777162  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-37: (978.115µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54202]
I0513 18:40:30.777597  107570 wrap.go:47] PATCH /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events/ppod-47.159e522280c7f797: (1.331814ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54222]
I0513 18:40:30.777599  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-19: (955.335µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54236]
I0513 18:40:30.777840  107570 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 18:40:30.778321  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-22
I0513 18:40:30.778339  107570 scheduler.go:452] Attempting to schedule pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-22
I0513 18:40:30.778357  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-19: (924.77µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54238]
I0513 18:40:30.778415  107570 factory.go:649] Unable to schedule preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-22: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 18:40:30.778448  107570 factory.go:720] Updating pod condition for preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-22 to (PodScheduled==False, Reason=Unschedulable)
I0513 18:40:30.778452  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-38: (867.446µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54202]
I0513 18:40:30.779594  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-22: (1.019867ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54238]
I0513 18:40:30.779621  107570 wrap.go:47] PATCH /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events/ppod-48.159e522281073080: (1.410798ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54222]
I0513 18:40:30.779661  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-39: (929.227µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54236]
I0513 18:40:30.779807  107570 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 18:40:30.779978  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-28
I0513 18:40:30.779993  107570 scheduler.go:452] Attempting to schedule pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-28
I0513 18:40:30.780062  107570 factory.go:649] Unable to schedule preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-28: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 18:40:30.780098  107570 factory.go:720] Updating pod condition for preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-28 to (PodScheduled==False, Reason=Unschedulable)
I0513 18:40:30.781027  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-40: (986.592µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54222]
I0513 18:40:30.781363  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-22: (2.719434ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54202]
I0513 18:40:30.781670  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-28: (1.230711ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54242]
I0513 18:40:30.781723  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-28: (1.307558ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54240]
I0513 18:40:30.781946  107570 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 18:40:30.782078  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-38
I0513 18:40:30.782097  107570 scheduler.go:452] Attempting to schedule pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-38
I0513 18:40:30.782185  107570 factory.go:649] Unable to schedule preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-38: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 18:40:30.782226  107570 factory.go:720] Updating pod condition for preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-38 to (PodScheduled==False, Reason=Unschedulable)
I0513 18:40:30.782298  107570 wrap.go:47] PATCH /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events/ppod-49.159e5222815ec2f9: (2.124558ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54238]
I0513 18:40:30.782664  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-41: (1.02603ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54202]
I0513 18:40:30.783349  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-38: (868.623µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54222]
I0513 18:40:30.783485  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-38: (1.053519ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54242]
I0513 18:40:30.783722  107570 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 18:40:30.783984  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-41
I0513 18:40:30.784010  107570 scheduler.go:452] Attempting to schedule pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-41
I0513 18:40:30.784195  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-42: (1.092773ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54238]
I0513 18:40:30.784253  107570 factory.go:649] Unable to schedule preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-41: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 18:40:30.784329  107570 factory.go:720] Updating pod condition for preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-41 to (PodScheduled==False, Reason=Unschedulable)
I0513 18:40:30.785704  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-41: (1.228049ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54242]
I0513 18:40:30.785933  107570 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 18:40:30.785936  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-41: (1.462299ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54222]
I0513 18:40:30.786055  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-44
I0513 18:40:30.786095  107570 scheduler.go:452] Attempting to schedule pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-44
I0513 18:40:30.786232  107570 factory.go:649] Unable to schedule preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-44: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 18:40:30.786289  107570 factory.go:720] Updating pod condition for preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-44 to (PodScheduled==False, Reason=Unschedulable)
I0513 18:40:30.787331  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-43: (2.056576ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54244]
I0513 18:40:30.787759  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-44: (1.036247ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54222]
I0513 18:40:30.787798  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-44: (1.064721ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54242]
I0513 18:40:30.788131  107570 wrap.go:47] PATCH /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events/ppod-19.159e522275706b91: (5.060192ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54202]
I0513 18:40:30.788323  107570 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 18:40:30.788805  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-44: (1.05725ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54244]
I0513 18:40:30.790297  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-45: (1.09697ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54244]
I0513 18:40:30.790326  107570 wrap.go:47] PATCH /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events/ppod-22.159e5222766924d7: (1.653853ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54222]
I0513 18:40:30.791506  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-46: (884.937µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54244]
I0513 18:40:30.792661  107570 wrap.go:47] PATCH /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events/ppod-28.159e5222787337ed: (1.889489ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54242]
I0513 18:40:30.792903  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-47: (946.892µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54244]
I0513 18:40:30.794233  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-48: (985.284µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54244]
I0513 18:40:30.794847  107570 wrap.go:47] PATCH /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events/ppod-38.159e52227c368699: (1.579032ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54242]
I0513 18:40:30.795586  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-49: (1.007923ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54244]
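The repeated "no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory" entries above are surfaced on each pod as a PodScheduled condition with Status=False and Reason=Unschedulable (the factory.go:720 "Updating pod condition" lines). A minimal client-go sketch of reading that condition back, assuming a placeholder kubeconfig path, namespace, and pod name, and the pre-context Get signature that client-go used in this period; this is an illustration, not code from the test:

// Hedged illustration (not from the test): reading back the condition described by
// "Updating pod condition ... (PodScheduled==False, Reason=Unschedulable)" above.
// The kubeconfig path, namespace, and pod name are placeholders.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// Get without a context.Context argument matches client-go of this era (mid-2019).
	pod, err := cs.CoreV1().Pods("preemption-race-example").Get("ppod-31", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodScheduled && cond.Status == corev1.ConditionFalse {
			fmt.Printf("%s unschedulable: reason=%s message=%q\n", pod.Name, cond.Reason, cond.Message)
		}
	}
}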
I0513 18:40:30.795865  107570 preemption_test.go:598] Cleaning up all pods...
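The "Cleaning up all pods..." entry from preemption_test.go marks the start of the test's cleanup phase: each ppod-N is deleted in turn, producing the DELETE calls and the scheduler's "Skip schedule deleting pod" entries below. A rough sketch of such a loop, assuming illustrative names and a zero grace period rather than the actual helper in preemption_test.go:

package cleanupsketch

import (
	"fmt"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// cleanupPods deletes the named pods in a namespace; each Delete call corresponds to one
// "DELETE .../pods/ppod-N" entry in the log. Function name and grace period are illustrative.
func cleanupPods(cs kubernetes.Interface, ns string, names []string) error {
	grace := int64(0)
	opts := &metav1.DeleteOptions{GracePeriodSeconds: &grace}
	for _, name := range names {
		if err := cs.CoreV1().Pods(ns).Delete(name, opts); err != nil && !apierrors.IsNotFound(err) {
			return fmt.Errorf("deleting pod %s/%s: %v", ns, name, err)
		}
	}
	return nil
}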
I0513 18:40:30.796599  107570 wrap.go:47] PATCH /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events/ppod-41.159e52227ed0a0d7: (1.230317ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54242]
I0513 18:40:30.798581  107570 wrap.go:47] PATCH /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events/ppod-44.159e52227fe2aa19: (1.364813ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54242]
I0513 18:40:30.798609  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-0
I0513 18:40:30.798711  107570 scheduler.go:448] Skip schedule deleting pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-0
I0513 18:40:30.799693  107570 wrap.go:47] DELETE /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-0: (3.559115ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54244]
I0513 18:40:30.800731  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events: (1.370372ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54242]
I0513 18:40:30.802106  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-1
I0513 18:40:30.802148  107570 scheduler.go:448] Skip schedule deleting pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-1
I0513 18:40:30.803508  107570 wrap.go:47] DELETE /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-1: (3.486147ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54244]
I0513 18:40:30.803627  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events: (1.233278ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54242]
I0513 18:40:30.807019  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-2
I0513 18:40:30.807050  107570 scheduler.go:448] Skip schedule deleting pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-2
I0513 18:40:30.808248  107570 wrap.go:47] DELETE /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-2: (4.402453ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54244]
I0513 18:40:30.808558  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events: (1.28498ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54242]
I0513 18:40:30.810995  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-3
I0513 18:40:30.811031  107570 scheduler.go:448] Skip schedule deleting pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-3
I0513 18:40:30.812128  107570 wrap.go:47] DELETE /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-3: (3.602543ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54244]
I0513 18:40:30.812709  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events: (1.463302ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54242]
I0513 18:40:30.814833  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-4
I0513 18:40:30.814876  107570 scheduler.go:448] Skip schedule deleting pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-4
I0513 18:40:30.816253  107570 wrap.go:47] DELETE /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-4: (3.594414ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54244]
I0513 18:40:30.816414  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events: (1.273826ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54242]
I0513 18:40:30.818787  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-5
I0513 18:40:30.818850  107570 scheduler.go:448] Skip schedule deleting pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-5
I0513 18:40:30.820124  107570 wrap.go:47] DELETE /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-5: (3.540681ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54244]
I0513 18:40:30.820237  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events: (1.170926ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54242]
I0513 18:40:30.822562  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-6
I0513 18:40:30.822633  107570 scheduler.go:448] Skip schedule deleting pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-6
I0513 18:40:30.823780  107570 wrap.go:47] DELETE /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-6: (3.340586ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54242]
I0513 18:40:30.824157  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events: (1.292038ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54244]
I0513 18:40:30.827871  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-7
I0513 18:40:30.827912  107570 scheduler.go:448] Skip schedule deleting pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-7
I0513 18:40:30.828790  107570 wrap.go:47] DELETE /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-7: (4.722149ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54242]
I0513 18:40:30.829642  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events: (1.492051ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54244]
I0513 18:40:30.831711  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-8
I0513 18:40:30.831756  107570 scheduler.go:448] Skip schedule deleting pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-8
I0513 18:40:30.833696  107570 wrap.go:47] DELETE /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-8: (4.521956ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54242]
I0513 18:40:30.833702  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events: (1.626076ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54244]
I0513 18:40:30.836381  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-9
I0513 18:40:30.836434  107570 scheduler.go:448] Skip schedule deleting pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-9
I0513 18:40:30.837559  107570 wrap.go:47] DELETE /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-9: (3.498814ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54242]
I0513 18:40:30.838107  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events: (1.417673ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54244]
I0513 18:40:30.840281  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-10
I0513 18:40:30.840328  107570 scheduler.go:448] Skip schedule deleting pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-10
I0513 18:40:30.841467  107570 wrap.go:47] DELETE /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-10: (3.439689ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54242]
I0513 18:40:30.842141  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events: (1.517713ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54244]
I0513 18:40:30.844207  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-11
I0513 18:40:30.844246  107570 scheduler.go:448] Skip schedule deleting pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-11
I0513 18:40:30.846098  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events: (1.578729ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54244]
I0513 18:40:30.846245  107570 wrap.go:47] DELETE /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-11: (4.350848ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54242]
I0513 18:40:30.850559  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-12
I0513 18:40:30.850610  107570 scheduler.go:448] Skip schedule deleting pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-12
I0513 18:40:30.851759  107570 wrap.go:47] DELETE /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-12: (3.835799ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54242]
I0513 18:40:30.852364  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events: (1.426401ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54244]
I0513 18:40:30.854532  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-13
I0513 18:40:30.854566  107570 scheduler.go:448] Skip schedule deleting pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-13
I0513 18:40:30.855773  107570 wrap.go:47] DELETE /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-13: (3.639508ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54242]
I0513 18:40:30.856233  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events: (1.41861ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54244]
I0513 18:40:30.858672  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-14
I0513 18:40:30.858717  107570 scheduler.go:448] Skip schedule deleting pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-14
I0513 18:40:30.859979  107570 wrap.go:47] DELETE /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-14: (3.505784ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54242]
I0513 18:40:30.860321  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events: (1.386688ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54244]
I0513 18:40:30.862563  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-15
I0513 18:40:30.862605  107570 scheduler.go:448] Skip schedule deleting pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-15
I0513 18:40:30.864702  107570 wrap.go:47] DELETE /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-15: (4.470691ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54242]
I0513 18:40:30.865626  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events: (2.111156ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54244]
I0513 18:40:30.868283  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-16
I0513 18:40:30.868316  107570 scheduler.go:448] Skip schedule deleting pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-16
I0513 18:40:30.869452  107570 wrap.go:47] DELETE /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-16: (4.405782ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54242]
I0513 18:40:30.870245  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events: (1.656194ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54244]
I0513 18:40:30.872692  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-17
I0513 18:40:30.872745  107570 scheduler.go:448] Skip schedule deleting pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-17
I0513 18:40:30.873890  107570 wrap.go:47] DELETE /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-17: (3.862232ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54242]
I0513 18:40:30.874536  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events: (1.520978ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54244]
I0513 18:40:30.876908  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-18
I0513 18:40:30.876958  107570 scheduler.go:448] Skip schedule deleting pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-18
I0513 18:40:30.878030  107570 wrap.go:47] DELETE /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-18: (3.812455ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54242]
I0513 18:40:30.878939  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events: (1.588757ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54244]
I0513 18:40:30.880992  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-19
I0513 18:40:30.881045  107570 scheduler.go:448] Skip schedule deleting pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-19
I0513 18:40:30.882288  107570 wrap.go:47] DELETE /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-19: (3.791653ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54242]
I0513 18:40:30.882795  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events: (1.516615ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54244]
I0513 18:40:30.885850  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-20
I0513 18:40:30.885887  107570 scheduler.go:448] Skip schedule deleting pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-20
I0513 18:40:30.888670  107570 wrap.go:47] DELETE /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-20: (5.933122ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54242]
I0513 18:40:30.888810  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events: (2.179865ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54244]
I0513 18:40:30.891768  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-21
I0513 18:40:30.891876  107570 scheduler.go:448] Skip schedule deleting pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-21
I0513 18:40:30.893008  107570 wrap.go:47] DELETE /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-21: (3.859194ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54242]
I0513 18:40:30.893909  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events: (1.715644ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54244]
I0513 18:40:30.896116  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-22
I0513 18:40:30.896149  107570 scheduler.go:448] Skip schedule deleting pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-22
I0513 18:40:30.898168  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events: (1.777241ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54244]
I0513 18:40:30.898260  107570 wrap.go:47] DELETE /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-22: (4.888639ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54242]
I0513 18:40:30.901142  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-23
I0513 18:40:30.901236  107570 scheduler.go:448] Skip schedule deleting pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-23
I0513 18:40:30.902640  107570 wrap.go:47] DELETE /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-23: (4.040264ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54242]
I0513 18:40:30.903353  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events: (1.800594ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54244]
I0513 18:40:30.906630  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-24
I0513 18:40:30.906805  107570 scheduler.go:448] Skip schedule deleting pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-24
I0513 18:40:30.908199  107570 wrap.go:47] DELETE /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-24: (4.637112ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54242]
I0513 18:40:30.908534  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events: (1.319793ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54244]
I0513 18:40:30.911965  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-25
I0513 18:40:30.912035  107570 scheduler.go:448] Skip schedule deleting pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-25
I0513 18:40:30.913797  107570 wrap.go:47] DELETE /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-25: (5.27144ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54242]
I0513 18:40:30.914119  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events: (1.824475ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54244]
I0513 18:40:30.917228  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-26
I0513 18:40:30.917282  107570 scheduler.go:448] Skip schedule deleting pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-26
I0513 18:40:30.918289  107570 wrap.go:47] DELETE /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-26: (4.019413ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54242]
I0513 18:40:30.919185  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events: (1.700481ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54244]
I0513 18:40:30.921231  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-27
I0513 18:40:30.921275  107570 scheduler.go:448] Skip schedule deleting pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-27
I0513 18:40:30.922541  107570 wrap.go:47] DELETE /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-27: (3.897221ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54242]
I0513 18:40:30.922953  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events: (1.42184ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54244]
I0513 18:40:30.925438  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-28
I0513 18:40:30.925479  107570 scheduler.go:448] Skip schedule deleting pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-28
I0513 18:40:30.926775  107570 wrap.go:47] DELETE /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-28: (3.875173ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54242]
I0513 18:40:30.927620  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events: (1.689519ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54244]
I0513 18:40:30.929717  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-29
I0513 18:40:30.929757  107570 scheduler.go:448] Skip schedule deleting pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-29
I0513 18:40:30.931017  107570 wrap.go:47] DELETE /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-29: (3.701169ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54242]
I0513 18:40:30.931589  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events: (1.520426ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54244]
I0513 18:40:30.933545  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-30
I0513 18:40:30.933621  107570 scheduler.go:448] Skip schedule deleting pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-30
I0513 18:40:30.935474  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events: (1.508938ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54244]
I0513 18:40:30.935481  107570 wrap.go:47] DELETE /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-30: (4.179197ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54242]
I0513 18:40:30.938357  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-31
I0513 18:40:30.938424  107570 scheduler.go:448] Skip schedule deleting pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-31
I0513 18:40:30.939522  107570 wrap.go:47] DELETE /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-31: (3.729631ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54242]
I0513 18:40:30.940126  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events: (1.392454ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54244]
I0513 18:40:30.942415  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-32
I0513 18:40:30.942460  107570 scheduler.go:448] Skip schedule deleting pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-32
I0513 18:40:30.943627  107570 wrap.go:47] DELETE /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-32: (3.598391ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54242]
I0513 18:40:30.944058  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events: (1.305203ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54244]
I0513 18:40:30.946847  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-33
I0513 18:40:30.947059  107570 scheduler.go:448] Skip schedule deleting pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-33
I0513 18:40:30.948476  107570 wrap.go:47] DELETE /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-33: (4.250132ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54242]
I0513 18:40:30.949116  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events: (1.549021ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54244]
I0513 18:40:30.951716  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-34
I0513 18:40:30.951880  107570 scheduler.go:448] Skip schedule deleting pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-34
I0513 18:40:30.953153  107570 wrap.go:47] DELETE /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-34: (4.29498ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54242]
I0513 18:40:30.954137  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events: (1.904245ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54244]
I0513 18:40:30.956436  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-35
I0513 18:40:30.956486  107570 scheduler.go:448] Skip schedule deleting pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-35
I0513 18:40:30.957665  107570 wrap.go:47] DELETE /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-35: (4.065764ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54242]
I0513 18:40:30.958696  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events: (1.923792ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54244]
I0513 18:40:30.960662  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-36
I0513 18:40:30.960739  107570 scheduler.go:448] Skip schedule deleting pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-36
I0513 18:40:30.962403  107570 wrap.go:47] DELETE /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-36: (4.337863ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54242]
I0513 18:40:30.962500  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events: (1.437179ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54244]
I0513 18:40:30.965391  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-37
I0513 18:40:30.965438  107570 scheduler.go:448] Skip schedule deleting pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-37
I0513 18:40:30.968495  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events: (2.801974ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54244]
I0513 18:40:30.968477  107570 wrap.go:47] DELETE /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-37: (5.762396ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54242]
I0513 18:40:30.972753  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-38
I0513 18:40:30.972830  107570 scheduler.go:448] Skip schedule deleting pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-38
I0513 18:40:30.974716  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events: (1.571553ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54244]
I0513 18:40:30.975262  107570 wrap.go:47] DELETE /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-38: (5.290234ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54242]
I0513 18:40:30.978310  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-39
I0513 18:40:30.978355  107570 scheduler.go:448] Skip schedule deleting pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-39
I0513 18:40:30.979460  107570 wrap.go:47] DELETE /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-39: (3.861751ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54242]
I0513 18:40:30.980249  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events: (1.633827ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54244]
I0513 18:40:30.982409  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-40
I0513 18:40:30.982455  107570 scheduler.go:448] Skip schedule deleting pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-40
I0513 18:40:30.983699  107570 wrap.go:47] DELETE /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-40: (3.923812ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54242]
I0513 18:40:30.984513  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events: (1.797027ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54244]
I0513 18:40:30.987833  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-41
I0513 18:40:30.987920  107570 scheduler.go:448] Skip schedule deleting pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-41
I0513 18:40:30.989695  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events: (1.504398ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54244]
I0513 18:40:30.990341  107570 wrap.go:47] DELETE /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-41: (6.267963ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54242]
I0513 18:40:30.993185  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-42
I0513 18:40:30.993265  107570 scheduler.go:448] Skip schedule deleting pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-42
I0513 18:40:30.994227  107570 wrap.go:47] DELETE /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-42: (3.44595ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54242]
I0513 18:40:30.995102  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events: (1.541131ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54244]
I0513 18:40:30.997072  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-43
I0513 18:40:30.997118  107570 scheduler.go:448] Skip schedule deleting pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-43
I0513 18:40:30.998667  107570 wrap.go:47] DELETE /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-43: (4.057171ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54242]
I0513 18:40:30.999027  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events: (1.658968ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54244]
I0513 18:40:31.001393  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-44
I0513 18:40:31.001438  107570 scheduler.go:448] Skip schedule deleting pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-44
I0513 18:40:31.002615  107570 wrap.go:47] DELETE /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-44: (3.626734ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54242]
I0513 18:40:31.003502  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events: (1.755777ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54244]
I0513 18:40:31.005795  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-45
I0513 18:40:31.005869  107570 scheduler.go:448] Skip schedule deleting pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-45
I0513 18:40:31.007863  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events: (1.743739ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54244]
I0513 18:40:31.009191  107570 wrap.go:47] DELETE /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-45: (6.197636ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54242]
I0513 18:40:31.011897  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-46
I0513 18:40:31.011949  107570 scheduler.go:448] Skip schedule deleting pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-46
I0513 18:40:31.013166  107570 wrap.go:47] DELETE /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-46: (3.663204ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54242]
I0513 18:40:31.013535  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events: (1.355779ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54244]
I0513 18:40:31.016320  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-47
I0513 18:40:31.016356  107570 scheduler.go:448] Skip schedule deleting pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-47
I0513 18:40:31.017350  107570 wrap.go:47] DELETE /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-47: (3.778627ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54242]
I0513 18:40:31.017786  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events: (1.193524ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54244]
I0513 18:40:31.020364  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-48
I0513 18:40:31.020410  107570 scheduler.go:448] Skip schedule deleting pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-48
I0513 18:40:31.021301  107570 wrap.go:47] DELETE /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-48: (3.684182ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54242]
I0513 18:40:31.022103  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events: (1.456716ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54244]
I0513 18:40:31.023948  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-49
I0513 18:40:31.023986  107570 scheduler.go:448] Skip schedule deleting pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-49
I0513 18:40:31.025084  107570 wrap.go:47] DELETE /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-49: (3.332691ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54242]
I0513 18:40:31.025849  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events: (1.562413ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54244]
I0513 18:40:31.026997  107570 wrap.go:47] DELETE /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/rpod-0: (962.414µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54242]
I0513 18:40:31.030933  107570 wrap.go:47] DELETE /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/rpod-1: (3.568005ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54242]
I0513 18:40:31.034912  107570 wrap.go:47] DELETE /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/preemptor-pod: (3.676134ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54242]
I0513 18:40:31.037261  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-0: (855.903µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54242]
I0513 18:40:31.039621  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-1: (819.354µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54242]
I0513 18:40:31.042234  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-2: (1.154082ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54242]
I0513 18:40:31.044641  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-3: (967.808µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54242]
I0513 18:40:31.047889  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-4: (1.625695ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54242]
I0513 18:40:31.050267  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-5: (876.109µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54242]
I0513 18:40:31.052765  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-6: (819.559µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54242]
I0513 18:40:31.055301  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-7: (986.322µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54242]
I0513 18:40:31.057661  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-8: (906.495µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54242]
I0513 18:40:31.060116  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-9: (879.117µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54242]
I0513 18:40:31.062391  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-10: (820.946µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54242]
I0513 18:40:31.065840  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-11: (1.771151ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54242]
I0513 18:40:31.068387  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-12: (860.74µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54242]
I0513 18:40:31.070847  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-13: (930.082µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54242]
I0513 18:40:31.073241  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-14: (827.989µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54242]
I0513 18:40:31.075614  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-15: (775.667µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54242]
I0513 18:40:31.078054  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-16: (973.222µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54242]
I0513 18:40:31.080401  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-17: (873.588µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54242]
I0513 18:40:31.082954  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-18: (884.942µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54242]
I0513 18:40:31.088406  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-19: (3.862272ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54242]
I0513 18:40:31.090863  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-20: (914.472µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54242]
I0513 18:40:31.093340  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-21: (967.407µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54242]
I0513 18:40:31.095680  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-22: (868.025µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54242]
I0513 18:40:31.098349  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-23: (916.867µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54242]
I0513 18:40:31.100635  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-24: (717.556µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54242]
I0513 18:40:31.102920  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-25: (792.734µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54242]
I0513 18:40:31.105222  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-26: (829.859µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54242]
I0513 18:40:31.109709  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-27: (2.028411ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54242]
I0513 18:40:31.112528  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-28: (1.306474ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54242]
I0513 18:40:31.115322  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-29: (871.006µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54242]
I0513 18:40:31.117810  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-30: (860.807µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54242]
I0513 18:40:31.120390  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-31: (960.109µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54242]
I0513 18:40:31.122942  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-32: (977.629µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54242]
I0513 18:40:31.125929  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-33: (1.122299ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54242]
I0513 18:40:31.128845  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-34: (1.375468ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54242]
I0513 18:40:31.131548  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-35: (1.119404ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54242]
I0513 18:40:31.134345  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-36: (1.208328ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54242]
I0513 18:40:31.136754  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-37: (938.659µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54242]
I0513 18:40:31.139278  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-38: (1.036833ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54242]
I0513 18:40:31.141756  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-39: (935.431µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54242]
I0513 18:40:31.144392  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-40: (972.695µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54242]
I0513 18:40:31.147409  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-41: (1.450342ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54242]
I0513 18:40:31.149961  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-42: (1.007091ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54242]
I0513 18:40:31.152622  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-43: (1.219637ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54242]
I0513 18:40:31.155077  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-44: (873.613µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54242]
I0513 18:40:31.157514  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-45: (906.453µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54242]
I0513 18:40:31.159981  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-46: (886.952µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54242]
I0513 18:40:31.162400  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-47: (852.121µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54242]
I0513 18:40:31.164978  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-48: (1.057097ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54242]
I0513 18:40:31.168296  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-49: (1.017293ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54242]
I0513 18:40:31.170771  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/rpod-0: (963.435µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54242]
I0513 18:40:31.173181  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/rpod-1: (913.999µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54242]
I0513 18:40:31.175592  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/preemptor-pod: (916.427µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54242]
I0513 18:40:31.177760  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods: (1.714609ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54242]
I0513 18:40:31.178184  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/rpod-0
I0513 18:40:31.178209  107570 scheduler.go:452] Attempting to schedule pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/rpod-0
I0513 18:40:31.178323  107570 scheduler_binder.go:256] AssumePodVolumes for pod "preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/rpod-0", node "node1"
I0513 18:40:31.178373  107570 scheduler_binder.go:266] AssumePodVolumes for pod "preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/rpod-0", node "node1": all PVCs bound and nothing to do
I0513 18:40:31.178429  107570 factory.go:711] Attempting to bind rpod-0 to node1
I0513 18:40:31.179997  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods: (1.804995ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54242]
I0513 18:40:31.180403  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/rpod-0/binding: (1.747477ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54244]
I0513 18:40:31.180658  107570 scheduler.go:570] pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/rpod-0 is bound successfully on node node1, 1 nodes evaluated, 1 nodes were found feasible
I0513 18:40:31.181587  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/rpod-1
I0513 18:40:31.181658  107570 scheduler.go:452] Attempting to schedule pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/rpod-1
I0513 18:40:31.181794  107570 scheduler_binder.go:256] AssumePodVolumes for pod "preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/rpod-1", node "node1"
I0513 18:40:31.181829  107570 scheduler_binder.go:266] AssumePodVolumes for pod "preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/rpod-1", node "node1": all PVCs bound and nothing to do
I0513 18:40:31.181899  107570 factory.go:711] Attempting to bind rpod-1 to node1
I0513 18:40:31.182494  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events: (1.588304ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54244]
I0513 18:40:31.184012  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/rpod-1/binding: (1.817141ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54242]
I0513 18:40:31.184233  107570 scheduler.go:570] pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/rpod-1 is bound successfully on node node1, 1 nodes evaluated, 1 nodes were found feasible
I0513 18:40:31.186835  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events: (2.237164ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54242]
I0513 18:40:31.282373  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/rpod-0: (1.665882ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54242]
I0513 18:40:31.385622  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/rpod-1: (2.512905ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54242]
I0513 18:40:31.386003  107570 preemption_test.go:561] Creating the preemptor pod...
I0513 18:40:31.388401  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods: (2.166344ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54242]
I0513 18:40:31.388624  107570 preemption_test.go:567] Creating additional pods...
I0513 18:40:31.389076  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/preemptor-pod
I0513 18:40:31.389129  107570 scheduler.go:452] Attempting to schedule pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/preemptor-pod
I0513 18:40:31.389245  107570 factory.go:649] Unable to schedule preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 18:40:31.389312  107570 factory.go:720] Updating pod condition for preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0513 18:40:31.391154  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods: (2.283732ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54242]
I0513 18:40:31.392183  107570 wrap.go:47] PUT /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/preemptor-pod/status: (2.576241ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54244]
I0513 18:40:31.392694  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/preemptor-pod: (2.70369ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54252]
E0513 18:40:31.392967  107570 factory.go:686] pod is already present in the activeQ
I0513 18:40:31.393796  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events: (3.57675ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54254]
I0513 18:40:31.394259  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods: (2.576538ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54242]
I0513 18:40:31.394672  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/preemptor-pod: (1.974484ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54244]
I0513 18:40:31.394957  107570 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
E0513 18:40:31.395056  107570 utils.go:79] pod.Status.StartTime is nil for pod rpod-1. Should not reach here.
E0513 18:40:31.395070  107570 utils.go:79] pod.Status.StartTime is nil for pod rpod-0. Should not reach here.
I0513 18:40:31.396749  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods: (1.756527ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54254]
I0513 18:40:31.397360  107570 wrap.go:47] PUT /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/preemptor-pod/status: (2.043721ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54252]
I0513 18:40:31.398659  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods: (1.479368ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54254]
I0513 18:40:31.400681  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods: (1.525682ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54254]
I0513 18:40:31.401950  107570 wrap.go:47] DELETE /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/rpod-0: (4.193407ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54252]
I0513 18:40:31.402216  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/preemptor-pod
I0513 18:40:31.402240  107570 scheduler.go:452] Attempting to schedule pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/preemptor-pod
I0513 18:40:31.402369  107570 scheduler_binder.go:256] AssumePodVolumes for pod "preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/preemptor-pod", node "node1"
I0513 18:40:31.402387  107570 scheduler_binder.go:266] AssumePodVolumes for pod "preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/preemptor-pod", node "node1": all PVCs bound and nothing to do
I0513 18:40:31.402439  107570 factory.go:711] Attempting to bind preemptor-pod to node1
I0513 18:40:31.402478  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-0
I0513 18:40:31.402497  107570 scheduler.go:452] Attempting to schedule pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-0
I0513 18:40:31.402601  107570 factory.go:649] Unable to schedule preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-0: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 18:40:31.402681  107570 factory.go:720] Updating pod condition for preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-0 to (PodScheduled==False, Reason=Unschedulable)
I0513 18:40:31.403167  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods: (2.079751ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54254]
I0513 18:40:31.403773  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events: (1.395567ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54252]
I0513 18:40:31.404528  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-0: (1.205697ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54260]
I0513 18:40:31.405008  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/preemptor-pod/binding: (1.944958ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54256]
I0513 18:40:31.405134  107570 wrap.go:47] PUT /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-0/status: (1.743109ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54258]
I0513 18:40:31.405176  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods: (1.486786ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54254]
I0513 18:40:31.405181  107570 scheduler.go:570] pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/preemptor-pod is bound successfully on node node1, 1 nodes evaluated, 1 nodes were found feasible
I0513 18:40:31.406991  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events: (1.120836ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54252]
I0513 18:40:31.407580  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-0: (2.036618ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54256]
I0513 18:40:31.407927  107570 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 18:40:31.408125  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods: (2.575452ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54260]
I0513 18:40:31.408262  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-1
I0513 18:40:31.408276  107570 scheduler.go:452] Attempting to schedule pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-1
I0513 18:40:31.408385  107570 factory.go:649] Unable to schedule preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-1: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 18:40:31.408444  107570 factory.go:720] Updating pod condition for preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-1 to (PodScheduled==False, Reason=Unschedulable)
I0513 18:40:31.409761  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods: (1.274077ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54260]
I0513 18:40:31.409891  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events: (2.544529ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54252]
I0513 18:40:31.410064  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-1: (1.484526ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54256]
I0513 18:40:31.411604  107570 wrap.go:47] PUT /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-1/status: (2.668518ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54262]
I0513 18:40:31.412112  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods: (1.756551ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54260]
I0513 18:40:31.412236  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events: (1.993693ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54252]
I0513 18:40:31.414254  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-1: (1.991324ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54262]
I0513 18:40:31.414496  107570 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 18:40:31.414731  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-2
I0513 18:40:31.414751  107570 scheduler.go:452] Attempting to schedule pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-2
I0513 18:40:31.414902  107570 factory.go:649] Unable to schedule preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-2: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 18:40:31.414951  107570 factory.go:720] Updating pod condition for preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-2 to (PodScheduled==False, Reason=Unschedulable)
I0513 18:40:31.415492  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods: (1.518869ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54252]
I0513 18:40:31.416906  107570 wrap.go:47] PUT /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-2/status: (1.627883ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54262]
I0513 18:40:31.416907  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-2: (891.646µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54252]
I0513 18:40:31.417715  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events: (2.452475ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54256]
I0513 18:40:31.418210  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods: (1.864645ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54264]
I0513 18:40:31.418962  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-2: (1.020918ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54252]
I0513 18:40:31.419190  107570 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 18:40:31.419354  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-3
I0513 18:40:31.419370  107570 scheduler.go:452] Attempting to schedule pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-3
I0513 18:40:31.419466  107570 factory.go:649] Unable to schedule preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-3: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 18:40:31.419525  107570 factory.go:720] Updating pod condition for preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-3 to (PodScheduled==False, Reason=Unschedulable)
I0513 18:40:31.419960  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods: (1.375503ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54256]
I0513 18:40:31.421717  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events: (1.563038ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54266]
I0513 18:40:31.421753  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-3: (1.983815ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54262]
I0513 18:40:31.421997  107570 wrap.go:47] PUT /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-3/status: (2.200182ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54252]
I0513 18:40:31.422178  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods: (1.871337ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54256]
I0513 18:40:31.423431  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-3: (1.079566ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54262]
I0513 18:40:31.423655  107570 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 18:40:31.423803  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-4
I0513 18:40:31.423870  107570 scheduler.go:452] Attempting to schedule pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-4
I0513 18:40:31.423995  107570 factory.go:649] Unable to schedule preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-4: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 18:40:31.424030  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods: (1.422715ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54256]
I0513 18:40:31.424039  107570 factory.go:720] Updating pod condition for preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-4 to (PodScheduled==False, Reason=Unschedulable)
I0513 18:40:31.425971  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events: (1.569569ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54256]
I0513 18:40:31.426557  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-4: (2.26749ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54266]
I0513 18:40:31.427167  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods: (2.365357ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54268]
I0513 18:40:31.427858  107570 wrap.go:47] PUT /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-4/status: (3.530629ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54262]
I0513 18:40:31.429558  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods: (1.697982ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54256]
I0513 18:40:31.429807  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-4: (1.009648ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54262]
I0513 18:40:31.430061  107570 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 18:40:31.430244  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-5
I0513 18:40:31.430269  107570 scheduler.go:452] Attempting to schedule pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-5
I0513 18:40:31.430365  107570 factory.go:649] Unable to schedule preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-5: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 18:40:31.430411  107570 factory.go:720] Updating pod condition for preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-5 to (PodScheduled==False, Reason=Unschedulable)
I0513 18:40:31.431324  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods: (1.324767ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54256]
I0513 18:40:31.432808  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events: (1.817606ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54274]
I0513 18:40:31.432858  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-5: (1.912376ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54272]
I0513 18:40:31.432904  107570 wrap.go:47] PUT /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-5/status: (2.257632ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54270]
E0513 18:40:31.433521  107570 factory.go:686] pod is already present in the activeQ
I0513 18:40:31.433766  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods: (1.848713ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54256]
I0513 18:40:31.434739  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-5: (1.023766ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54272]
I0513 18:40:31.435026  107570 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 18:40:31.435221  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-6
I0513 18:40:31.435246  107570 scheduler.go:452] Attempting to schedule pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-6
I0513 18:40:31.435375  107570 factory.go:649] Unable to schedule preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-6: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 18:40:31.435438  107570 factory.go:720] Updating pod condition for preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-6 to (PodScheduled==False, Reason=Unschedulable)
I0513 18:40:31.435907  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods: (1.727698ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54256]
I0513 18:40:31.437960  107570 wrap.go:47] PUT /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-6/status: (2.280133ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54272]
I0513 18:40:31.438295  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods: (1.907459ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54256]
I0513 18:40:31.438448  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events: (2.31862ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54276]
I0513 18:40:31.438595  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-6: (2.867931ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54274]
E0513 18:40:31.438886  107570 factory.go:686] pod is already present in the activeQ
I0513 18:40:31.439496  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-6: (1.222919ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54272]
I0513 18:40:31.439769  107570 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 18:40:31.439967  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-7
I0513 18:40:31.439993  107570 scheduler.go:452] Attempting to schedule pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-7
I0513 18:40:31.440099  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods: (1.396265ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54276]
I0513 18:40:31.440102  107570 factory.go:649] Unable to schedule preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-7: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 18:40:31.440144  107570 factory.go:720] Updating pod condition for preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-7 to (PodScheduled==False, Reason=Unschedulable)
I0513 18:40:31.441544  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-7: (870.411µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54278]
I0513 18:40:31.441712  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events: (1.312061ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54274]
I0513 18:40:31.442294  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods: (1.650753ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54256]
I0513 18:40:31.442422  107570 wrap.go:47] PUT /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-7/status: (1.680633ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54280]
I0513 18:40:31.444072  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-7: (1.181381ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54278]
I0513 18:40:31.444195  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods: (1.473739ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54274]
I0513 18:40:31.444639  107570 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 18:40:31.444913  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-8
I0513 18:40:31.444960  107570 scheduler.go:452] Attempting to schedule pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-8
I0513 18:40:31.445079  107570 factory.go:649] Unable to schedule preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-8: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 18:40:31.445160  107570 factory.go:720] Updating pod condition for preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-8 to (PodScheduled==False, Reason=Unschedulable)
I0513 18:40:31.446397  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods: (1.763176ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54274]
I0513 18:40:31.447075  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-8: (1.071931ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54278]
I0513 18:40:31.449288  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events: (1.938429ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54274]
I0513 18:40:31.449394  107570 wrap.go:47] PUT /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-8/status: (1.513956ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54278]
I0513 18:40:31.449423  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods: (1.924988ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54284]
I0513 18:40:31.450862  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-8: (873.101µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54282]
I0513 18:40:31.451111  107570 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 18:40:31.451281  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-9
I0513 18:40:31.451300  107570 scheduler.go:452] Attempting to schedule pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-9
I0513 18:40:31.451391  107570 factory.go:649] Unable to schedule preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-9: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 18:40:31.451440  107570 factory.go:720] Updating pod condition for preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-9 to (PodScheduled==False, Reason=Unschedulable)
I0513 18:40:31.452120  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods: (2.001988ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54286]
I0513 18:40:31.452579  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-9: (960.864µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54282]
I0513 18:40:31.453377  107570 wrap.go:47] PUT /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-9/status: (1.531096ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54288]
I0513 18:40:31.453570  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events: (1.576837ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54290]
I0513 18:40:31.454476  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-9: (794.08µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54288]
I0513 18:40:31.454745  107570 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 18:40:31.454962  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-10
I0513 18:40:31.454977  107570 scheduler.go:452] Attempting to schedule pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-10
I0513 18:40:31.455081  107570 factory.go:649] Unable to schedule preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-10: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 18:40:31.455115  107570 factory.go:720] Updating pod condition for preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-10 to (PodScheduled==False, Reason=Unschedulable)
I0513 18:40:31.456779  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events: (1.18022ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54292]
I0513 18:40:31.456903  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-10: (1.566728ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54282]
I0513 18:40:31.457026  107570 wrap.go:47] PUT /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-10/status: (1.687248ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54290]
I0513 18:40:31.457441  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods: (4.875799ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54286]
E0513 18:40:31.457580  107570 factory.go:686] pod is already present in the activeQ
I0513 18:40:31.458333  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-10: (900.232µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54282]
I0513 18:40:31.458538  107570 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 18:40:31.458679  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-11
I0513 18:40:31.458699  107570 scheduler.go:452] Attempting to schedule pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-11
I0513 18:40:31.458806  107570 factory.go:649] Unable to schedule preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-11: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 18:40:31.458881  107570 factory.go:720] Updating pod condition for preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-11 to (PodScheduled==False, Reason=Unschedulable)
I0513 18:40:31.459101  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods: (1.306941ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54286]
I0513 18:40:31.460624  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events: (1.23386ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54294]
I0513 18:40:31.460728  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-11: (1.42102ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54292]
I0513 18:40:31.460980  107570 wrap.go:47] PUT /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-11/status: (1.643392ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54282]
I0513 18:40:31.461211  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods: (1.636491ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54286]
I0513 18:40:31.462250  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-11: (908.416µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54292]
I0513 18:40:31.462872  107570 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 18:40:31.463009  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods: (1.382616ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54286]
I0513 18:40:31.463042  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-12
I0513 18:40:31.463135  107570 scheduler.go:452] Attempting to schedule pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-12
I0513 18:40:31.463225  107570 factory.go:649] Unable to schedule preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-12: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 18:40:31.463277  107570 factory.go:720] Updating pod condition for preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-12 to (PodScheduled==False, Reason=Unschedulable)
I0513 18:40:31.465934  107570 wrap.go:47] PUT /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-12/status: (2.421848ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54292]
I0513 18:40:31.466053  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events: (2.348209ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54298]
I0513 18:40:31.466207  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods: (2.744267ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54294]
I0513 18:40:31.466098  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-12: (2.434955ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54296]
I0513 18:40:31.467772  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-12: (1.252245ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54298]
I0513 18:40:31.468008  107570 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 18:40:31.468151  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-13
I0513 18:40:31.468166  107570 scheduler.go:452] Attempting to schedule pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-13
I0513 18:40:31.468271  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods: (1.620015ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54296]
I0513 18:40:31.468277  107570 factory.go:649] Unable to schedule preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-13: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 18:40:31.468339  107570 factory.go:720] Updating pod condition for preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-13 to (PodScheduled==False, Reason=Unschedulable)
I0513 18:40:31.469899  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods: (1.263152ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54292]
I0513 18:40:31.470120  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-13: (1.041053ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54302]
I0513 18:40:31.470636  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events: (1.215684ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54300]
I0513 18:40:31.471714  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods: (1.241887ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54292]
I0513 18:40:31.472201  107570 wrap.go:47] PUT /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-13/status: (1.698627ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54298]
I0513 18:40:31.473494  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-13: (946.505µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54298]
I0513 18:40:31.473581  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods: (1.403983ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54300]
I0513 18:40:31.473755  107570 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 18:40:31.474011  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-14
I0513 18:40:31.474032  107570 scheduler.go:452] Attempting to schedule pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-14
I0513 18:40:31.474112  107570 factory.go:649] Unable to schedule preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-14: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 18:40:31.474157  107570 factory.go:720] Updating pod condition for preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-14 to (PodScheduled==False, Reason=Unschedulable)
I0513 18:40:31.475259  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods: (1.166873ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54298]
I0513 18:40:31.475500  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-14: (916.386µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54306]
I0513 18:40:31.476162  107570 wrap.go:47] PUT /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-14/status: (1.80107ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54302]
I0513 18:40:31.476446  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events: (1.332973ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54308]
I0513 18:40:31.477002  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods: (1.310011ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54298]
I0513 18:40:31.477552  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-14: (896.837µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54302]
I0513 18:40:31.477790  107570 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 18:40:31.477980  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-15
I0513 18:40:31.477999  107570 scheduler.go:452] Attempting to schedule pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-15
I0513 18:40:31.478081  107570 factory.go:649] Unable to schedule preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-15: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 18:40:31.478121  107570 factory.go:720] Updating pod condition for preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-15 to (PodScheduled==False, Reason=Unschedulable)
I0513 18:40:31.478749  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods: (1.363233ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54298]
I0513 18:40:31.479683  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-15: (1.359221ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54306]
I0513 18:40:31.479931  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events: (1.316251ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54310]
I0513 18:40:31.479964  107570 wrap.go:47] PUT /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-15/status: (1.649017ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54302]
I0513 18:40:31.480734  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods: (1.270886ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54298]
I0513 18:40:31.481788  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-15: (972.771µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54310]
I0513 18:40:31.482028  107570 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 18:40:31.482175  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-16
I0513 18:40:31.482224  107570 scheduler.go:452] Attempting to schedule pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-16
I0513 18:40:31.482304  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods: (1.162074ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54298]
I0513 18:40:31.482321  107570 factory.go:649] Unable to schedule preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-16: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 18:40:31.482358  107570 factory.go:720] Updating pod condition for preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-16 to (PodScheduled==False, Reason=Unschedulable)
I0513 18:40:31.484234  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods: (1.295922ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54314]
I0513 18:40:31.484593  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-16: (2.03793ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54306]
I0513 18:40:31.484381  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events: (1.513177ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54312]
I0513 18:40:31.484622  107570 wrap.go:47] PUT /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-16/status: (2.043389ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54310]
E0513 18:40:31.486222  107570 factory.go:686] pod is already present in the activeQ
I0513 18:40:31.488268  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods: (3.575192ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54314]
I0513 18:40:31.489522  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-16: (1.614963ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54310]
I0513 18:40:31.489740  107570 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 18:40:31.489907  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-17
I0513 18:40:31.489928  107570 scheduler.go:452] Attempting to schedule pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-17
I0513 18:40:31.490015  107570 factory.go:649] Unable to schedule preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-17: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 18:40:31.490082  107570 factory.go:720] Updating pod condition for preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-17 to (PodScheduled==False, Reason=Unschedulable)
I0513 18:40:31.490219  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods: (1.508445ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54314]
I0513 18:40:31.492434  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods: (1.901869ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54306]
I0513 18:40:31.492535  107570 wrap.go:47] PUT /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-17/status: (2.210957ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54310]
I0513 18:40:31.492572  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events: (2.136884ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54314]
I0513 18:40:31.493514  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-17: (954.171µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54316]
I0513 18:40:31.493779  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-17: (854.198µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54306]
E0513 18:40:31.493796  107570 factory.go:686] pod is already present in the activeQ
I0513 18:40:31.493963  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods: (1.17698ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54310]
I0513 18:40:31.493987  107570 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 18:40:31.494120  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-5
I0513 18:40:31.494141  107570 scheduler.go:452] Attempting to schedule pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-5
I0513 18:40:31.494227  107570 factory.go:649] Unable to schedule preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-5: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 18:40:31.494277  107570 factory.go:720] Updating pod condition for preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-5 to (PodScheduled==False, Reason=Unschedulable)
I0513 18:40:31.495404  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-5: (1.005175ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54316]
I0513 18:40:31.495636  107570 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 18:40:31.495778  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-18
I0513 18:40:31.495799  107570 scheduler.go:452] Attempting to schedule pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-18
I0513 18:40:31.495891  107570 factory.go:649] Unable to schedule preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-18: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 18:40:31.495930  107570 factory.go:720] Updating pod condition for preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-18 to (PodScheduled==False, Reason=Unschedulable)
I0513 18:40:31.495993  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods: (1.699395ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54306]
I0513 18:40:31.496210  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-5: (1.251667ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54318]
I0513 18:40:31.496623  107570 wrap.go:47] PATCH /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events/ppod-5.159e52230229d580: (1.657503ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54320]
I0513 18:40:31.497426  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-18: (1.193879ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54306]
I0513 18:40:31.497472  107570 wrap.go:47] PUT /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-18/status: (1.354518ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54316]
I0513 18:40:31.498185  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods: (1.536494ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54322]
I0513 18:40:31.499131  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-18: (940.601µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54316]
I0513 18:40:31.499452  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events: (1.22677ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54320]
I0513 18:40:31.499563  107570 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 18:40:31.499895  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-19
I0513 18:40:31.499915  107570 scheduler.go:452] Attempting to schedule pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-19
I0513 18:40:31.499945  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods: (1.372828ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54322]
I0513 18:40:31.500007  107570 factory.go:649] Unable to schedule preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-19: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 18:40:31.500050  107570 factory.go:720] Updating pod condition for preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-19 to (PodScheduled==False, Reason=Unschedulable)
I0513 18:40:31.502009  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods: (1.38022ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54324]
I0513 18:40:31.502025  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-19: (1.151521ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54318]
I0513 18:40:31.502202  107570 wrap.go:47] PUT /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-19/status: (1.973532ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54316]
I0513 18:40:31.502415  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events: (1.777328ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54326]
I0513 18:40:31.503944  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-19: (892.059µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54318]
I0513 18:40:31.504196  107570 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 18:40:31.504353  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-20
I0513 18:40:31.504372  107570 scheduler.go:452] Attempting to schedule pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-20
I0513 18:40:31.504460  107570 factory.go:649] Unable to schedule preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-20: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 18:40:31.504498  107570 factory.go:720] Updating pod condition for preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-20 to (PodScheduled==False, Reason=Unschedulable)
I0513 18:40:31.506091  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-20: (1.141325ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54324]
I0513 18:40:31.508452  107570 wrap.go:47] PUT /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-20/status: (3.719555ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54318]
I0513 18:40:31.509716  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-20: (855.764µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54318]
I0513 18:40:31.509972  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events: (1.549044ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54324]
I0513 18:40:31.510115  107570 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 18:40:31.510268  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-6
I0513 18:40:31.510284  107570 scheduler.go:452] Attempting to schedule pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-6
I0513 18:40:31.510368  107570 factory.go:649] Unable to schedule preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-6: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 18:40:31.510413  107570 factory.go:720] Updating pod condition for preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-6 to (PodScheduled==False, Reason=Unschedulable)
I0513 18:40:31.511516  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-6: (927.292µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54318]
I0513 18:40:31.511789  107570 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 18:40:31.511863  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-6: (1.190471ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54328]
I0513 18:40:31.512016  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-21
I0513 18:40:31.512033  107570 scheduler.go:452] Attempting to schedule pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-21
I0513 18:40:31.512132  107570 factory.go:649] Unable to schedule preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-21: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 18:40:31.512177  107570 factory.go:720] Updating pod condition for preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-21 to (PodScheduled==False, Reason=Unschedulable)
I0513 18:40:31.512570  107570 wrap.go:47] PATCH /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events/ppod-6.159e5223027682ce: (1.489136ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54330]
I0513 18:40:31.513294  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-21: (871.577µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54318]
I0513 18:40:31.513911  107570 wrap.go:47] PUT /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-21/status: (1.500213ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54328]
I0513 18:40:31.514338  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events: (1.289748ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54330]
I0513 18:40:31.515101  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-21: (872.52µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54328]
I0513 18:40:31.515383  107570 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 18:40:31.515507  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-22
I0513 18:40:31.515523  107570 scheduler.go:452] Attempting to schedule pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-22
I0513 18:40:31.515597  107570 factory.go:649] Unable to schedule preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-22: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 18:40:31.515638  107570 factory.go:720] Updating pod condition for preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-22 to (PodScheduled==False, Reason=Unschedulable)
I0513 18:40:31.516854  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-22: (945.985µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54318]
I0513 18:40:31.517505  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events: (1.257833ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54332]
I0513 18:40:31.519892  107570 wrap.go:47] PUT /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-22/status: (4.000159ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54330]
I0513 18:40:31.521387  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-22: (988.955µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54332]
I0513 18:40:31.521718  107570 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 18:40:31.521922  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-23
I0513 18:40:31.521942  107570 scheduler.go:452] Attempting to schedule pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-23
I0513 18:40:31.522035  107570 factory.go:649] Unable to schedule preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-23: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 18:40:31.522083  107570 factory.go:720] Updating pod condition for preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-23 to (PodScheduled==False, Reason=Unschedulable)
I0513 18:40:31.523479  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-23: (1.110655ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54318]
I0513 18:40:31.523716  107570 wrap.go:47] PUT /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-23/status: (1.43489ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54332]
I0513 18:40:31.524087  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events: (1.469465ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54334]
I0513 18:40:31.525141  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-23: (1.037414ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54332]
I0513 18:40:31.525462  107570 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 18:40:31.525633  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-24
I0513 18:40:31.525659  107570 scheduler.go:452] Attempting to schedule pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-24
I0513 18:40:31.525741  107570 factory.go:649] Unable to schedule preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-24: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 18:40:31.525778  107570 factory.go:720] Updating pod condition for preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-24 to (PodScheduled==False, Reason=Unschedulable)
I0513 18:40:31.529047  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events: (2.82335ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54318]
I0513 18:40:31.529199  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-24: (1.927579ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54336]
I0513 18:40:31.529206  107570 wrap.go:47] PUT /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-24/status: (3.11699ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54334]
E0513 18:40:31.529460  107570 factory.go:686] pod is already present in the activeQ
I0513 18:40:31.530709  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-24: (981.389µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54336]
I0513 18:40:31.530964  107570 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 18:40:31.531113  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-25
I0513 18:40:31.531135  107570 scheduler.go:452] Attempting to schedule pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-25
I0513 18:40:31.531222  107570 factory.go:649] Unable to schedule preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-25: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 18:40:31.531264  107570 factory.go:720] Updating pod condition for preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-25 to (PodScheduled==False, Reason=Unschedulable)
I0513 18:40:31.533220  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events: (1.298197ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54338]
I0513 18:40:31.533227  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-25: (1.423877ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54318]
I0513 18:40:31.533361  107570 wrap.go:47] PUT /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-25/status: (1.87028ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54336]
E0513 18:40:31.533582  107570 factory.go:686] pod is already present in the activeQ
I0513 18:40:31.534753  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-25: (957.28µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54318]
I0513 18:40:31.534997  107570 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 18:40:31.535158  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-26
I0513 18:40:31.535174  107570 scheduler.go:452] Attempting to schedule pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-26
I0513 18:40:31.535261  107570 factory.go:649] Unable to schedule preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-26: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 18:40:31.535304  107570 factory.go:720] Updating pod condition for preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-26 to (PodScheduled==False, Reason=Unschedulable)
I0513 18:40:31.536553  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-26: (1.004026ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54338]
I0513 18:40:31.537052  107570 wrap.go:47] PUT /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-26/status: (1.494671ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54318]
I0513 18:40:31.537596  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events: (1.83999ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54340]
I0513 18:40:31.538392  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-26: (1.020322ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54318]
I0513 18:40:31.538717  107570 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 18:40:31.539010  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-10
I0513 18:40:31.539031  107570 scheduler.go:452] Attempting to schedule pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-10
I0513 18:40:31.539115  107570 factory.go:649] Unable to schedule preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-10: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 18:40:31.539160  107570 factory.go:720] Updating pod condition for preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-10 to (PodScheduled==False, Reason=Unschedulable)
I0513 18:40:31.540311  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-10: (983.927µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54340]
I0513 18:40:31.540439  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-10: (1.045137ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54338]
I0513 18:40:31.540571  107570 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 18:40:31.540708  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-27
I0513 18:40:31.540724  107570 scheduler.go:452] Attempting to schedule pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-27
I0513 18:40:31.540812  107570 factory.go:649] Unable to schedule preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-27: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 18:40:31.540877  107570 factory.go:720] Updating pod condition for preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-27 to (PodScheduled==False, Reason=Unschedulable)
I0513 18:40:31.541628  107570 wrap.go:47] PATCH /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events/ppod-10.159e522303a2cd1c: (1.852673ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54342]
I0513 18:40:31.542300  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-27: (996.332µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54340]
I0513 18:40:31.542729  107570 wrap.go:47] PUT /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-27/status: (1.417786ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54338]
I0513 18:40:31.543512  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events: (1.170588ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54342]
I0513 18:40:31.544251  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-27: (1.126826ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54338]
I0513 18:40:31.544539  107570 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 18:40:31.544804  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-28
I0513 18:40:31.544855  107570 scheduler.go:452] Attempting to schedule pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-28
I0513 18:40:31.544968  107570 factory.go:649] Unable to schedule preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-28: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 18:40:31.545008  107570 factory.go:720] Updating pod condition for preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-28 to (PodScheduled==False, Reason=Unschedulable)
I0513 18:40:31.547283  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events: (1.714849ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54344]
I0513 18:40:31.547442  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-28: (2.069792ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54340]
I0513 18:40:31.547801  107570 wrap.go:47] PUT /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-28/status: (2.566897ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54342]
I0513 18:40:31.549307  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-28: (1.052193ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54342]
I0513 18:40:31.549596  107570 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 18:40:31.549773  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-29
I0513 18:40:31.549790  107570 scheduler.go:452] Attempting to schedule pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-29
I0513 18:40:31.549880  107570 factory.go:649] Unable to schedule preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-29: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 18:40:31.549930  107570 factory.go:720] Updating pod condition for preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-29 to (PodScheduled==False, Reason=Unschedulable)
I0513 18:40:31.552095  107570 wrap.go:47] PUT /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-29/status: (1.93363ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54342]
I0513 18:40:31.552192  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events: (1.66864ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54346]
I0513 18:40:31.552236  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-29: (2.075776ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54340]
I0513 18:40:31.553590  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-29: (1.046778ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54346]
I0513 18:40:31.553783  107570 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 18:40:31.554006  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-30
I0513 18:40:31.554029  107570 scheduler.go:452] Attempting to schedule pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-30
I0513 18:40:31.554143  107570 factory.go:649] Unable to schedule preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-30: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 18:40:31.554190  107570 factory.go:720] Updating pod condition for preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-30 to (PodScheduled==False, Reason=Unschedulable)
I0513 18:40:31.555412  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-30: (1.004304ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54342]
I0513 18:40:31.556191  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events: (1.337276ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54348]
I0513 18:40:31.556213  107570 wrap.go:47] PUT /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-30/status: (1.730816ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54346]
I0513 18:40:31.557667  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-30: (1.032459ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54348]
I0513 18:40:31.557988  107570 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 18:40:31.558149  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-31
I0513 18:40:31.558166  107570 scheduler.go:452] Attempting to schedule pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-31
I0513 18:40:31.558254  107570 factory.go:649] Unable to schedule preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-31: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 18:40:31.558299  107570 factory.go:720] Updating pod condition for preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-31 to (PodScheduled==False, Reason=Unschedulable)
I0513 18:40:31.559783  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-31: (1.245494ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54342]
I0513 18:40:31.560466  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events: (1.614181ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54350]
I0513 18:40:31.560525  107570 wrap.go:47] PUT /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-31/status: (1.982415ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54348]
I0513 18:40:31.562099  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-31: (1.099046ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54350]
I0513 18:40:31.562353  107570 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 18:40:31.562498  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-32
I0513 18:40:31.562516  107570 scheduler.go:452] Attempting to schedule pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-32
I0513 18:40:31.562597  107570 factory.go:649] Unable to schedule preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-32: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 18:40:31.562631  107570 factory.go:720] Updating pod condition for preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-32 to (PodScheduled==False, Reason=Unschedulable)
I0513 18:40:31.564006  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-32: (1.114769ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54342]
I0513 18:40:31.564531  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events: (1.388938ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54352]
I0513 18:40:31.564570  107570 wrap.go:47] PUT /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-32/status: (1.681369ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54350]
I0513 18:40:31.566214  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-32: (1.250029ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54352]
I0513 18:40:31.566428  107570 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 18:40:31.566579  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-33
I0513 18:40:31.566612  107570 scheduler.go:452] Attempting to schedule pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-33
I0513 18:40:31.566727  107570 factory.go:649] Unable to schedule preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-33: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 18:40:31.566770  107570 factory.go:720] Updating pod condition for preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-33 to (PodScheduled==False, Reason=Unschedulable)
I0513 18:40:31.568762  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events: (1.356835ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54354]
I0513 18:40:31.568880  107570 wrap.go:47] PUT /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-33/status: (1.88742ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54352]
I0513 18:40:31.569398  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-33: (2.241536ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54342]
E0513 18:40:31.569674  107570 factory.go:686] pod is already present in the activeQ
I0513 18:40:31.570245  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-33: (929.512µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54352]
I0513 18:40:31.570509  107570 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 18:40:31.570711  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-34
I0513 18:40:31.570729  107570 scheduler.go:452] Attempting to schedule pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-34
I0513 18:40:31.570905  107570 factory.go:649] Unable to schedule preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-34: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 18:40:31.570954  107570 factory.go:720] Updating pod condition for preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-34 to (PodScheduled==False, Reason=Unschedulable)
I0513 18:40:31.572389  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-34: (1.160611ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54354]
I0513 18:40:31.572972  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events: (1.376884ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54356]
I0513 18:40:31.573365  107570 wrap.go:47] PUT /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-34/status: (2.166811ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54342]
I0513 18:40:31.574889  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-34: (1.041355ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54356]
I0513 18:40:31.575109  107570 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 18:40:31.575314  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-35
I0513 18:40:31.575331  107570 scheduler.go:452] Attempting to schedule pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-35
I0513 18:40:31.575405  107570 factory.go:649] Unable to schedule preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-35: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 18:40:31.575478  107570 factory.go:720] Updating pod condition for preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-35 to (PodScheduled==False, Reason=Unschedulable)
I0513 18:40:31.576891  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-35: (1.120003ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54354]
I0513 18:40:31.577692  107570 wrap.go:47] PUT /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-35/status: (1.92552ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54356]
I0513 18:40:31.577772  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events: (1.678662ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54358]
I0513 18:40:31.579153  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-35: (1.102689ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54356]
I0513 18:40:31.579385  107570 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 18:40:31.579543  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-36
I0513 18:40:31.579563  107570 scheduler.go:452] Attempting to schedule pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-36
I0513 18:40:31.579675  107570 factory.go:649] Unable to schedule preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-36: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 18:40:31.579725  107570 factory.go:720] Updating pod condition for preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-36 to (PodScheduled==False, Reason=Unschedulable)
I0513 18:40:31.580956  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-36: (938.774µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54354]
I0513 18:40:31.581994  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events: (1.689664ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54360]
I0513 18:40:31.582108  107570 wrap.go:47] PUT /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-36/status: (2.123177ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54356]
I0513 18:40:31.583507  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-36: (969.255µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54360]
I0513 18:40:31.583786  107570 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 18:40:31.584030  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-37
I0513 18:40:31.584048  107570 scheduler.go:452] Attempting to schedule pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-37
I0513 18:40:31.584131  107570 factory.go:649] Unable to schedule preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-37: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 18:40:31.584179  107570 factory.go:720] Updating pod condition for preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-37 to (PodScheduled==False, Reason=Unschedulable)
I0513 18:40:31.587506  107570 wrap.go:47] PUT /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-37/status: (3.10314ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54360]
I0513 18:40:31.587246  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-37: (2.763227ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54354]
E0513 18:40:31.587921  107570 factory.go:686] pod is already present in the activeQ
I0513 18:40:31.589166  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-37: (1.039327ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54354]
I0513 18:40:31.589536  107570 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 18:40:31.589737  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-38
I0513 18:40:31.589805  107570 scheduler.go:452] Attempting to schedule pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-38
I0513 18:40:31.589956  107570 factory.go:649] Unable to schedule preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-38: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 18:40:31.590000  107570 factory.go:720] Updating pod condition for preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-38 to (PodScheduled==False, Reason=Unschedulable)
I0513 18:40:31.590024  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events: (2.997658ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54362]
I0513 18:40:31.591251  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-38: (927.12µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54360]
I0513 18:40:31.591662  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events: (1.115034ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54362]
I0513 18:40:31.591874  107570 wrap.go:47] PUT /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-38/status: (1.631377ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54354]
I0513 18:40:31.593318  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-38: (1.030122ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54362]
I0513 18:40:31.593564  107570 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 18:40:31.593757  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-39
I0513 18:40:31.593778  107570 scheduler.go:452] Attempting to schedule pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-39
I0513 18:40:31.593905  107570 factory.go:649] Unable to schedule preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-39: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 18:40:31.593984  107570 factory.go:720] Updating pod condition for preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-39 to (PodScheduled==False, Reason=Unschedulable)
I0513 18:40:31.595407  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-39: (1.137869ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54360]
I0513 18:40:31.595995  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events: (1.465461ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54364]
I0513 18:40:31.596017  107570 wrap.go:47] PUT /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-39/status: (1.794199ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54362]
I0513 18:40:31.597273  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-39: (942.333µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54362]
I0513 18:40:31.597729  107570 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 18:40:31.597915  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-40
I0513 18:40:31.597934  107570 scheduler.go:452] Attempting to schedule pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-40
I0513 18:40:31.598018  107570 factory.go:649] Unable to schedule preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-40: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 18:40:31.598068  107570 factory.go:720] Updating pod condition for preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-40 to (PodScheduled==False, Reason=Unschedulable)
I0513 18:40:31.599499  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-40: (1.174806ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54360]
I0513 18:40:31.600044  107570 wrap.go:47] PUT /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-40/status: (1.745596ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54362]
I0513 18:40:31.600045  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events: (1.35596ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54366]
I0513 18:40:31.601496  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-40: (1.029437ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54362]
I0513 18:40:31.601767  107570 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 18:40:31.601957  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-16
I0513 18:40:31.601980  107570 scheduler.go:452] Attempting to schedule pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-16
I0513 18:40:31.602092  107570 factory.go:649] Unable to schedule preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-16: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 18:40:31.602133  107570 factory.go:720] Updating pod condition for preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-16 to (PodScheduled==False, Reason=Unschedulable)
I0513 18:40:31.603222  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-16: (885.251µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54360]
I0513 18:40:31.603665  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-16: (1.352488ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54362]
I0513 18:40:31.604000  107570 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 18:40:31.604151  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-41
I0513 18:40:31.604257  107570 scheduler.go:452] Attempting to schedule pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-41
I0513 18:40:31.604361  107570 factory.go:649] Unable to schedule preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-41: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 18:40:31.604406  107570 factory.go:720] Updating pod condition for preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-41 to (PodScheduled==False, Reason=Unschedulable)
I0513 18:40:31.604718  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/preemptor-pod: (1.467806ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54370]
I0513 18:40:31.606368  107570 wrap.go:47] PUT /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-41/status: (1.600391ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54362]
I0513 18:40:31.606872  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-41: (1.955643ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54360]
E0513 18:40:31.607106  107570 factory.go:686] pod is already present in the activeQ
I0513 18:40:31.607610  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-41: (933.595µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54362]
I0513 18:40:31.608037  107570 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 18:40:31.608161  107570 wrap.go:47] PATCH /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events/ppod-16.159e52230542806f: (5.471485ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54368]
I0513 18:40:31.608167  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-42
I0513 18:40:31.608196  107570 scheduler.go:452] Attempting to schedule pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-42
I0513 18:40:31.608295  107570 factory.go:649] Unable to schedule preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-42: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 18:40:31.608332  107570 factory.go:720] Updating pod condition for preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-42 to (PodScheduled==False, Reason=Unschedulable)
I0513 18:40:31.610104  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-42: (1.476243ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54370]
I0513 18:40:31.610261  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events: (1.471616ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54372]
I0513 18:40:31.610694  107570 wrap.go:47] PUT /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-42/status: (2.155217ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54360]
I0513 18:40:31.612756  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events: (1.78203ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54372]
I0513 18:40:31.613044  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-42: (1.76187ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54360]
I0513 18:40:31.613298  107570 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 18:40:31.613450  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-43
I0513 18:40:31.613468  107570 scheduler.go:452] Attempting to schedule pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-43
I0513 18:40:31.613554  107570 factory.go:649] Unable to schedule preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-43: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 18:40:31.613595  107570 factory.go:720] Updating pod condition for preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-43 to (PodScheduled==False, Reason=Unschedulable)
I0513 18:40:31.615745  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-43: (1.921206ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54370]
I0513 18:40:31.615768  107570 wrap.go:47] PUT /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-43/status: (1.952528ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54372]
I0513 18:40:31.615954  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events: (1.679695ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54374]
I0513 18:40:31.617268  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-43: (1.064251ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54372]
I0513 18:40:31.617572  107570 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 18:40:31.617757  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-44
I0513 18:40:31.617775  107570 scheduler.go:452] Attempting to schedule pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-44
I0513 18:40:31.617892  107570 factory.go:649] Unable to schedule preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-44: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 18:40:31.617945  107570 factory.go:720] Updating pod condition for preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-44 to (PodScheduled==False, Reason=Unschedulable)
I0513 18:40:31.619765  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events: (1.48713ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54370]
I0513 18:40:31.620738  107570 wrap.go:47] PUT /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-44/status: (2.571822ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54374]
I0513 18:40:31.620921  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-44: (2.279745ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54376]
I0513 18:40:31.622160  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-44: (1.024317ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54374]
I0513 18:40:31.622419  107570 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 18:40:31.622552  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-17
I0513 18:40:31.622566  107570 scheduler.go:452] Attempting to schedule pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-17
I0513 18:40:31.622642  107570 factory.go:649] Unable to schedule preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-17: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 18:40:31.622698  107570 factory.go:720] Updating pod condition for preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-17 to (PodScheduled==False, Reason=Unschedulable)
I0513 18:40:31.624708  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-17: (1.674752ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54370]
I0513 18:40:31.624992  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-17: (2.08352ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54376]
I0513 18:40:31.625270  107570 wrap.go:47] PATCH /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events/ppod-17.159e522305b8550e: (1.5551ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54378]
I0513 18:40:31.625686  107570 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 18:40:31.625934  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-45
I0513 18:40:31.625986  107570 scheduler.go:452] Attempting to schedule pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-45
I0513 18:40:31.626109  107570 factory.go:649] Unable to schedule preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-45: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 18:40:31.626185  107570 factory.go:720] Updating pod condition for preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-45 to (PodScheduled==False, Reason=Unschedulable)
I0513 18:40:31.629492  107570 wrap.go:47] PUT /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-45/status: (3.035765ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54376]
I0513 18:40:31.629972  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events: (3.083538ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54380]
I0513 18:40:31.630873  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-45: (4.393721ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54370]
I0513 18:40:31.631148  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-45: (993.514µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54376]
E0513 18:40:31.631163  107570 factory.go:686] pod is already present in the activeQ
I0513 18:40:31.631380  107570 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 18:40:31.631529  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-46
I0513 18:40:31.631550  107570 scheduler.go:452] Attempting to schedule pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-46
I0513 18:40:31.631633  107570 factory.go:649] Unable to schedule preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-46: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 18:40:31.631690  107570 factory.go:720] Updating pod condition for preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-46 to (PodScheduled==False, Reason=Unschedulable)
I0513 18:40:31.633531  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-46: (1.212396ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54380]
I0513 18:40:31.633631  107570 wrap.go:47] PUT /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-46/status: (1.695953ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54370]
E0513 18:40:31.633867  107570 factory.go:686] pod is already present in the activeQ
I0513 18:40:31.634441  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events: (1.205245ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54382]
I0513 18:40:31.635155  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-46: (918.332µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54370]
I0513 18:40:31.635449  107570 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 18:40:31.635611  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-47
I0513 18:40:31.635628  107570 scheduler.go:452] Attempting to schedule pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-47
I0513 18:40:31.635722  107570 factory.go:649] Unable to schedule preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-47: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 18:40:31.635766  107570 factory.go:720] Updating pod condition for preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-47 to (PodScheduled==False, Reason=Unschedulable)
I0513 18:40:31.636977  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-47: (972.943µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54382]
I0513 18:40:31.637420  107570 wrap.go:47] PUT /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-47/status: (1.392614ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54380]
I0513 18:40:31.637621  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events: (1.265327ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54384]
I0513 18:40:31.638775  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-47: (961.548µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54380]
I0513 18:40:31.639057  107570 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 18:40:31.639170  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-48
I0513 18:40:31.639186  107570 scheduler.go:452] Attempting to schedule pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-48
I0513 18:40:31.639243  107570 factory.go:649] Unable to schedule preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-48: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 18:40:31.639278  107570 factory.go:720] Updating pod condition for preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-48 to (PodScheduled==False, Reason=Unschedulable)
I0513 18:40:31.641176  107570 wrap.go:47] PUT /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-48/status: (1.684954ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54384]
I0513 18:40:31.641256  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-48: (1.381466ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54382]
I0513 18:40:31.641256  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events: (1.326101ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54386]
E0513 18:40:31.641430  107570 factory.go:686] pod is already present in the activeQ
I0513 18:40:31.642603  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-48: (978.998µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54382]
I0513 18:40:31.642899  107570 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 18:40:31.643058  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-49
I0513 18:40:31.643075  107570 scheduler.go:452] Attempting to schedule pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-49
I0513 18:40:31.643178  107570 factory.go:649] Unable to schedule preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-49: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 18:40:31.643222  107570 factory.go:720] Updating pod condition for preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-49 to (PodScheduled==False, Reason=Unschedulable)
I0513 18:40:31.644453  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-49: (968.241µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54382]
I0513 18:40:31.645254  107570 wrap.go:47] PUT /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-49/status: (1.734413ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54384]
I0513 18:40:31.645292  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events: (1.544428ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54388]
E0513 18:40:31.645716  107570 factory.go:686] pod is already present in the activeQ
I0513 18:40:31.646853  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-49: (1.201707ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54384]
I0513 18:40:31.647091  107570 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 18:40:31.648043  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-24
I0513 18:40:31.648070  107570 scheduler.go:452] Attempting to schedule pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-24
I0513 18:40:31.648142  107570 factory.go:649] Unable to schedule preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-24: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 18:40:31.648178  107570 factory.go:720] Updating pod condition for preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-24 to (PodScheduled==False, Reason=Unschedulable)
I0513 18:40:31.649854  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-24: (1.108653ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54384]
I0513 18:40:31.650118  107570 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 18:40:31.650270  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-25
I0513 18:40:31.650286  107570 scheduler.go:452] Attempting to schedule pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-25
I0513 18:40:31.650378  107570 factory.go:649] Unable to schedule preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-25: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 18:40:31.650425  107570 factory.go:720] Updating pod condition for preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-25 to (PodScheduled==False, Reason=Unschedulable)
I0513 18:40:31.652153  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-25: (1.562203ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54384]
I0513 18:40:31.652243  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-25: (1.28143ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54390]
I0513 18:40:31.652453  107570 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 18:40:31.652592  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-33
I0513 18:40:31.652607  107570 scheduler.go:452] Attempting to schedule pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-33
I0513 18:40:31.652712  107570 factory.go:649] Unable to schedule preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-33: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 18:40:31.652756  107570 factory.go:720] Updating pod condition for preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-33 to (PodScheduled==False, Reason=Unschedulable)
I0513 18:40:31.655428  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-33: (2.374361ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54384]
I0513 18:40:31.655980  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-33: (2.998541ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54390]
I0513 18:40:31.656365  107570 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 18:40:31.656731  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-24: (1.139678ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54382]
I0513 18:40:31.656740  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-37
I0513 18:40:31.656760  107570 scheduler.go:452] Attempting to schedule pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-37
I0513 18:40:31.656875  107570 factory.go:649] Unable to schedule preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-37: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 18:40:31.656925  107570 factory.go:720] Updating pod condition for preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-37 to (PodScheduled==False, Reason=Unschedulable)
I0513 18:40:31.657986  107570 wrap.go:47] PATCH /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events/ppod-24.159e522307d90660: (1.786486ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54384]
I0513 18:40:31.660906  107570 wrap.go:47] PATCH /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events/ppod-25.159e5223082cbcfc: (2.232259ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54384]
I0513 18:40:31.661350  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-37: (4.207826ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54382]
I0513 18:40:31.661697  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-37: (4.551541ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54390]
I0513 18:40:31.662065  107570 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 18:40:31.662207  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-41
I0513 18:40:31.662219  107570 scheduler.go:452] Attempting to schedule pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-41
I0513 18:40:31.662309  107570 factory.go:649] Unable to schedule preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-41: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 18:40:31.662345  107570 factory.go:720] Updating pod condition for preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-41 to (PodScheduled==False, Reason=Unschedulable)
I0513 18:40:31.663359  107570 wrap.go:47] PATCH /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events/ppod-33.159e52230a4a862a: (1.536442ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54384]
I0513 18:40:31.664310  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-41: (1.641572ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54390]
I0513 18:40:31.664635  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-41: (1.957888ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54382]
I0513 18:40:31.668093  107570 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 18:40:31.668392  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-45
I0513 18:40:31.668410  107570 scheduler.go:452] Attempting to schedule pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-45
I0513 18:40:31.668512  107570 factory.go:649] Unable to schedule preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-45: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 18:40:31.668548  107570 factory.go:720] Updating pod condition for preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-45 to (PodScheduled==False, Reason=Unschedulable)
I0513 18:40:31.679523  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-45: (10.104548ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54382]
I0513 18:40:31.680671  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-45: (11.786741ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54390]
I0513 18:40:31.681146  107570 wrap.go:47] PATCH /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events/ppod-37.159e52230b542656: (16.850852ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54384]
I0513 18:40:31.681706  107570 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 18:40:31.681913  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-46
I0513 18:40:31.681936  107570 scheduler.go:452] Attempting to schedule pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-46
I0513 18:40:31.682154  107570 factory.go:649] Unable to schedule preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-46: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 18:40:31.682255  107570 factory.go:720] Updating pod condition for preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-46 to (PodScheduled==False, Reason=Unschedulable)
I0513 18:40:31.684057  107570 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0513 18:40:31.684340  107570 wrap.go:47] PATCH /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events/ppod-41.159e52230c88ce6e: (2.406423ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54390]
I0513 18:40:31.684516  107570 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0513 18:40:31.686396  107570 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0513 18:40:31.687527  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-46: (1.562756ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54384]
I0513 18:40:31.687706  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-46: (1.778988ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54382]
I0513 18:40:31.687790  107570 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 18:40:31.688219  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-48
I0513 18:40:31.688233  107570 scheduler.go:452] Attempting to schedule pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-48
I0513 18:40:31.688328  107570 factory.go:649] Unable to schedule preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-48: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 18:40:31.688368  107570 factory.go:720] Updating pod condition for preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-48 to (PodScheduled==False, Reason=Unschedulable)
I0513 18:40:31.690181  107570 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0513 18:40:31.690719  107570 wrap.go:47] PATCH /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events/ppod-45.159e52230dd51858: (4.030262ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54390]
I0513 18:40:31.690879  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-48: (2.238135ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54384]
I0513 18:40:31.691017  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-48: (2.390454ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54382]
I0513 18:40:31.691294  107570 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 18:40:31.691441  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-49
I0513 18:40:31.691606  107570 scheduler.go:452] Attempting to schedule pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-49
I0513 18:40:31.691717  107570 factory.go:649] Unable to schedule preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-49: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 18:40:31.691758  107570 factory.go:720] Updating pod condition for preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-49 to (PodScheduled==False, Reason=Unschedulable)
I0513 18:40:31.693064  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-49: (1.017715ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54390]
I0513 18:40:31.693430  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-49: (1.077322ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54412]
I0513 18:40:31.693718  107570 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 18:40:31.693725  107570 wrap.go:47] PATCH /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events/ppod-46.159e52230e291867: (2.195577ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54384]
I0513 18:40:31.696113  107570 wrap.go:47] PATCH /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events/ppod-48.159e52230e9ceb70: (1.697851ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54412]
I0513 18:40:31.697383  107570 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0513 18:40:31.699511  107570 wrap.go:47] PATCH /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events/ppod-49.159e52230ed9118d: (2.791568ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54412]
I0513 18:40:31.707670  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/preemptor-pod: (1.617204ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54412]
I0513 18:40:31.707999  107570 preemption_test.go:583] Check unschedulable pods still exist and were never scheduled...
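The preemption_test.go line above marks the verification phase: each ppod-N is fetched (the GET requests that follow) and confirmed to still be pending. Below is a minimal sketch of that kind of check, assuming a client-go clientset and the pre-1.18 Get signature in use at the time of this run; it is not the test's actual helper.

package sketch

import (
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	clientset "k8s.io/client-go/kubernetes"
)

// checkPodsNeverScheduled fetches each named pod and confirms it was never
// bound to a node. This is an illustrative sketch of the kind of check the
// log line above refers to, not the helper in preemption_test.go; Get uses
// the pre-1.18 client-go signature (no context argument).
func checkPodsNeverScheduled(cs clientset.Interface, namespace string, names []string) error {
	for _, name := range names {
		pod, err := cs.CoreV1().Pods(namespace).Get(name, metav1.GetOptions{})
		if err != nil {
			return err
		}
		if pod.Spec.NodeName != "" {
			return fmt.Errorf("pod %s/%s was unexpectedly scheduled to %q", namespace, name, pod.Spec.NodeName)
		}
	}
	return nil
}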
I0513 18:40:31.710275  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-0: (2.074779ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54412]
I0513 18:40:31.714021  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-1: (3.447015ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54412]
I0513 18:40:31.715851  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-2: (1.37382ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54412]
I0513 18:40:31.717566  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-3: (1.253442ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54412]
I0513 18:40:31.719201  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-4: (1.205203ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54412]
I0513 18:40:31.721244  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-5: (1.620048ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54412]
I0513 18:40:31.722707  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-6: (1.062387ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54412]
I0513 18:40:31.724113  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-7: (1.048092ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54412]
I0513 18:40:31.725585  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-8: (1.037534ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54412]
I0513 18:40:31.727744  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-9: (1.679531ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54412]
I0513 18:40:31.729212  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-10: (1.001853ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54412]
I0513 18:40:31.730606  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-11: (1.051214ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54412]
I0513 18:40:31.732083  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-12: (1.052012ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54412]
I0513 18:40:31.733531  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-13: (1.078184ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54412]
I0513 18:40:31.734882  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-14: (937.906µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54412]
I0513 18:40:31.736351  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-15: (1.038406ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54412]
I0513 18:40:31.737751  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-16: (977.917µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54412]
I0513 18:40:31.739108  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-17: (929.162µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54412]
I0513 18:40:31.740473  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-18: (1.004587ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54412]
I0513 18:40:31.741930  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-19: (1.024544ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54412]
I0513 18:40:31.743263  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-20: (966.313µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54412]
I0513 18:40:31.744723  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-21: (1.003319ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54412]
I0513 18:40:31.746183  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-22: (977.306µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54412]
I0513 18:40:31.747537  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-23: (927.42µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54412]
I0513 18:40:31.748993  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-24: (984.98µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54412]
I0513 18:40:31.750454  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-25: (1.01288ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54412]
I0513 18:40:31.751942  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-26: (1.066756ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54412]
I0513 18:40:31.753318  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-27: (939.723µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54412]
I0513 18:40:31.754800  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-28: (1.075681ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54412]
I0513 18:40:31.756101  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-29: (903.205µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54412]
I0513 18:40:31.757503  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-30: (1.046914ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54412]
I0513 18:40:31.758894  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-31: (970.971µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54412]
I0513 18:40:31.760291  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-32: (1.04134ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54412]
I0513 18:40:31.761591  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-33: (1.004812ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54412]
I0513 18:40:31.762992  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-34: (968.099µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54412]
I0513 18:40:31.764388  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-35: (1.078813ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54412]
I0513 18:40:31.768175  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-36: (2.471052ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54412]
I0513 18:40:31.770132  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-37: (1.376758ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54412]
I0513 18:40:31.771591  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-38: (1.068093ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54412]
I0513 18:40:31.772937  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-39: (955.663µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54412]
I0513 18:40:31.779968  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-40: (6.700064ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54412]
I0513 18:40:31.781860  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-41: (1.363649ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54412]
I0513 18:40:31.783678  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-42: (1.136389ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54412]
I0513 18:40:31.785245  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-43: (1.074867ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54412]
I0513 18:40:31.786961  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-44: (975.668µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54412]
I0513 18:40:31.789354  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-45: (1.231088ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54412]
I0513 18:40:31.792504  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-46: (2.822591ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54412]
I0513 18:40:31.794195  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-47: (1.20833ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54412]
I0513 18:40:31.795797  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-48: (1.187778ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54412]
I0513 18:40:31.797320  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-49: (993.185µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54412]
I0513 18:40:31.797745  107570 preemption_test.go:598] Cleaning up all pods...
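The cleanup phase then deletes every test pod; for each deletion the scheduler observes the terminating pod and logs "Skip schedule deleting pod" instead of retrying it. The sketch below shows such a cleanup loop under stated assumptions (an immediate grace period and the era's non-context client-go Delete signature); it is not the test's own cleanup code.

package sketch

import (
	"fmt"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	clientset "k8s.io/client-go/kubernetes"
)

// deleteTestPods removes ppod-0 .. ppod-(count-1) with an immediate grace
// period. The grace period and loop shape are assumptions for illustration
// only; Delete uses the pre-1.18 client-go signature.
func deleteTestPods(cs clientset.Interface, namespace string, count int) error {
	grace := int64(0)
	for i := 0; i < count; i++ {
		name := fmt.Sprintf("ppod-%d", i)
		err := cs.CoreV1().Pods(namespace).Delete(name, &metav1.DeleteOptions{GracePeriodSeconds: &grace})
		if err != nil && !apierrors.IsNotFound(err) {
			return err
		}
	}
	return nil
}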
I0513 18:40:31.801112  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-0
I0513 18:40:31.801158  107570 scheduler.go:448] Skip schedule deleting pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-0
I0513 18:40:31.803783  107570 wrap.go:47] DELETE /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-0: (5.783422ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54412]
I0513 18:40:31.803942  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events: (2.503736ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54390]
I0513 18:40:31.807152  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-1
I0513 18:40:31.807186  107570 scheduler.go:448] Skip schedule deleting pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-1
I0513 18:40:31.807944  107570 wrap.go:47] DELETE /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-1: (3.58364ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54412]
I0513 18:40:31.808699  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events: (1.240166ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54390]
I0513 18:40:31.810712  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-2
I0513 18:40:31.810751  107570 scheduler.go:448] Skip schedule deleting pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-2
I0513 18:40:31.812295  107570 wrap.go:47] DELETE /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-2: (3.977619ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54412]
I0513 18:40:31.812587  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events: (1.567432ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54390]
I0513 18:40:31.815376  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-3
I0513 18:40:31.815415  107570 scheduler.go:448] Skip schedule deleting pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-3
I0513 18:40:31.816715  107570 wrap.go:47] DELETE /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-3: (3.72336ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54390]
I0513 18:40:31.816999  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events: (1.330374ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54412]
I0513 18:40:31.819366  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-4
I0513 18:40:31.819409  107570 scheduler.go:448] Skip schedule deleting pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-4
I0513 18:40:31.820612  107570 wrap.go:47] DELETE /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-4: (3.632077ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54390]
I0513 18:40:31.821673  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events: (1.447623ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54412]
I0513 18:40:31.823397  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-5
I0513 18:40:31.823436  107570 scheduler.go:448] Skip schedule deleting pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-5
I0513 18:40:31.824596  107570 wrap.go:47] DELETE /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-5: (3.60243ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54390]
I0513 18:40:31.825033  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events: (1.392264ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54412]
I0513 18:40:31.828102  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-6
I0513 18:40:31.828136  107570 scheduler.go:448] Skip schedule deleting pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-6
I0513 18:40:31.829475  107570 wrap.go:47] DELETE /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-6: (4.30275ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54390]
I0513 18:40:31.829709  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events: (1.262779ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54412]
I0513 18:40:31.832090  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-7
I0513 18:40:31.832169  107570 scheduler.go:448] Skip schedule deleting pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-7
I0513 18:40:31.833531  107570 wrap.go:47] DELETE /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-7: (3.758456ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54390]
I0513 18:40:31.834022  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events: (1.414108ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54412]
I0513 18:40:31.836509  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-8
I0513 18:40:31.836551  107570 scheduler.go:448] Skip schedule deleting pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-8
I0513 18:40:31.837582  107570 wrap.go:47] DELETE /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-8: (3.662359ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54390]
I0513 18:40:31.838469  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events: (1.705985ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54412]
I0513 18:40:31.840672  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-9
I0513 18:40:31.840709  107570 scheduler.go:448] Skip schedule deleting pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-9
I0513 18:40:31.842348  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events: (1.368858ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54412]
I0513 18:40:31.842477  107570 wrap.go:47] DELETE /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-9: (4.593605ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54390]
I0513 18:40:31.845120  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-10
I0513 18:40:31.845216  107570 scheduler.go:448] Skip schedule deleting pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-10
I0513 18:40:31.846755  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events: (1.161199ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54412]
I0513 18:40:31.846866  107570 wrap.go:47] DELETE /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-10: (4.099921ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54390]
I0513 18:40:31.849511  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-11
I0513 18:40:31.849551  107570 scheduler.go:448] Skip schedule deleting pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-11
I0513 18:40:31.850745  107570 wrap.go:47] DELETE /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-11: (3.578531ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54390]
I0513 18:40:31.851025  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events: (1.248004ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54412]
I0513 18:40:31.853347  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-12
I0513 18:40:31.853387  107570 scheduler.go:448] Skip schedule deleting pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-12
I0513 18:40:31.854437  107570 wrap.go:47] DELETE /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-12: (3.34921ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54390]
I0513 18:40:31.854770  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events: (1.190067ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54412]
I0513 18:40:31.857026  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-13
I0513 18:40:31.857063  107570 scheduler.go:448] Skip schedule deleting pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-13
I0513 18:40:31.858239  107570 wrap.go:47] DELETE /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-13: (3.418344ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54390]
I0513 18:40:31.859186  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events: (1.91753ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54412]
I0513 18:40:31.860695  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-14
I0513 18:40:31.860732  107570 scheduler.go:448] Skip schedule deleting pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-14
I0513 18:40:31.861892  107570 wrap.go:47] DELETE /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-14: (3.339224ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54390]
I0513 18:40:31.862259  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events: (1.30153ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54412]
I0513 18:40:31.864459  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-15
I0513 18:40:31.864496  107570 scheduler.go:448] Skip schedule deleting pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-15
I0513 18:40:31.865581  107570 wrap.go:47] DELETE /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-15: (3.39586ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54390]
I0513 18:40:31.866348  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events: (1.58928ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54412]
I0513 18:40:31.868424  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-16
I0513 18:40:31.868457  107570 scheduler.go:448] Skip schedule deleting pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-16
I0513 18:40:31.869639  107570 wrap.go:47] DELETE /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-16: (3.64295ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54390]
I0513 18:40:31.869902  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events: (1.252096ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54412]
I0513 18:40:31.872221  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-17
I0513 18:40:31.872259  107570 scheduler.go:448] Skip schedule deleting pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-17
I0513 18:40:31.873577  107570 wrap.go:47] DELETE /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-17: (3.635971ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54390]
I0513 18:40:31.874060  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events: (1.570049ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54412]
I0513 18:40:31.876435  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-18
I0513 18:40:31.876467  107570 scheduler.go:448] Skip schedule deleting pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-18
I0513 18:40:31.877440  107570 wrap.go:47] DELETE /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-18: (3.407373ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54390]
I0513 18:40:31.878157  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events: (1.467882ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54412]
I0513 18:40:31.881429  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-19
I0513 18:40:31.881468  107570 scheduler.go:448] Skip schedule deleting pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-19
I0513 18:40:31.882631  107570 wrap.go:47] DELETE /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-19: (4.806631ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54390]
I0513 18:40:31.884530  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events: (2.814971ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54412]
I0513 18:40:31.887256  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-20
I0513 18:40:31.887294  107570 scheduler.go:448] Skip schedule deleting pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-20
I0513 18:40:31.890750  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events: (3.151732ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54412]
I0513 18:40:31.891877  107570 wrap.go:47] DELETE /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-20: (8.599961ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54390]
I0513 18:40:31.895113  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-21
I0513 18:40:31.895163  107570 scheduler.go:448] Skip schedule deleting pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-21
I0513 18:40:31.896590  107570 wrap.go:47] DELETE /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-21: (4.353299ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54390]
I0513 18:40:31.899851  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events: (2.911004ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54412]
I0513 18:40:31.901012  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-22
I0513 18:40:31.901058  107570 scheduler.go:448] Skip schedule deleting pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-22
I0513 18:40:31.902408  107570 wrap.go:47] DELETE /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-22: (5.305349ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54390]
I0513 18:40:31.906097  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-23
I0513 18:40:31.906139  107570 scheduler.go:448] Skip schedule deleting pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-23
I0513 18:40:31.907155  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events: (2.987288ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54412]
I0513 18:40:31.908625  107570 wrap.go:47] DELETE /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-23: (5.93079ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54390]
I0513 18:40:31.910224  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events: (1.377483ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54412]
I0513 18:40:31.911472  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-24
I0513 18:40:31.911518  107570 scheduler.go:448] Skip schedule deleting pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-24
I0513 18:40:31.912707  107570 wrap.go:47] DELETE /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-24: (3.611966ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54390]
I0513 18:40:31.914492  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events: (2.63399ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54412]
I0513 18:40:31.916282  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-25
I0513 18:40:31.916318  107570 scheduler.go:448] Skip schedule deleting pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-25
I0513 18:40:31.917811  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events: (1.272681ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54412]
I0513 18:40:31.918148  107570 wrap.go:47] DELETE /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-25: (5.115372ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54390]
I0513 18:40:31.921012  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-26
I0513 18:40:31.921089  107570 scheduler.go:448] Skip schedule deleting pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-26
I0513 18:40:31.922787  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events: (1.416825ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54412]
I0513 18:40:31.922852  107570 wrap.go:47] DELETE /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-26: (4.408011ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54390]
I0513 18:40:31.925852  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-27
I0513 18:40:31.925890  107570 scheduler.go:448] Skip schedule deleting pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-27
I0513 18:40:31.927084  107570 wrap.go:47] DELETE /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-27: (3.935244ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54390]
I0513 18:40:31.928105  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events: (1.792437ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54412]
I0513 18:40:31.930125  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-28
I0513 18:40:31.930166  107570 scheduler.go:448] Skip schedule deleting pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-28
I0513 18:40:31.931250  107570 wrap.go:47] DELETE /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-28: (3.776149ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54390]
I0513 18:40:31.932269  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events: (1.388199ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54412]
I0513 18:40:31.934129  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-29
I0513 18:40:31.934173  107570 scheduler.go:448] Skip schedule deleting pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-29
I0513 18:40:31.935536  107570 wrap.go:47] DELETE /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-29: (3.929581ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54390]
I0513 18:40:31.937554  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events: (3.177554ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54412]
I0513 18:40:31.938872  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-30
I0513 18:40:31.938898  107570 scheduler.go:448] Skip schedule deleting pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-30
I0513 18:40:31.940675  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events: (1.347268ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54412]
I0513 18:40:31.941169  107570 wrap.go:47] DELETE /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-30: (4.894992ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54390]
I0513 18:40:31.943991  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-31
I0513 18:40:31.944026  107570 scheduler.go:448] Skip schedule deleting pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-31
I0513 18:40:31.945113  107570 wrap.go:47] DELETE /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-31: (3.579569ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54390]
I0513 18:40:31.945600  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events: (1.320038ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54412]
I0513 18:40:31.947739  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-32
I0513 18:40:31.947807  107570 scheduler.go:448] Skip schedule deleting pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-32
I0513 18:40:31.949031  107570 wrap.go:47] DELETE /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-32: (3.58142ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54390]
I0513 18:40:31.949529  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events: (1.33543ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54412]
I0513 18:40:31.951851  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-33
I0513 18:40:31.951886  107570 scheduler.go:448] Skip schedule deleting pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-33
I0513 18:40:31.952968  107570 wrap.go:47] DELETE /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-33: (3.422605ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54390]
I0513 18:40:31.953589  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events: (1.430201ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54412]
I0513 18:40:31.955774  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-34
I0513 18:40:31.955812  107570 scheduler.go:448] Skip schedule deleting pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-34
I0513 18:40:31.957471  107570 wrap.go:47] DELETE /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-34: (4.092705ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54390]
I0513 18:40:31.957479  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events: (1.416059ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54412]
I0513 18:40:31.960220  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-35
I0513 18:40:31.960260  107570 scheduler.go:448] Skip schedule deleting pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-35
I0513 18:40:31.961510  107570 wrap.go:47] DELETE /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-35: (3.747075ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54390]
I0513 18:40:31.961852  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events: (1.331016ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54412]
I0513 18:40:31.964070  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-36
I0513 18:40:31.964107  107570 scheduler.go:448] Skip schedule deleting pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-36
I0513 18:40:31.965307  107570 wrap.go:47] DELETE /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-36: (3.506409ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54390]
I0513 18:40:31.965670  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events: (1.368896ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54412]
I0513 18:40:31.968377  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-37
I0513 18:40:31.968418  107570 scheduler.go:448] Skip schedule deleting pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-37
I0513 18:40:31.969350  107570 wrap.go:47] DELETE /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-37: (3.443205ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54390]
I0513 18:40:31.969924  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events: (1.267702ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54412]
I0513 18:40:31.972343  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-38
I0513 18:40:31.972382  107570 scheduler.go:448] Skip schedule deleting pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-38
I0513 18:40:31.974010  107570 wrap.go:47] DELETE /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-38: (4.016029ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54390]
I0513 18:40:31.974167  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events: (1.54623ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54412]
I0513 18:40:31.976981  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-39
I0513 18:40:31.977016  107570 scheduler.go:448] Skip schedule deleting pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-39
I0513 18:40:31.978009  107570 wrap.go:47] DELETE /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-39: (3.557257ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54412]
I0513 18:40:31.979697  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events: (2.431155ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54390]
I0513 18:40:31.980947  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-40
I0513 18:40:31.980987  107570 scheduler.go:448] Skip schedule deleting pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-40
I0513 18:40:31.981894  107570 wrap.go:47] DELETE /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-40: (3.578236ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54412]
I0513 18:40:31.982316  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events: (1.006683ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54390]
I0513 18:40:31.984616  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-41
I0513 18:40:31.984667  107570 scheduler.go:448] Skip schedule deleting pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-41
I0513 18:40:31.986134  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events: (1.142254ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54390]
I0513 18:40:31.986976  107570 wrap.go:47] DELETE /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-41: (4.804465ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54412]
I0513 18:40:31.994801  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-42
I0513 18:40:31.995588  107570 scheduler.go:448] Skip schedule deleting pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-42
I0513 18:40:31.996694  107570 wrap.go:47] DELETE /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-42: (9.37549ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54412]
I0513 18:40:31.997215  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events: (1.318845ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54390]
I0513 18:40:31.999696  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-43
I0513 18:40:31.999739  107570 scheduler.go:448] Skip schedule deleting pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-43
I0513 18:40:32.001042  107570 wrap.go:47] DELETE /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-43: (3.993765ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54412]
I0513 18:40:32.001425  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events: (1.403569ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54390]
I0513 18:40:32.004254  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-44
I0513 18:40:32.004296  107570 scheduler.go:448] Skip schedule deleting pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-44
I0513 18:40:32.005908  107570 wrap.go:47] DELETE /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-44: (3.898284ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54390]
I0513 18:40:32.007677  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events: (3.046449ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54412]
I0513 18:40:32.008999  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-45
I0513 18:40:32.009035  107570 scheduler.go:448] Skip schedule deleting pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-45
I0513 18:40:32.010077  107570 wrap.go:47] DELETE /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-45: (3.695764ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54390]
I0513 18:40:32.012808  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events: (3.5546ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54412]
I0513 18:40:32.013073  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-46
I0513 18:40:32.013098  107570 scheduler.go:448] Skip schedule deleting pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-46
I0513 18:40:32.016155  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events: (2.747623ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54412]
I0513 18:40:32.016172  107570 wrap.go:47] DELETE /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-46: (5.771693ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54390]
I0513 18:40:32.018729  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-47
I0513 18:40:32.018765  107570 scheduler.go:448] Skip schedule deleting pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-47
I0513 18:40:32.019943  107570 wrap.go:47] DELETE /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-47: (3.440678ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54390]
I0513 18:40:32.020189  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events: (1.201443ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54412]
I0513 18:40:32.022974  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-48
I0513 18:40:32.023012  107570 scheduler.go:448] Skip schedule deleting pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-48
I0513 18:40:32.024105  107570 wrap.go:47] DELETE /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-48: (3.632213ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54390]
I0513 18:40:32.024908  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events: (1.659839ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54412]
I0513 18:40:32.026468  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-49
I0513 18:40:32.026509  107570 scheduler.go:448] Skip schedule deleting pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-49
I0513 18:40:32.027997  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events: (1.191748ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54412]
I0513 18:40:32.028069  107570 wrap.go:47] DELETE /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-49: (3.649809ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54390]
I0513 18:40:32.029267  107570 wrap.go:47] DELETE /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/rpod-0: (930.797µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54390]
I0513 18:40:32.033113  107570 wrap.go:47] DELETE /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/rpod-1: (3.547439ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54390]
I0513 18:40:32.037030  107570 wrap.go:47] DELETE /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/preemptor-pod: (3.601153ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54390]
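(The repeated "Skip schedule deleting pod" entries above show the scheduler declining to schedule pods that already carry a deletion timestamp while the test tears down the previous round. A minimal Go sketch of that guard, inferred only from the scheduler.go:448 message in this log and not copied from the kube-scheduler source, is:

    package main

    import (
        v1 "k8s.io/api/core/v1"
        "k8s.io/klog"
    )

    // skipIfDeleting mirrors the behaviour implied by the log lines above:
    // a pod that already has a DeletionTimestamp is dropped from the current
    // scheduling cycle instead of being scheduled. The interleaved
    // POST .../events entries correspond to the event recorded for the skip.
    func skipIfDeleting(pod *v1.Pod) bool {
        if pod.DeletionTimestamp != nil {
            klog.V(3).Infof("Skip schedule deleting pod: %v/%v", pod.Namespace, pod.Name)
            return true
        }
        return false
    }

This is a sketch of the assumed shape of the check, shown only to make the log pattern readable.)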
I0513 18:40:32.039233  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-0: (801.317µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54390]
I0513 18:40:32.041784  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-1: (965.513µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54390]
I0513 18:40:32.044191  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-2: (891.141µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54390]
I0513 18:40:32.046524  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-3: (786.9µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54390]
I0513 18:40:32.048977  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-4: (892.209µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54390]
I0513 18:40:32.051394  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-5: (925.751µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54390]
I0513 18:40:32.054144  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-6: (1.183196ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54390]
I0513 18:40:32.056608  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-7: (931.086µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54390]
I0513 18:40:32.059105  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-8: (966.548µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54390]
I0513 18:40:32.061508  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-9: (865.032µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54390]
I0513 18:40:32.063908  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-10: (890.743µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54390]
I0513 18:40:32.066386  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-11: (953.857µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54390]
I0513 18:40:32.068956  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-12: (883.848µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54390]
I0513 18:40:32.071499  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-13: (1.084253ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54390]
I0513 18:40:32.073790  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-14: (787.14µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54390]
I0513 18:40:32.076189  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-15: (858.243µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54390]
I0513 18:40:32.078566  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-16: (912.821µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54390]
I0513 18:40:32.080911  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-17: (838.019µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54390]
I0513 18:40:32.083320  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-18: (834.998µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54390]
I0513 18:40:32.085619  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-19: (858.884µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54390]
I0513 18:40:32.088152  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-20: (999.492µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54390]
I0513 18:40:32.090560  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-21: (902.675µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54390]
I0513 18:40:32.095303  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-22: (981.734µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54390]
I0513 18:40:32.097689  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-23: (886.749µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54390]
I0513 18:40:32.100184  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-24: (930.525µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54390]
I0513 18:40:32.102540  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-25: (861.731µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54390]
I0513 18:40:32.104872  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-26: (823.762µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54390]
I0513 18:40:32.107374  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-27: (958.853µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54390]
I0513 18:40:32.109951  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-28: (1.131785ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54390]
I0513 18:40:32.113047  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-29: (1.617711ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54390]
I0513 18:40:32.115917  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-30: (1.167139ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54390]
I0513 18:40:32.119474  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-31: (1.359268ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54390]
I0513 18:40:32.122251  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-32: (1.173002ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54390]
I0513 18:40:32.124642  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-33: (880.6µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54390]
I0513 18:40:32.127088  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-34: (882.233µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54390]
I0513 18:40:32.129440  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-35: (878.566µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54390]
I0513 18:40:32.132086  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-36: (969.768µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54390]
I0513 18:40:32.138548  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-37: (4.896257ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54390]
I0513 18:40:32.141012  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-38: (981.547µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54390]
I0513 18:40:32.143442  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-39: (967.67µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54390]
I0513 18:40:32.146063  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-40: (1.036847ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54390]
I0513 18:40:32.148524  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-41: (887.323µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54390]
I0513 18:40:32.151030  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-42: (1.032992ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54390]
I0513 18:40:32.153464  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-43: (1.004757ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54390]
I0513 18:40:32.156042  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-44: (1.056416ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54390]
I0513 18:40:32.158512  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-45: (981.165µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54390]
I0513 18:40:32.161861  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-46: (1.092488ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54390]
I0513 18:40:32.165149  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-47: (1.801402ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54390]
I0513 18:40:32.167572  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-48: (915.779µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54390]
I0513 18:40:32.171262  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-49: (1.83617ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54390]
I0513 18:40:32.173803  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/rpod-0: (1.00037ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54390]
I0513 18:40:32.176292  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/rpod-1: (982.26µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54390]
I0513 18:40:32.178784  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/preemptor-pod: (1.0182ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54390]
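(The block of GET requests returning 404 above is the test verifying that every pod from the previous round is really gone before it recreates rpod-0, rpod-1 and the preemptor pod. A minimal sketch of that kind of wait loop, using the pre-1.18 client-go Get signature and illustrative interval/timeout values that are assumptions, not taken from the test:

    package main

    import (
        "time"

        apierrors "k8s.io/apimachinery/pkg/api/errors"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
    )

    // waitForPodGone polls the API server until GET for the pod returns
    // NotFound, matching the GET ... 404 sequence above that confirms
    // cleanup finished before the next round of the race test starts.
    func waitForPodGone(cs kubernetes.Interface, ns, name string) error {
        return wait.Poll(100*time.Millisecond, 30*time.Second, func() (bool, error) {
            _, err := cs.CoreV1().Pods(ns).Get(name, metav1.GetOptions{})
            if apierrors.IsNotFound(err) {
                return true, nil // pod is gone
            }
            return false, err // keep polling while the pod still exists
        })
    }

The helper name and timing constants are hypothetical; only the GET-until-404 pattern is taken from the log.)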
I0513 18:40:32.180985  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods: (1.703975ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54390]
I0513 18:40:32.181447  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/rpod-0
I0513 18:40:32.181468  107570 scheduler.go:452] Attempting to schedule pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/rpod-0
I0513 18:40:32.181598  107570 scheduler_binder.go:256] AssumePodVolumes for pod "preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/rpod-0", node "node1"
I0513 18:40:32.181617  107570 scheduler_binder.go:266] AssumePodVolumes for pod "preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/rpod-0", node "node1": all PVCs bound and nothing to do
I0513 18:40:32.181678  107570 factory.go:711] Attempting to bind rpod-0 to node1
I0513 18:40:32.183620  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/rpod-0/binding: (1.646052ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54390]
I0513 18:40:32.183773  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods: (1.459351ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54412]
I0513 18:40:32.183799  107570 scheduler.go:570] pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/rpod-0 is bound successfully on node node1, 1 nodes evaluated, 1 nodes were found feasible
I0513 18:40:32.184049  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/rpod-1
I0513 18:40:32.184072  107570 scheduler.go:452] Attempting to schedule pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/rpod-1
I0513 18:40:32.184198  107570 scheduler_binder.go:256] AssumePodVolumes for pod "preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/rpod-1", node "node1"
I0513 18:40:32.184216  107570 scheduler_binder.go:266] AssumePodVolumes for pod "preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/rpod-1", node "node1": all PVCs bound and nothing to do
I0513 18:40:32.184262  107570 factory.go:711] Attempting to bind rpod-1 to node1
I0513 18:40:32.186023  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events: (1.945994ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54412]
I0513 18:40:32.186095  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/rpod-1/binding: (1.592925ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54390]
I0513 18:40:32.186234  107570 scheduler.go:570] pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/rpod-1 is bound successfully on node node1, 1 nodes evaluated, 1 nodes were found feasible
I0513 18:40:32.187804  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events: (1.38439ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54390]
I0513 18:40:32.286430  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/rpod-0: (1.821694ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54390]
I0513 18:40:32.389946  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/rpod-1: (2.54882ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54390]
I0513 18:40:32.390477  107570 preemption_test.go:561] Creating the preemptor pod...
I0513 18:40:32.394325  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/preemptor-pod
I0513 18:40:32.394389  107570 scheduler.go:452] Attempting to schedule pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/preemptor-pod
I0513 18:40:32.394512  107570 factory.go:649] Unable to schedule preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 18:40:32.394628  107570 factory.go:720] Updating pod condition for preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0513 18:40:32.395847  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods: (4.982043ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54390]
I0513 18:40:32.397574  107570 preemption_test.go:567] Creating additional pods...
I0513 18:40:32.399142  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events: (2.881724ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54432]
I0513 18:40:32.400476  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods: (2.681101ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54390]
I0513 18:40:32.400778  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/preemptor-pod: (3.680867ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54430]
I0513 18:40:32.402980  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods: (1.807074ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54390]
I0513 18:40:32.403230  107570 wrap.go:47] PUT /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/preemptor-pod/status: (6.959485ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54412]
I0513 18:40:32.405464  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/preemptor-pod: (1.131012ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54430]
I0513 18:40:32.405689  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods: (1.351178ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54432]
I0513 18:40:32.405729  107570 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
E0513 18:40:32.405880  107570 utils.go:79] pod.Status.StartTime is nil for pod rpod-1. Should not reach here.
E0513 18:40:32.405896  107570 utils.go:79] pod.Status.StartTime is nil for pod rpod-0. Should not reach here.
I0513 18:40:32.407667  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods: (1.625443ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54432]
I0513 18:40:32.407702  107570 wrap.go:47] PUT /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/preemptor-pod/status: (1.565125ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54430]
I0513 18:40:32.410047  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods: (1.605491ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54430]
I0513 18:40:32.412164  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods: (1.7594ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54430]
I0513 18:40:32.413772  107570 wrap.go:47] DELETE /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/rpod-0: (5.401901ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54432]
I0513 18:40:32.413966  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods: (1.391617ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54430]
I0513 18:40:32.414319  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-0
I0513 18:40:32.414381  107570 scheduler.go:452] Attempting to schedule pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-0
I0513 18:40:32.414545  107570 factory.go:649] Unable to schedule preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-0: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 18:40:32.414616  107570 factory.go:720] Updating pod condition for preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-0 to (PodScheduled==False, Reason=Unschedulable)
I0513 18:40:32.417040  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-0: (1.689286ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54432]
I0513 18:40:32.417115  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events: (1.715137ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54430]
I0513 18:40:32.418708  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods: (1.313312ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54430]
I0513 18:40:32.419006  107570 wrap.go:47] PUT /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-0/status: (2.323142ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54434]
I0513 18:40:32.419500  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events: (1.854499ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54432]
I0513 18:40:32.421076  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods: (1.66303ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54430]
I0513 18:40:32.421774  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-0: (1.220349ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54432]
I0513 18:40:32.422021  107570 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 18:40:32.422189  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-1
I0513 18:40:32.422242  107570 scheduler.go:452] Attempting to schedule pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-1
I0513 18:40:32.422362  107570 factory.go:649] Unable to schedule preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-1: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 18:40:32.422426  107570 factory.go:720] Updating pod condition for preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-1 to (PodScheduled==False, Reason=Unschedulable)
I0513 18:40:32.423517  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods: (2.001348ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54430]
I0513 18:40:32.424335  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-1: (1.589278ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54434]
I0513 18:40:32.424747  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events: (1.535052ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54436]
I0513 18:40:32.425885  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods: (1.830755ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54430]
I0513 18:40:32.425972  107570 wrap.go:47] PUT /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-1/status: (3.322803ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54432]
I0513 18:40:32.427215  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-1: (882.494µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54436]
I0513 18:40:32.427475  107570 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 18:40:32.427689  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods: (1.306885ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54434]
I0513 18:40:32.427775  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-2
I0513 18:40:32.427798  107570 scheduler.go:452] Attempting to schedule pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-2
I0513 18:40:32.427968  107570 factory.go:649] Unable to schedule preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-2: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 18:40:32.428018  107570 factory.go:720] Updating pod condition for preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-2 to (PodScheduled==False, Reason=Unschedulable)
I0513 18:40:32.430258  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events: (1.66576ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54438]
I0513 18:40:32.430747  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods: (2.434444ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54436]
I0513 18:40:32.430860  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-2: (1.862102ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54440]
I0513 18:40:32.431226  107570 wrap.go:47] PUT /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-2/status: (2.994004ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54434]
I0513 18:40:32.432487  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods: (1.291165ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54436]
I0513 18:40:32.433464  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-2: (1.852101ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54434]
I0513 18:40:32.433759  107570 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 18:40:32.434221  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-3
I0513 18:40:32.434243  107570 scheduler.go:452] Attempting to schedule pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-3
I0513 18:40:32.434333  107570 factory.go:649] Unable to schedule preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-3: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 18:40:32.434452  107570 factory.go:720] Updating pod condition for preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-3 to (PodScheduled==False, Reason=Unschedulable)
I0513 18:40:32.434750  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods: (1.398455ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54436]
I0513 18:40:32.435989  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-3: (1.299722ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54438]
I0513 18:40:32.436392  107570 wrap.go:47] PUT /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-3/status: (1.695237ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54434]
I0513 18:40:32.436973  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods: (1.272287ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54442]
I0513 18:40:32.438381  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events: (2.604149ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54436]
I0513 18:40:32.439172  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-3: (1.00702ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54438]
I0513 18:40:32.439385  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods: (1.954267ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54442]
I0513 18:40:32.439549  107570 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 18:40:32.439758  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-4
I0513 18:40:32.439777  107570 scheduler.go:452] Attempting to schedule pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-4
I0513 18:40:32.439928  107570 factory.go:649] Unable to schedule preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-4: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 18:40:32.439976  107570 factory.go:720] Updating pod condition for preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-4 to (PodScheduled==False, Reason=Unschedulable)
I0513 18:40:32.441413  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-4: (1.091105ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54444]
I0513 18:40:32.441614  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods: (1.809297ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54436]
I0513 18:40:32.442458  107570 wrap.go:47] PUT /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-4/status: (1.387958ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54446]
I0513 18:40:32.442490  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events: (1.690649ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54448]
I0513 18:40:32.443480  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods: (1.272928ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54436]
I0513 18:40:32.443844  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-4: (915.962µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54448]
I0513 18:40:32.444104  107570 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 18:40:32.444260  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-5
I0513 18:40:32.444294  107570 scheduler.go:452] Attempting to schedule pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-5
I0513 18:40:32.444409  107570 factory.go:649] Unable to schedule preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-5: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 18:40:32.444488  107570 factory.go:720] Updating pod condition for preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-5 to (PodScheduled==False, Reason=Unschedulable)
I0513 18:40:32.445358  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods: (1.319701ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54436]
I0513 18:40:32.445738  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-5: (1.049675ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54448]
I0513 18:40:32.446666  107570 wrap.go:47] PUT /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-5/status: (1.774243ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54444]
I0513 18:40:32.447892  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events: (2.759315ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54450]
I0513 18:40:32.448353  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods: (1.920145ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54436]
I0513 18:40:32.448687  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-5: (1.635716ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54444]
I0513 18:40:32.449111  107570 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 18:40:32.449248  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-6
I0513 18:40:32.449261  107570 scheduler.go:452] Attempting to schedule pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-6
I0513 18:40:32.449329  107570 factory.go:649] Unable to schedule preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-6: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 18:40:32.449364  107570 factory.go:720] Updating pod condition for preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-6 to (PodScheduled==False, Reason=Unschedulable)
I0513 18:40:32.451519  107570 wrap.go:47] PUT /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-6/status: (1.922906ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54444]
I0513 18:40:32.451809  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-6: (2.069938ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54448]
I0513 18:40:32.452089  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods: (3.094137ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54450]
I0513 18:40:32.454069  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events: (1.631396ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54452]
I0513 18:40:32.454117  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods: (1.577978ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54448]
I0513 18:40:32.454082  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-6: (1.238711ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54450]
I0513 18:40:32.454390  107570 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 18:40:32.454537  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-0
I0513 18:40:32.454554  107570 scheduler.go:452] Attempting to schedule pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-0
I0513 18:40:32.455083  107570 factory.go:649] Unable to schedule preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-0: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 18:40:32.455130  107570 factory.go:720] Updating pod condition for preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-0 to (PodScheduled==False, Reason=Unschedulable)
I0513 18:40:32.455883  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods: (1.396189ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54448]
I0513 18:40:32.456540  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-0: (1.053509ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54452]
I0513 18:40:32.456736  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-0: (1.060827ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54454]
I0513 18:40:32.456888  107570 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 18:40:32.457087  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-7
I0513 18:40:32.457125  107570 scheduler.go:452] Attempting to schedule pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-7
I0513 18:40:32.457312  107570 factory.go:649] Unable to schedule preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-7: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 18:40:32.457802  107570 factory.go:720] Updating pod condition for preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-7 to (PodScheduled==False, Reason=Unschedulable)
I0513 18:40:32.458459  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods: (1.968723ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54448]
I0513 18:40:32.458308  107570 wrap.go:47] PATCH /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events/ppod-0.159e52233cd37bcf: (1.927147ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54456]
I0513 18:40:32.460123  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-7: (1.994558ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54454]
I0513 18:40:32.460753  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods: (1.756536ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54448]
I0513 18:40:32.461160  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events: (1.875893ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54456]
I0513 18:40:32.461299  107570 wrap.go:47] PUT /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-7/status: (2.281859ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54452]
I0513 18:40:32.462725  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-7: (978.157µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54456]
I0513 18:40:32.463329  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods: (1.408553ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54454]
I0513 18:40:32.463660  107570 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 18:40:32.463899  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-8
I0513 18:40:32.463920  107570 scheduler.go:452] Attempting to schedule pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-8
I0513 18:40:32.464006  107570 factory.go:649] Unable to schedule preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-8: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 18:40:32.464049  107570 factory.go:720] Updating pod condition for preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-8 to (PodScheduled==False, Reason=Unschedulable)
I0513 18:40:32.465050  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods: (1.380682ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54454]
I0513 18:40:32.465943  107570 wrap.go:47] PUT /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-8/status: (1.682398ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54456]
I0513 18:40:32.467096  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events: (1.772339ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54454]
I0513 18:40:32.467525  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-8: (2.639176ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54458]
I0513 18:40:32.467568  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods: (1.999717ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54460]
E0513 18:40:32.468674  107570 factory.go:686] pod is already present in the activeQ
I0513 18:40:32.469596  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-8: (1.029148ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54456]
I0513 18:40:32.469891  107570 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 18:40:32.470081  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-9
I0513 18:40:32.470101  107570 scheduler.go:452] Attempting to schedule pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-9
I0513 18:40:32.470181  107570 factory.go:649] Unable to schedule preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-9: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 18:40:32.470228  107570 factory.go:720] Updating pod condition for preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-9 to (PodScheduled==False, Reason=Unschedulable)
I0513 18:40:32.470489  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods: (1.670719ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54462]
I0513 18:40:32.472804  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events: (1.884596ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54462]
I0513 18:40:32.473118  107570 wrap.go:47] PUT /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-9/status: (2.640696ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54456]
I0513 18:40:32.473229  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods: (2.249142ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54466]
I0513 18:40:32.473339  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-9: (2.657884ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54464]
E0513 18:40:32.474556  107570 factory.go:686] pod is already present in the activeQ
I0513 18:40:32.475719  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-9: (1.210397ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54464]
I0513 18:40:32.476160  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods: (1.515458ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54456]
I0513 18:40:32.476303  107570 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 18:40:32.476700  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-10
I0513 18:40:32.476766  107570 scheduler.go:452] Attempting to schedule pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-10
I0513 18:40:32.476929  107570 factory.go:649] Unable to schedule preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-10: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 18:40:32.476989  107570 factory.go:720] Updating pod condition for preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-10 to (PodScheduled==False, Reason=Unschedulable)
I0513 18:40:32.478786  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-10: (1.558919ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54464]
I0513 18:40:32.479187  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods: (2.236648ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54456]
I0513 18:40:32.480360  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events: (2.681782ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54470]
I0513 18:40:32.480842  107570 wrap.go:47] PUT /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-10/status: (2.284241ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54468]
I0513 18:40:32.481941  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods: (2.25334ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54456]
I0513 18:40:32.483151  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-10: (1.70055ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54470]
I0513 18:40:32.483394  107570 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 18:40:32.483603  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-11
I0513 18:40:32.483665  107570 scheduler.go:452] Attempting to schedule pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-11
I0513 18:40:32.483788  107570 factory.go:649] Unable to schedule preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-11: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 18:40:32.483843  107570 factory.go:720] Updating pod condition for preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-11 to (PodScheduled==False, Reason=Unschedulable)
I0513 18:40:32.485015  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods: (1.723227ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54456]
I0513 18:40:32.486152  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events: (1.69658ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54472]
I0513 18:40:32.490456  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods: (4.987819ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54456]
I0513 18:40:32.490694  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-11: (6.62408ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54470]
I0513 18:40:32.493184  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods: (1.76878ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54456]
I0513 18:40:32.494089  107570 wrap.go:47] PUT /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-11/status: (1.808785ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54464]
I0513 18:40:32.495663  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-11: (1.179139ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54464]
I0513 18:40:32.496065  107570 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 18:40:32.496220  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods: (2.058661ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54470]
I0513 18:40:32.496337  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-12
I0513 18:40:32.496379  107570 scheduler.go:452] Attempting to schedule pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-12
I0513 18:40:32.496506  107570 factory.go:649] Unable to schedule preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-12: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 18:40:32.496563  107570 factory.go:720] Updating pod condition for preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-12 to (PodScheduled==False, Reason=Unschedulable)
I0513 18:40:32.498095  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-12: (1.142225ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54472]
I0513 18:40:32.498632  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events: (1.45658ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54474]
I0513 18:40:32.500206  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods: (1.724477ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54472]
I0513 18:40:32.502599  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods: (1.564014ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54472]
I0513 18:40:32.504769  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods: (1.593795ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54472]
I0513 18:40:32.505186  107570 wrap.go:47] PUT /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-12/status: (8.236707ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54464]
I0513 18:40:32.506556  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-12: (1.035877ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54464]
I0513 18:40:32.506980  107570 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 18:40:32.507319  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-13
I0513 18:40:32.507349  107570 scheduler.go:452] Attempting to schedule pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-13
I0513 18:40:32.507445  107570 factory.go:649] Unable to schedule preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-13: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 18:40:32.507488  107570 factory.go:720] Updating pod condition for preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-13 to (PodScheduled==False, Reason=Unschedulable)
I0513 18:40:32.507541  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods: (2.26874ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54472]
I0513 18:40:32.508917  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-13: (1.167854ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54464]
I0513 18:40:32.509373  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events: (1.577513ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54474]
I0513 18:40:32.509800  107570 wrap.go:47] PUT /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-13/status: (1.740716ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54478]
I0513 18:40:32.510397  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods: (1.328812ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54472]
I0513 18:40:32.511318  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-13: (937.75µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54474]
I0513 18:40:32.511527  107570 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 18:40:32.511688  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-14
I0513 18:40:32.511701  107570 scheduler.go:452] Attempting to schedule pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-14
I0513 18:40:32.511798  107570 factory.go:649] Unable to schedule preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-14: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 18:40:32.511859  107570 factory.go:720] Updating pod condition for preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-14 to (PodScheduled==False, Reason=Unschedulable)
I0513 18:40:32.512536  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods: (1.74401ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54472]
I0513 18:40:32.513476  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-14: (1.376552ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54464]
I0513 18:40:32.513904  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events: (1.434258ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54480]
I0513 18:40:32.513971  107570 wrap.go:47] PUT /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-14/status: (1.62792ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54474]
I0513 18:40:32.514600  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods: (1.339994ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54472]
I0513 18:40:32.515157  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-14: (867.5µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54474]
I0513 18:40:32.515387  107570 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 18:40:32.515532  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-15
I0513 18:40:32.515563  107570 scheduler.go:452] Attempting to schedule pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-15
I0513 18:40:32.515658  107570 factory.go:649] Unable to schedule preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-15: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 18:40:32.515697  107570 factory.go:720] Updating pod condition for preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-15 to (PodScheduled==False, Reason=Unschedulable)
I0513 18:40:32.516756  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods: (1.440966ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54472]
I0513 18:40:32.518618  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-15: (2.587869ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54464]
I0513 18:40:32.519027  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events: (2.741896ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54482]
I0513 18:40:32.519046  107570 wrap.go:47] PUT /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-15/status: (3.147485ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54474]
I0513 18:40:32.519046  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods: (1.94693ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54472]
I0513 18:40:32.520681  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-15: (1.027908ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54482]
I0513 18:40:32.520967  107570 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 18:40:32.521215  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-16
I0513 18:40:32.521235  107570 scheduler.go:452] Attempting to schedule pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-16
I0513 18:40:32.521264  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods: (1.609688ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54464]
I0513 18:40:32.521336  107570 factory.go:649] Unable to schedule preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-16: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 18:40:32.521392  107570 factory.go:720] Updating pod condition for preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-16 to (PodScheduled==False, Reason=Unschedulable)
I0513 18:40:32.523782  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events: (1.813126ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54484]
I0513 18:40:32.523998  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-16: (2.386106ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54482]
I0513 18:40:32.524212  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods: (2.053232ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54486]
I0513 18:40:32.524242  107570 wrap.go:47] PUT /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-16/status: (2.661111ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54464]
I0513 18:40:32.525671  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-16: (1.077143ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54482]
I0513 18:40:32.525919  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods: (1.290542ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54484]
I0513 18:40:32.525954  107570 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 18:40:32.526142  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-17
I0513 18:40:32.526163  107570 scheduler.go:452] Attempting to schedule pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-17
I0513 18:40:32.526275  107570 factory.go:649] Unable to schedule preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-17: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 18:40:32.526317  107570 factory.go:720] Updating pod condition for preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-17 to (PodScheduled==False, Reason=Unschedulable)
I0513 18:40:32.527418  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-17: (903.359µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54484]
I0513 18:40:32.527939  107570 wrap.go:47] PUT /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-17/status: (1.391806ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54482]
I0513 18:40:32.528783  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events: (1.962663ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54488]
I0513 18:40:32.529317  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-17: (1.033766ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54482]
I0513 18:40:32.529570  107570 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 18:40:32.529773  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-18
I0513 18:40:32.529790  107570 scheduler.go:452] Attempting to schedule pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-18
I0513 18:40:32.529941  107570 factory.go:649] Unable to schedule preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-18: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 18:40:32.529990  107570 factory.go:720] Updating pod condition for preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-18 to (PodScheduled==False, Reason=Unschedulable)
I0513 18:40:32.531260  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-18: (1.020824ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54484]
I0513 18:40:32.531731  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events: (1.233042ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54490]
I0513 18:40:32.532253  107570 wrap.go:47] PUT /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-18/status: (2.033807ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54488]
I0513 18:40:32.533949  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-18: (1.081327ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54490]
I0513 18:40:32.534186  107570 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 18:40:32.534374  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-19
I0513 18:40:32.534390  107570 scheduler.go:452] Attempting to schedule pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-19
I0513 18:40:32.534493  107570 factory.go:649] Unable to schedule preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-19: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 18:40:32.534539  107570 factory.go:720] Updating pod condition for preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-19 to (PodScheduled==False, Reason=Unschedulable)
I0513 18:40:32.535740  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-19: (946.789µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54484]
I0513 18:40:32.536717  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events: (1.704245ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54492]
I0513 18:40:32.536959  107570 wrap.go:47] PUT /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-19/status: (2.13402ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54490]
I0513 18:40:32.538370  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-19: (1.014698ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54492]
I0513 18:40:32.538679  107570 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 18:40:32.538885  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-20
I0513 18:40:32.538905  107570 scheduler.go:452] Attempting to schedule pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-20
I0513 18:40:32.539003  107570 factory.go:649] Unable to schedule preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-20: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 18:40:32.539051  107570 factory.go:720] Updating pod condition for preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-20 to (PodScheduled==False, Reason=Unschedulable)
I0513 18:40:32.540407  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-20: (1.09477ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54484]
I0513 18:40:32.541221  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events: (1.623246ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54494]
I0513 18:40:32.541777  107570 wrap.go:47] PUT /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-20/status: (2.508496ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54492]
I0513 18:40:32.543236  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-20: (999.679µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54494]
I0513 18:40:32.543540  107570 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 18:40:32.543755  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-21
I0513 18:40:32.543772  107570 scheduler.go:452] Attempting to schedule pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-21
I0513 18:40:32.543920  107570 factory.go:649] Unable to schedule preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-21: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 18:40:32.543971  107570 factory.go:720] Updating pod condition for preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-21 to (PodScheduled==False, Reason=Unschedulable)
I0513 18:40:32.545415  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-21: (1.155115ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54484]
I0513 18:40:32.545905  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events: (1.42544ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54496]
I0513 18:40:32.545977  107570 wrap.go:47] PUT /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-21/status: (1.784439ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54494]
I0513 18:40:32.547448  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-21: (949.033µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54494]
I0513 18:40:32.547753  107570 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 18:40:32.547993  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-22
I0513 18:40:32.548011  107570 scheduler.go:452] Attempting to schedule pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-22
I0513 18:40:32.548104  107570 factory.go:649] Unable to schedule preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-22: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 18:40:32.548151  107570 factory.go:720] Updating pod condition for preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-22 to (PodScheduled==False, Reason=Unschedulable)
I0513 18:40:32.549403  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-22: (998.657µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54484]
I0513 18:40:32.550175  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events: (1.568015ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54498]
I0513 18:40:32.550983  107570 wrap.go:47] PUT /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-22/status: (2.558832ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54496]
I0513 18:40:32.553232  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-22: (1.70653ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54498]
I0513 18:40:32.553471  107570 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 18:40:32.553684  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-23
I0513 18:40:32.553702  107570 scheduler.go:452] Attempting to schedule pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-23
I0513 18:40:32.553788  107570 factory.go:649] Unable to schedule preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-23: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 18:40:32.553879  107570 factory.go:720] Updating pod condition for preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-23 to (PodScheduled==False, Reason=Unschedulable)
I0513 18:40:32.555408  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-23: (1.10243ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54484]
I0513 18:40:32.555942  107570 wrap.go:47] PUT /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-23/status: (1.62028ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54498]
I0513 18:40:32.557106  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events: (2.696461ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54500]
I0513 18:40:32.558087  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-23: (1.329609ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54498]
I0513 18:40:32.558343  107570 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 18:40:32.558624  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-24
I0513 18:40:32.558691  107570 scheduler.go:452] Attempting to schedule pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-24
I0513 18:40:32.558792  107570 factory.go:649] Unable to schedule preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-24: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 18:40:32.558847  107570 factory.go:720] Updating pod condition for preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-24 to (PodScheduled==False, Reason=Unschedulable)
I0513 18:40:32.560446  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-24: (1.001708ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54484]
I0513 18:40:32.560900  107570 wrap.go:47] PUT /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-24/status: (1.807648ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54500]
I0513 18:40:32.560917  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events: (1.341671ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54502]
I0513 18:40:32.562289  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-24: (950.578µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54500]
I0513 18:40:32.562545  107570 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 18:40:32.562760  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-25
I0513 18:40:32.562780  107570 scheduler.go:452] Attempting to schedule pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-25
I0513 18:40:32.562906  107570 factory.go:649] Unable to schedule preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-25: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 18:40:32.562947  107570 factory.go:720] Updating pod condition for preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-25 to (PodScheduled==False, Reason=Unschedulable)
I0513 18:40:32.564347  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-25: (1.068078ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54500]
I0513 18:40:32.564922  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events: (1.549899ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54504]
I0513 18:40:32.564925  107570 wrap.go:47] PUT /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-25/status: (1.604355ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54484]
I0513 18:40:32.566318  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-25: (933.79µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54504]
I0513 18:40:32.566591  107570 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 18:40:32.566767  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-26
I0513 18:40:32.566782  107570 scheduler.go:452] Attempting to schedule pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-26
I0513 18:40:32.566913  107570 factory.go:649] Unable to schedule preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-26: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 18:40:32.566961  107570 factory.go:720] Updating pod condition for preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-26 to (PodScheduled==False, Reason=Unschedulable)
I0513 18:40:32.568786  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events: (1.239218ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54506]
I0513 18:40:32.569215  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-26: (1.739028ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54500]
E0513 18:40:32.569495  107570 factory.go:686] pod is already present in the activeQ
I0513 18:40:32.569697  107570 wrap.go:47] PUT /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-26/status: (2.539707ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54504]
I0513 18:40:32.571217  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-26: (1.068401ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54500]
I0513 18:40:32.571437  107570 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 18:40:32.571644  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-27
I0513 18:40:32.571671  107570 scheduler.go:452] Attempting to schedule pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-27
I0513 18:40:32.571783  107570 factory.go:649] Unable to schedule preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-27: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 18:40:32.571873  107570 factory.go:720] Updating pod condition for preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-27 to (PodScheduled==False, Reason=Unschedulable)
I0513 18:40:32.573048  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-27: (991.264µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54500]
I0513 18:40:32.573720  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events: (1.355154ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54508]
I0513 18:40:32.573878  107570 wrap.go:47] PUT /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-27/status: (1.810161ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54506]
I0513 18:40:32.575449  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-27: (994.457µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54508]
I0513 18:40:32.575736  107570 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 18:40:32.575918  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-8
I0513 18:40:32.575934  107570 scheduler.go:452] Attempting to schedule pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-8
I0513 18:40:32.576035  107570 factory.go:649] Unable to schedule preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-8: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 18:40:32.576082  107570 factory.go:720] Updating pod condition for preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-8 to (PodScheduled==False, Reason=Unschedulable)
I0513 18:40:32.577285  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-8: (1.008036ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54508]
I0513 18:40:32.577511  107570 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 18:40:32.577661  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-28
I0513 18:40:32.577683  107570 scheduler.go:452] Attempting to schedule pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-28
I0513 18:40:32.577767  107570 factory.go:649] Unable to schedule preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-28: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 18:40:32.577811  107570 factory.go:720] Updating pod condition for preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-28 to (PodScheduled==False, Reason=Unschedulable)
I0513 18:40:32.578916  107570 wrap.go:47] PATCH /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events/ppod-8.159e52233fc5e6cb: (1.932942ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54510]
I0513 18:40:32.579194  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-8: (2.865139ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54500]
I0513 18:40:32.579317  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-28: (1.019892ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54512]
I0513 18:40:32.579906  107570 wrap.go:47] PUT /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-28/status: (1.858312ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54508]
I0513 18:40:32.580907  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events: (1.414858ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54510]
I0513 18:40:32.581534  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-28: (1.025321ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54512]
I0513 18:40:32.582621  107570 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 18:40:32.582858  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-29
I0513 18:40:32.582881  107570 scheduler.go:452] Attempting to schedule pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-29
I0513 18:40:32.582993  107570 factory.go:649] Unable to schedule preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-29: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 18:40:32.583034  107570 factory.go:720] Updating pod condition for preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-29 to (PodScheduled==False, Reason=Unschedulable)
I0513 18:40:32.584467  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-29: (1.181048ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54500]
I0513 18:40:32.585067  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events: (1.53623ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54514]
I0513 18:40:32.585095  107570 wrap.go:47] PUT /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-29/status: (1.851443ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54510]
I0513 18:40:32.586481  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-29: (1.024253ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54514]
I0513 18:40:32.586755  107570 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 18:40:32.586976  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-30
I0513 18:40:32.586995  107570 scheduler.go:452] Attempting to schedule pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-30
I0513 18:40:32.587119  107570 factory.go:649] Unable to schedule preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-30: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 18:40:32.587165  107570 factory.go:720] Updating pod condition for preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-30 to (PodScheduled==False, Reason=Unschedulable)
I0513 18:40:32.588394  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-30: (1.01687ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54500]
I0513 18:40:32.589163  107570 wrap.go:47] PUT /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-30/status: (1.791441ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54514]
I0513 18:40:32.590773  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-30: (1.076026ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54514]
I0513 18:40:32.591005  107570 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
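The repeated "no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory" messages above indicate that each filler pod (ppod-N) requests more CPU and memory than remains free on the single test node, so every attempt lands back in the unschedulable queue while node1 is still flagged as a preemption candidate. A minimal sketch of such a filler pod is below; the package name, image, and resource quantities are illustrative assumptions, not values taken from this test.

package example

import (
	v1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// fillerPod builds a pod whose requests exceed the free capacity of the
// single test node, so scheduling fails with
// "0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory."
// Name, image, and quantities are assumptions for illustration only.
func fillerPod(ns, name string) *v1.Pod {
	return &v1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: name, Namespace: ns},
		Spec: v1.PodSpec{
			Containers: []v1.Container{{
				Name:  "filler",
				Image: "k8s.gcr.io/pause:3.1",
				Resources: v1.ResourceRequirements{
					Requests: v1.ResourceList{
						v1.ResourceCPU:    resource.MustParse("4"),
						v1.ResourceMemory: resource.MustParse("4Gi"),
					},
				},
			}},
		},
	}
}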
I0513 18:40:32.593392  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-9
I0513 18:40:32.593531  107570 scheduler.go:452] Attempting to schedule pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-9
I0513 18:40:32.593715  107570 factory.go:649] Unable to schedule preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-9: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 18:40:32.593795  107570 factory.go:720] Updating pod condition for preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-9 to (PodScheduled==False, Reason=Unschedulable)
I0513 18:40:32.594039  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events: (6.261431ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54516]
I0513 18:40:32.595502  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-9: (1.357747ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54514]
I0513 18:40:32.595748  107570 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 18:40:32.595941  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-31
I0513 18:40:32.595959  107570 scheduler.go:452] Attempting to schedule pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-31
I0513 18:40:32.596048  107570 factory.go:649] Unable to schedule preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-31: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 18:40:32.596089  107570 factory.go:720] Updating pod condition for preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-31 to (PodScheduled==False, Reason=Unschedulable)
I0513 18:40:32.596097  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-9: (1.958355ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54500]
I0513 18:40:32.596546  107570 wrap.go:47] PATCH /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events/ppod-9.159e522340242c29: (1.962506ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54516]
I0513 18:40:32.598049  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-31: (1.447542ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54514]
I0513 18:40:32.599420  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events: (1.653495ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54516]
I0513 18:40:32.600143  107570 wrap.go:47] PUT /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-31/status: (2.798155ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54500]
I0513 18:40:32.601669  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-31: (1.081218ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54514]
I0513 18:40:32.601993  107570 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 18:40:32.602178  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-32
I0513 18:40:32.602197  107570 scheduler.go:452] Attempting to schedule pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-32
I0513 18:40:32.602271  107570 factory.go:649] Unable to schedule preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-32: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 18:40:32.602319  107570 factory.go:720] Updating pod condition for preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-32 to (PodScheduled==False, Reason=Unschedulable)
I0513 18:40:32.604305  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-32: (1.073242ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54518]
I0513 18:40:32.604851  107570 wrap.go:47] PUT /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-32/status: (2.303843ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54514]
I0513 18:40:32.609365  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events: (6.331561ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54520]
I0513 18:40:32.610244  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-32: (5.008968ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54514]
I0513 18:40:32.610546  107570 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 18:40:32.610796  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-33
I0513 18:40:32.610831  107570 scheduler.go:452] Attempting to schedule pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-33
I0513 18:40:32.610946  107570 factory.go:649] Unable to schedule preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-33: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 18:40:32.610992  107570 factory.go:720] Updating pod condition for preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-33 to (PodScheduled==False, Reason=Unschedulable)
I0513 18:40:32.613725  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events: (2.017433ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54522]
I0513 18:40:32.614058  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-33: (2.366881ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54518]
I0513 18:40:32.614432  107570 wrap.go:47] PUT /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-33/status: (3.174341ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54520]
E0513 18:40:32.614727  107570 factory.go:686] pod is already present in the activeQ
I0513 18:40:32.616025  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-33: (1.037959ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54518]
I0513 18:40:32.616284  107570 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 18:40:32.616466  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-34
I0513 18:40:32.616483  107570 scheduler.go:452] Attempting to schedule pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-34
I0513 18:40:32.616620  107570 factory.go:649] Unable to schedule preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-34: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 18:40:32.616710  107570 factory.go:720] Updating pod condition for preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-34 to (PodScheduled==False, Reason=Unschedulable)
I0513 18:40:32.618193  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-34: (975.706µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54522]
I0513 18:40:32.618878  107570 wrap.go:47] PUT /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-34/status: (1.619305ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54518]
I0513 18:40:32.619245  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events: (1.902511ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54524]
I0513 18:40:32.620503  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-34: (1.161085ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54518]
I0513 18:40:32.620783  107570 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 18:40:32.620987  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-35
I0513 18:40:32.621004  107570 scheduler.go:452] Attempting to schedule pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-35
I0513 18:40:32.621116  107570 factory.go:649] Unable to schedule preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-35: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 18:40:32.621160  107570 factory.go:720] Updating pod condition for preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-35 to (PodScheduled==False, Reason=Unschedulable)
I0513 18:40:32.622367  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-35: (957.41µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54522]
I0513 18:40:32.623519  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events: (1.658488ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54526]
I0513 18:40:32.623975  107570 wrap.go:47] PUT /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-35/status: (2.593984ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54524]
I0513 18:40:32.625497  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-35: (1.087867ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54526]
I0513 18:40:32.625768  107570 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 18:40:32.625948  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-36
I0513 18:40:32.625965  107570 scheduler.go:452] Attempting to schedule pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-36
I0513 18:40:32.626072  107570 factory.go:649] Unable to schedule preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-36: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 18:40:32.626119  107570 factory.go:720] Updating pod condition for preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-36 to (PodScheduled==False, Reason=Unschedulable)
I0513 18:40:32.628387  107570 wrap.go:47] PUT /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-36/status: (1.967214ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54526]
I0513 18:40:32.628846  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/preemptor-pod: (2.490875ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54522]
I0513 18:40:32.629009  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events: (2.119669ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54528]
I0513 18:40:32.630470  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-36: (1.427148ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54530]
I0513 18:40:32.630509  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-36: (1.422059ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54526]
I0513 18:40:32.630788  107570 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 18:40:32.631316  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-37
I0513 18:40:32.631336  107570 scheduler.go:452] Attempting to schedule pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-37
I0513 18:40:32.631548  107570 factory.go:649] Unable to schedule preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-37: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 18:40:32.631683  107570 factory.go:720] Updating pod condition for preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-37 to (PodScheduled==False, Reason=Unschedulable)
I0513 18:40:32.633702  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-37: (1.546719ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54532]
I0513 18:40:32.635172  107570 wrap.go:47] PUT /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-37/status: (3.145584ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54528]
I0513 18:40:32.635349  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events: (2.877371ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54534]
I0513 18:40:32.636960  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-37: (1.287633ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54528]
I0513 18:40:32.637193  107570 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 18:40:32.637384  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-38
I0513 18:40:32.637406  107570 scheduler.go:452] Attempting to schedule pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-38
I0513 18:40:32.637536  107570 factory.go:649] Unable to schedule preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-38: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 18:40:32.637581  107570 factory.go:720] Updating pod condition for preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-38 to (PodScheduled==False, Reason=Unschedulable)
I0513 18:40:32.639724  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-38: (1.708349ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54532]
I0513 18:40:32.640091  107570 wrap.go:47] PUT /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-38/status: (2.269861ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54528]
I0513 18:40:32.640637  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events: (2.349063ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54536]
I0513 18:40:32.668330  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-38: (27.720368ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54528]
I0513 18:40:32.668913  107570 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 18:40:32.669109  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-39
I0513 18:40:32.669131  107570 scheduler.go:452] Attempting to schedule pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-39
I0513 18:40:32.669246  107570 factory.go:649] Unable to schedule preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-39: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 18:40:32.669909  107570 factory.go:720] Updating pod condition for preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-39 to (PodScheduled==False, Reason=Unschedulable)
I0513 18:40:32.684169  107570 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0513 18:40:32.684667  107570 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0513 18:40:32.686593  107570 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0513 18:40:32.689508  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events: (16.654946ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54532]
I0513 18:40:32.689884  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-39: (15.887501ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54538]
I0513 18:40:32.690361  107570 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0513 18:40:32.690475  107570 wrap.go:47] PUT /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-39/status: (19.357198ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54536]
I0513 18:40:32.696704  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-39: (5.688281ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54538]
I0513 18:40:32.697071  107570 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 18:40:32.698163  107570 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
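The "k8s.io/client-go/informers/factory.go:133: forcing resync" lines come from the shared informer machinery; the resync interval is the period passed when the informer factory is constructed. A rough sketch of where that period is set follows; the 30-second value is an assumption and not read from this test.

package example

import (
	"time"

	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
)

// newPodInformer shows, approximately, where the resync interval behind the
// "forcing resync" messages comes from: the duration given to the shared
// informer factory. The 30s period here is an assumption.
func newPodInformer(client kubernetes.Interface) informers.SharedInformerFactory {
	factory := informers.NewSharedInformerFactory(client, 30*time.Second)
	factory.Core().V1().Pods().Informer() // register the pod informer
	return factory
}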
I0513 18:40:32.698710  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-40
I0513 18:40:32.698732  107570 scheduler.go:452] Attempting to schedule pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-40
I0513 18:40:32.698882  107570 factory.go:649] Unable to schedule preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-40: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 18:40:32.698933  107570 factory.go:720] Updating pod condition for preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-40 to (PodScheduled==False, Reason=Unschedulable)
I0513 18:40:32.702602  107570 wrap.go:47] PUT /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-40/status: (3.381636ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54538]
I0513 18:40:32.703199  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-40: (3.053116ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54532]
I0513 18:40:32.703599  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events: (3.852775ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54540]
I0513 18:40:32.705485  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-40: (1.205798ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54532]
I0513 18:40:32.705808  107570 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 18:40:32.706072  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-41
I0513 18:40:32.706120  107570 scheduler.go:452] Attempting to schedule pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-41
I0513 18:40:32.706282  107570 factory.go:649] Unable to schedule preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-41: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 18:40:32.706369  107570 factory.go:720] Updating pod condition for preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-41 to (PodScheduled==False, Reason=Unschedulable)
I0513 18:40:32.708725  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events: (1.689179ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54542]
I0513 18:40:32.709325  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-41: (2.345788ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54540]
I0513 18:40:32.710683  107570 wrap.go:47] PUT /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-41/status: (3.046882ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54532]
I0513 18:40:32.713762  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-41: (1.736595ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54542]
I0513 18:40:32.713997  107570 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 18:40:32.714148  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-42
I0513 18:40:32.714165  107570 scheduler.go:452] Attempting to schedule pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-42
I0513 18:40:32.714250  107570 factory.go:649] Unable to schedule preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-42: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 18:40:32.714289  107570 factory.go:720] Updating pod condition for preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-42 to (PodScheduled==False, Reason=Unschedulable)
I0513 18:40:32.719713  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events: (5.032696ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54540]
I0513 18:40:32.719741  107570 wrap.go:47] PUT /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-42/status: (5.246162ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54542]
I0513 18:40:32.723059  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-42: (3.831013ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54546]
E0513 18:40:32.723384  107570 factory.go:686] pod is already present in the activeQ
I0513 18:40:32.723715  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-42: (3.401898ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54542]
I0513 18:40:32.724063  107570 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 18:40:32.725770  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-43
I0513 18:40:32.725967  107570 scheduler.go:452] Attempting to schedule pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-43
I0513 18:40:32.726351  107570 factory.go:649] Unable to schedule preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-43: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 18:40:32.726558  107570 factory.go:720] Updating pod condition for preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-43 to (PodScheduled==False, Reason=Unschedulable)
I0513 18:40:32.733298  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/preemptor-pod: (2.337268ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54550]
I0513 18:40:32.733751  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-43: (3.733841ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54540]
I0513 18:40:32.735337  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events: (3.706462ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54548]
I0513 18:40:32.738516  107570 wrap.go:47] PUT /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-43/status: (11.268928ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54546]
I0513 18:40:32.746555  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-43: (6.991388ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54540]
I0513 18:40:32.746936  107570 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 18:40:32.747117  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-44
I0513 18:40:32.747134  107570 scheduler.go:452] Attempting to schedule pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-44
I0513 18:40:32.747231  107570 factory.go:649] Unable to schedule preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-44: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 18:40:32.747276  107570 factory.go:720] Updating pod condition for preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-44 to (PodScheduled==False, Reason=Unschedulable)
I0513 18:40:32.750798  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-44: (2.827688ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54550]
I0513 18:40:32.751528  107570 wrap.go:47] PUT /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-44/status: (3.119849ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54540]
I0513 18:40:32.753095  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events: (3.621312ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54552]
I0513 18:40:32.758769  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-44: (5.356556ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54540]
I0513 18:40:32.759212  107570 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 18:40:32.761117  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-45
I0513 18:40:32.761154  107570 scheduler.go:452] Attempting to schedule pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-45
I0513 18:40:32.761297  107570 factory.go:649] Unable to schedule preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-45: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 18:40:32.761356  107570 factory.go:720] Updating pod condition for preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-45 to (PodScheduled==False, Reason=Unschedulable)
I0513 18:40:32.763862  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-45: (1.62766ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54550]
I0513 18:40:32.766890  107570 wrap.go:47] PUT /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-45/status: (4.190332ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54552]
I0513 18:40:32.767861  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events: (5.595497ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54554]
I0513 18:40:32.769572  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-45: (1.502916ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54552]
I0513 18:40:32.769931  107570 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 18:40:32.770097  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-46
I0513 18:40:32.770109  107570 scheduler.go:452] Attempting to schedule pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-46
I0513 18:40:32.770188  107570 factory.go:649] Unable to schedule preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-46: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 18:40:32.770224  107570 factory.go:720] Updating pod condition for preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-46 to (PodScheduled==False, Reason=Unschedulable)
I0513 18:40:32.774164  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-46: (3.257279ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54550]
I0513 18:40:32.775103  107570 wrap.go:47] PUT /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-46/status: (3.800607ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54554]
I0513 18:40:32.779088  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-46: (2.582424ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54554]
I0513 18:40:32.779277  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events: (7.035066ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54556]
I0513 18:40:32.779566  107570 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 18:40:32.780098  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-47
I0513 18:40:32.780144  107570 scheduler.go:452] Attempting to schedule pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-47
I0513 18:40:32.780334  107570 factory.go:649] Unable to schedule preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-47: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 18:40:32.780408  107570 factory.go:720] Updating pod condition for preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-47 to (PodScheduled==False, Reason=Unschedulable)
I0513 18:40:32.782844  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-47: (1.821968ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54550]
I0513 18:40:32.784984  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events: (3.330342ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54558]
I0513 18:40:32.785802  107570 wrap.go:47] PUT /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-47/status: (4.668397ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54554]
I0513 18:40:32.787526  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-47: (1.260461ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54558]
I0513 18:40:32.787863  107570 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 18:40:32.788089  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-48
I0513 18:40:32.788131  107570 scheduler.go:452] Attempting to schedule pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-48
I0513 18:40:32.788258  107570 factory.go:649] Unable to schedule preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-48: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 18:40:32.788328  107570 factory.go:720] Updating pod condition for preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-48 to (PodScheduled==False, Reason=Unschedulable)
I0513 18:40:32.794056  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events: (4.900979ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54560]
I0513 18:40:32.794385  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-48: (5.39061ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54550]
E0513 18:40:32.794674  107570 factory.go:686] pod is already present in the activeQ
I0513 18:40:32.794796  107570 wrap.go:47] PUT /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-48/status: (5.754242ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54558]
I0513 18:40:32.806963  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-48: (11.556097ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54550]
I0513 18:40:32.807361  107570 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 18:40:32.807636  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-49
I0513 18:40:32.807674  107570 scheduler.go:452] Attempting to schedule pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-49
I0513 18:40:32.807793  107570 factory.go:649] Unable to schedule preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-49: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 18:40:32.807857  107570 factory.go:720] Updating pod condition for preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-49 to (PodScheduled==False, Reason=Unschedulable)
I0513 18:40:32.820998  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-49: (3.56955ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54560]
I0513 18:40:32.821864  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events: (2.486812ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54566]
I0513 18:40:32.821864  107570 wrap.go:47] PUT /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-49/status: (3.752735ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54550]
I0513 18:40:32.823662  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-49: (1.319316ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54566]
I0513 18:40:32.824032  107570 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 18:40:32.824217  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-26
I0513 18:40:32.824237  107570 scheduler.go:452] Attempting to schedule pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-26
I0513 18:40:32.824330  107570 factory.go:649] Unable to schedule preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-26: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 18:40:32.824386  107570 factory.go:720] Updating pod condition for preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-26 to (PodScheduled==False, Reason=Unschedulable)
I0513 18:40:32.826089  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-26: (1.347115ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54566]
I0513 18:40:32.826166  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-26: (1.417119ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54560]
I0513 18:40:32.826332  107570 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 18:40:32.826504  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-33
I0513 18:40:32.826520  107570 scheduler.go:452] Attempting to schedule pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-33
I0513 18:40:32.826607  107570 factory.go:649] Unable to schedule preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-33: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 18:40:32.826693  107570 factory.go:720] Updating pod condition for preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-33 to (PodScheduled==False, Reason=Unschedulable)
I0513 18:40:32.827719  107570 wrap.go:47] PATCH /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events/ppod-26.159e522345e83ba9: (2.625836ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54570]
I0513 18:40:32.829405  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-33: (1.773253ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54566]
I0513 18:40:32.829752  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-33: (2.834178ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54560]
I0513 18:40:32.830026  107570 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 18:40:32.830167  107570 wrap.go:47] PATCH /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events/ppod-33.159e5223488811f3: (1.882236ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54570]
I0513 18:40:32.830197  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-39
I0513 18:40:32.830308  107570 scheduler.go:452] Attempting to schedule pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-39
I0513 18:40:32.830412  107570 factory.go:649] Unable to schedule preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-39: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 18:40:32.830463  107570 factory.go:720] Updating pod condition for preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-39 to (PodScheduled==False, Reason=Unschedulable)
I0513 18:40:32.831627  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/preemptor-pod: (1.850122ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54566]
I0513 18:40:32.834940  107570 wrap.go:47] PATCH /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events/ppod-39.159e52234c0adaee: (3.521226ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54560]
I0513 18:40:32.835015  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-39: (3.988278ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54570]
I0513 18:40:32.835517  107570 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 18:40:32.835585  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-39: (3.048765ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54566]
I0513 18:40:32.835709  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-42
I0513 18:40:32.835753  107570 scheduler.go:452] Attempting to schedule pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-42
I0513 18:40:32.835879  107570 factory.go:649] Unable to schedule preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-42: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 18:40:32.836042  107570 factory.go:720] Updating pod condition for preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-42 to (PodScheduled==False, Reason=Unschedulable)
I0513 18:40:32.837659  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-42: (1.416388ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54560]
I0513 18:40:32.838250  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-42: (2.017929ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54570]
I0513 18:40:32.838494  107570 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 18:40:32.838529  107570 wrap.go:47] PATCH /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events/ppod-42.159e52234eb047c1: (1.842976ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54572]
I0513 18:40:32.838694  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-48
I0513 18:40:32.838713  107570 scheduler.go:452] Attempting to schedule pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-48
I0513 18:40:32.838777  107570 factory.go:649] Unable to schedule preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-48: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 18:40:32.838857  107570 factory.go:720] Updating pod condition for preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-48 to (PodScheduled==False, Reason=Unschedulable)
I0513 18:40:32.841059  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-48: (1.649051ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54560]
I0513 18:40:32.841156  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-48: (1.746486ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54570]
I0513 18:40:32.841338  107570 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 18:40:32.841880  107570 wrap.go:47] PATCH /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events/ppod-48.159e52235319f4f1: (2.0097ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54572]
I0513 18:40:32.931881  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/preemptor-pod: (2.013958ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54570]
I0513 18:40:33.031622  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/preemptor-pod: (1.744615ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54570]
I0513 18:40:33.132149  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/preemptor-pod: (2.242142ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54570]
I0513 18:40:33.232079  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/preemptor-pod: (2.174135ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54570]
I0513 18:40:33.331703  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/preemptor-pod: (1.801521ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54570]
I0513 18:40:33.431906  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/preemptor-pod: (1.948619ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54570]
I0513 18:40:33.531877  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/preemptor-pod: (1.961389ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54570]
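The GET /pods/preemptor-pod requests arriving roughly every 100ms above look like the test harness polling until the preemptor pod is assigned a node. A minimal sketch of such a poll, using the pre-context client-go API of this era, is below; the interval, timeout, and helper name are assumptions and the real test helper may differ.

package example

import (
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForPodScheduled polls the API server until the pod has been assigned
// a node, mirroring the repeated GET .../pods/preemptor-pod requests above.
// Interval and timeout are assumptions for illustration.
func waitForPodScheduled(client kubernetes.Interface, ns, name string) error {
	return wait.Poll(100*time.Millisecond, 30*time.Second, func() (bool, error) {
		pod, err := client.CoreV1().Pods(ns).Get(name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		return pod.Spec.NodeName != "", nil
	})
}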
I0513 18:40:33.577310  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/preemptor-pod
I0513 18:40:33.577349  107570 scheduler.go:452] Attempting to schedule pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/preemptor-pod
I0513 18:40:33.577518  107570 scheduler_binder.go:256] AssumePodVolumes for pod "preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/preemptor-pod", node "node1"
I0513 18:40:33.577538  107570 scheduler_binder.go:266] AssumePodVolumes for pod "preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/preemptor-pod", node "node1": all PVCs bound and nothing to do
I0513 18:40:33.577587  107570 factory.go:711] Attempting to bind preemptor-pod to node1
I0513 18:40:33.577638  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-1
I0513 18:40:33.577667  107570 scheduler.go:452] Attempting to schedule pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-1
I0513 18:40:33.577808  107570 factory.go:649] Unable to schedule preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-1: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 18:40:33.577975  107570 factory.go:720] Updating pod condition for preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-1 to (PodScheduled==False, Reason=Unschedulable)
I0513 18:40:33.580703  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-1: (2.324246ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54616]
I0513 18:40:33.581006  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-1: (2.629463ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54560]
I0513 18:40:33.581208  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/preemptor-pod/binding: (3.232639ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54570]
I0513 18:40:33.581309  107570 wrap.go:47] PATCH /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events/ppod-1.159e52233d4abe60: (1.792578ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54618]
I0513 18:40:33.581440  107570 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 18:40:33.581711  107570 scheduler.go:570] pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/preemptor-pod is bound successfully on node node1, 1 nodes evaluated, 1 nodes were found feasible
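The "POST .../pods/preemptor-pod/binding" request followed by "is bound successfully on node node1" is the scheduler writing a Binding object that assigns the pod to the node. A hedged sketch of what that call looks like through client-go (of this vintage) is below; the function and variable names are illustrative, not the scheduler's own code.

package example

import (
	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// bindPod issues the same kind of request as the
// "POST .../pods/preemptor-pod/binding" line above: it writes a Binding
// object that assigns the pod to a node. Illustrative sketch only.
func bindPod(client kubernetes.Interface, ns, podName, nodeName string) error {
	binding := &v1.Binding{
		ObjectMeta: metav1.ObjectMeta{Name: podName, Namespace: ns},
		Target:     v1.ObjectReference{Kind: "Node", Name: nodeName},
	}
	return client.CoreV1().Pods(ns).Bind(binding)
}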
I0513 18:40:33.581890  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-2
I0513 18:40:33.581939  107570 scheduler.go:452] Attempting to schedule pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-2
I0513 18:40:33.582040  107570 factory.go:649] Unable to schedule preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-2: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 18:40:33.582078  107570 factory.go:720] Updating pod condition for preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-2 to (PodScheduled==False, Reason=Unschedulable)
I0513 18:40:33.584761  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-2: (2.030023ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54620]
I0513 18:40:33.585184  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events: (2.871829ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54560]
I0513 18:40:33.585208  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-2: (2.80329ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54616]
I0513 18:40:33.585460  107570 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 18:40:33.585584  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-3
I0513 18:40:33.585594  107570 scheduler.go:452] Attempting to schedule pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-3
I0513 18:40:33.585687  107570 factory.go:649] Unable to schedule preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-3: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 18:40:33.585729  107570 factory.go:720] Updating pod condition for preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-3 to (PodScheduled==False, Reason=Unschedulable)
I0513 18:40:33.588737  107570 wrap.go:47] PATCH /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events/ppod-2.159e52233da022e5: (2.273359ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54560]
I0513 18:40:33.589093  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-3: (2.588023ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54622]
I0513 18:40:33.589317  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-3: (3.010361ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54616]
I0513 18:40:33.591157  107570 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 18:40:33.591964  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-4
I0513 18:40:33.591986  107570 scheduler.go:452] Attempting to schedule pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-4
I0513 18:40:33.592085  107570 factory.go:649] Unable to schedule preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-4: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 18:40:33.592122  107570 factory.go:720] Updating pod condition for preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-4 to (PodScheduled==False, Reason=Unschedulable)
I0513 18:40:33.593478  107570 wrap.go:47] PATCH /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events/ppod-3.159e52233e01a443: (3.251994ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54560]
I0513 18:40:33.594970  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-4: (2.005092ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54622]
I0513 18:40:33.595077  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-4: (1.428622ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54620]
I0513 18:40:33.595206  107570 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 18:40:33.595366  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-5
I0513 18:40:33.595415  107570 scheduler.go:452] Attempting to schedule pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-5
I0513 18:40:33.595505  107570 factory.go:649] Unable to schedule preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-5: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 18:40:33.595542  107570 factory.go:720] Updating pod condition for preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-5 to (PodScheduled==False, Reason=Unschedulable)
I0513 18:40:33.596483  107570 wrap.go:47] PATCH /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events/ppod-4.159e52233e569449: (1.848708ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54560]
I0513 18:40:33.598220  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-5: (2.404929ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54620]
I0513 18:40:33.598220  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-5: (2.379523ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54622]
I0513 18:40:33.598606  107570 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 18:40:33.598783  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-6
I0513 18:40:33.598805  107570 scheduler.go:452] Attempting to schedule pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-6
I0513 18:40:33.598912  107570 factory.go:649] Unable to schedule preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-6: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 18:40:33.598959  107570 factory.go:720] Updating pod condition for preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-6 to (PodScheduled==False, Reason=Unschedulable)
I0513 18:40:33.599598  107570 wrap.go:47] PATCH /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events/ppod-5.159e52233e9b61b2: (2.525694ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54560]
I0513 18:40:33.601576  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-6: (2.401855ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54620]
I0513 18:40:33.603083  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-6: (1.278408ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54620]
I0513 18:40:33.603198  107570 wrap.go:47] PATCH /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events/ppod-6.159e52233ee5e4a9: (1.861666ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54560]
I0513 18:40:33.603303  107570 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 18:40:33.603459  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-0
I0513 18:40:33.603482  107570 scheduler.go:452] Attempting to schedule pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-0
I0513 18:40:33.603577  107570 factory.go:649] Unable to schedule preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-0: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 18:40:33.603611  107570 factory.go:720] Updating pod condition for preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-0 to (PodScheduled==False, Reason=Unschedulable)
I0513 18:40:33.605175  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-0: (931.413µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54622]
I0513 18:40:33.605415  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-0: (1.593285ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54620]
I0513 18:40:33.605610  107570 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 18:40:33.605773  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-7
I0513 18:40:33.605788  107570 scheduler.go:452] Attempting to schedule pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-7
I0513 18:40:33.605891  107570 factory.go:649] Unable to schedule preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-7: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 18:40:33.605928  107570 factory.go:720] Updating pod condition for preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-7 to (PodScheduled==False, Reason=Unschedulable)
I0513 18:40:33.608478  107570 wrap.go:47] PATCH /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events/ppod-0.159e52233cd37bcf: (4.058608ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54648]
I0513 18:40:33.611383  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-7: (2.826902ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54622]
I0513 18:40:33.611723  107570 wrap.go:47] PATCH /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events/ppod-7.159e52233f66725e: (2.7525ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54648]
I0513 18:40:33.611802  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-7: (5.724756ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54620]
I0513 18:40:33.613370  107570 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 18:40:33.613758  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-10
I0513 18:40:33.613803  107570 scheduler.go:452] Attempting to schedule pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-10
I0513 18:40:33.613986  107570 factory.go:649] Unable to schedule preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-10: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 18:40:33.614033  107570 factory.go:720] Updating pod condition for preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-10 to (PodScheduled==False, Reason=Unschedulable)
I0513 18:40:33.618713  107570 wrap.go:47] PATCH /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events/ppod-10.159e5223408b4a76: (3.010013ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54764]
I0513 18:40:33.618728  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-10: (3.905563ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54622]
I0513 18:40:33.619057  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-10: (3.822992ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54620]
I0513 18:40:33.621052  107570 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 18:40:33.621366  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-11
I0513 18:40:33.621391  107570 scheduler.go:452] Attempting to schedule pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-11
I0513 18:40:33.622970  107570 factory.go:649] Unable to schedule preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-11: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 18:40:33.623036  107570 factory.go:720] Updating pod condition for preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-11 to (PodScheduled==False, Reason=Unschedulable)
I0513 18:40:33.625327  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-11: (2.080171ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54764]
I0513 18:40:33.625387  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-11: (1.515695ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54622]
I0513 18:40:33.625712  107570 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 18:40:33.625957  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-12
I0513 18:40:33.626146  107570 scheduler.go:452] Attempting to schedule pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-12
I0513 18:40:33.626299  107570 factory.go:649] Unable to schedule preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-12: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 18:40:33.626465  107570 wrap.go:47] PATCH /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events/ppod-11.159e522340f3e7a5: (2.555046ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54788]
I0513 18:40:33.627088  107570 factory.go:720] Updating pod condition for preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-12 to (PodScheduled==False, Reason=Unschedulable)
I0513 18:40:33.628774  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-12: (1.234602ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54622]
I0513 18:40:33.628893  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-12: (1.493411ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54764]
I0513 18:40:33.629217  107570 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 18:40:33.629470  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-13
I0513 18:40:33.629484  107570 scheduler.go:452] Attempting to schedule pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-13
I0513 18:40:33.629562  107570 factory.go:649] Unable to schedule preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-13: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 18:40:33.629613  107570 factory.go:720] Updating pod condition for preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-13 to (PodScheduled==False, Reason=Unschedulable)
I0513 18:40:33.631485  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-13: (1.155028ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54818]
I0513 18:40:33.631988  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/preemptor-pod: (2.28295ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54622]
I0513 18:40:33.632450  107570 wrap.go:47] PATCH /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events/ppod-12.159e522341b60431: (4.354657ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54798]
I0513 18:40:33.634211  107570 preemption_test.go:583] Check unschedulable pods still exists and were never scheduled...
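The preemption_test.go line above marks the verification phase; the GET requests that follow read each ppod-N back from the apiserver. A hedged sketch of that kind of check, confirming each pod still exists and was never bound to a node; the namespace value, pod count, and helper name are illustrative assumptions, not the test's actual code:

package example

import (
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// checkPodsStillPending fails if any ppod-i disappeared or was scheduled to a node.
func checkPodsStillPending(cs kubernetes.Interface, ns string, numPods int) error {
	for i := 0; i < numPods; i++ {
		name := fmt.Sprintf("ppod-%d", i)
		pod, err := cs.CoreV1().Pods(ns).Get(name, metav1.GetOptions{})
		if err != nil {
			return fmt.Errorf("pod %q should still exist: %v", name, err)
		}
		if pod.Spec.NodeName != "" {
			return fmt.Errorf("pod %q was unexpectedly scheduled to %q", name, pod.Spec.NodeName)
		}
	}
	return nil
}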
I0513 18:40:33.637010  107570 wrap.go:47] PATCH /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events/ppod-13.159e5223425cbcfe: (2.922233ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54798]
I0513 18:40:33.638595  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-13: (8.353946ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54816]
I0513 18:40:33.638904  107570 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 18:40:33.639072  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-14
I0513 18:40:33.639095  107570 scheduler.go:452] Attempting to schedule pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-14
I0513 18:40:33.639185  107570 factory.go:649] Unable to schedule preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-14: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 18:40:33.639230  107570 factory.go:720] Updating pod condition for preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-14 to (PodScheduled==False, Reason=Unschedulable)
I0513 18:40:33.640261  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-0: (5.400518ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54622]
I0513 18:40:33.642187  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-14: (1.300257ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54818]
I0513 18:40:33.642383  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-14: (1.040313ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54764]
I0513 18:40:33.644141  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-1: (2.342819ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54622]
I0513 18:40:33.645507  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-2: (990.962µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54764]
I0513 18:40:33.647802  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-3: (1.865161ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54764]
I0513 18:40:33.648568  107570 wrap.go:47] PATCH /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events/ppod-14.159e5223429f67ac: (6.830823ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54840]
I0513 18:40:33.649121  107570 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 18:40:33.649297  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-15
I0513 18:40:33.649318  107570 scheduler.go:452] Attempting to schedule pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-15
I0513 18:40:33.649433  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-4: (1.241115ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54764]
I0513 18:40:33.649858  107570 factory.go:649] Unable to schedule preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-15: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 18:40:33.649914  107570 factory.go:720] Updating pod condition for preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-15 to (PodScheduled==False, Reason=Unschedulable)
I0513 18:40:33.653140  107570 wrap.go:47] PATCH /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events/ppod-15.159e522342d9fd72: (2.3433ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54874]
I0513 18:40:33.655687  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-5: (5.252445ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54870]
I0513 18:40:33.656354  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-15: (5.924234ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54818]
I0513 18:40:33.656633  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-15: (6.10388ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54840]
I0513 18:40:33.657102  107570 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 18:40:33.657335  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-16
I0513 18:40:33.657362  107570 scheduler.go:452] Attempting to schedule pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-16
I0513 18:40:33.657481  107570 factory.go:649] Unable to schedule preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-16: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 18:40:33.657531  107570 factory.go:720] Updating pod condition for preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-16 to (PodScheduled==False, Reason=Unschedulable)
I0513 18:40:33.659474  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-6: (2.136069ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54870]
I0513 18:40:33.659765  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-16: (1.608704ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54840]
I0513 18:40:33.660016  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-16: (1.926451ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54874]
I0513 18:40:33.661055  107570 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 18:40:33.661206  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-17
I0513 18:40:33.661263  107570 scheduler.go:452] Attempting to schedule pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-17
I0513 18:40:33.661393  107570 factory.go:649] Unable to schedule preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-17: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 18:40:33.661488  107570 factory.go:720] Updating pod condition for preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-17 to (PodScheduled==False, Reason=Unschedulable)
I0513 18:40:33.663472  107570 wrap.go:47] PATCH /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events/ppod-16.159e52234330e302: (3.363495ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54818]
I0513 18:40:33.663478  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-17: (1.316062ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54898]
I0513 18:40:33.663644  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-7: (2.259509ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54870]
I0513 18:40:33.664020  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-17: (2.243319ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54840]
I0513 18:40:33.664296  107570 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 18:40:33.664507  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-18
I0513 18:40:33.664542  107570 scheduler.go:452] Attempting to schedule pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-18
I0513 18:40:33.664623  107570 factory.go:649] Unable to schedule preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-18: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 18:40:33.664675  107570 factory.go:720] Updating pod condition for preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-18 to (PodScheduled==False, Reason=Unschedulable)
I0513 18:40:33.666389  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-18: (1.38729ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54874]
I0513 18:40:33.666595  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-8: (1.110842ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54900]
I0513 18:40:33.666874  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-18: (1.748157ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54870]
I0513 18:40:33.667194  107570 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 18:40:33.667373  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-19
I0513 18:40:33.667431  107570 scheduler.go:452] Attempting to schedule pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-19
I0513 18:40:33.667604  107570 factory.go:649] Unable to schedule preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-19: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 18:40:33.667704  107570 factory.go:720] Updating pod condition for preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-19 to (PodScheduled==False, Reason=Unschedulable)
I0513 18:40:33.670069  107570 wrap.go:47] PATCH /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events/ppod-17.159e5223437c0768: (5.394751ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54898]
I0513 18:40:33.670072  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-9: (2.881923ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54900]
I0513 18:40:33.671450  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-19: (1.448137ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54874]
I0513 18:40:33.671719  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-19: (1.241876ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54900]
I0513 18:40:33.671746  107570 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 18:40:33.672022  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-10: (1.126672ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54904]
I0513 18:40:33.672081  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-20
I0513 18:40:33.672167  107570 scheduler.go:452] Attempting to schedule pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-20
I0513 18:40:33.672286  107570 factory.go:649] Unable to schedule preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-20: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 18:40:33.672354  107570 factory.go:720] Updating pod condition for preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-20 to (PodScheduled==False, Reason=Unschedulable)
I0513 18:40:33.674948  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-20: (1.965257ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54906]
I0513 18:40:33.675063  107570 wrap.go:47] PATCH /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events/ppod-18.159e522343b4133d: (3.002639ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54898]
I0513 18:40:33.675220  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-20: (2.220468ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54902]
I0513 18:40:33.675691  107570 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 18:40:33.675937  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-11: (3.363095ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54900]
I0513 18:40:33.675957  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-21
I0513 18:40:33.675974  107570 scheduler.go:452] Attempting to schedule pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-21
I0513 18:40:33.676072  107570 factory.go:649] Unable to schedule preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-21: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 18:40:33.676113  107570 factory.go:720] Updating pod condition for preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-21 to (PodScheduled==False, Reason=Unschedulable)
I0513 18:40:33.684452  107570 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0513 18:40:33.684854  107570 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0513 18:40:33.686726  107570 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0513 18:40:33.690527  107570 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0513 18:40:33.698851  107570 wrap.go:47] PATCH /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events/ppod-19.159e522343f97a17: (23.166657ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54902]
I0513 18:40:33.699256  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-12: (22.936398ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54898]
I0513 18:40:33.700383  107570 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0513 18:40:33.700405  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-21: (23.838776ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54908]
I0513 18:40:33.700792  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-21: (22.541831ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54906]
I0513 18:40:33.707977  107570 wrap.go:47] PATCH /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events/ppod-20.159e5223443e5767: (8.331507ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54898]
I0513 18:40:33.708497  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-13: (8.763964ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54902]
I0513 18:40:33.708530  107570 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 18:40:33.708729  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-22
I0513 18:40:33.708807  107570 scheduler.go:452] Attempting to schedule pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-22
I0513 18:40:33.708921  107570 factory.go:649] Unable to schedule preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-22: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 18:40:33.708963  107570 factory.go:720] Updating pod condition for preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-22 to (PodScheduled==False, Reason=Unschedulable)
I0513 18:40:33.713294  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-14: (4.051954ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54908]
I0513 18:40:33.713429  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-22: (3.739056ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54906]
I0513 18:40:33.713621  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-22: (3.843379ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54898]
I0513 18:40:33.714492  107570 wrap.go:47] PATCH /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events/ppod-21.159e52234489679e: (3.560171ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54910]
I0513 18:40:33.714579  107570 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 18:40:33.715137  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-23
I0513 18:40:33.715153  107570 scheduler.go:452] Attempting to schedule pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-23
I0513 18:40:33.715240  107570 factory.go:649] Unable to schedule preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-23: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 18:40:33.715284  107570 factory.go:720] Updating pod condition for preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-23 to (PodScheduled==False, Reason=Unschedulable)
I0513 18:40:33.718469  107570 wrap.go:47] PATCH /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events/ppod-22.159e522344c934aa: (2.782302ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54906]
I0513 18:40:33.718894  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-15: (4.026239ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54908]
I0513 18:40:33.719148  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-23: (3.686222ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54898]
I0513 18:40:33.719396  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-23: (3.340755ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54912]
I0513 18:40:33.721025  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-16: (1.079014ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54908]
I0513 18:40:33.722523  107570 cacher.go:739] cacher (*core.Event): 1 objects queued in incoming channel.
I0513 18:40:33.722964  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-17: (1.458195ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54908]
I0513 18:40:33.723320  107570 wrap.go:47] PATCH /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events/ppod-23.159e522345207bc7: (2.544814ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54912]
I0513 18:40:33.725721  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-18: (1.722726ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54912]
I0513 18:40:33.726704  107570 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 18:40:33.727230  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-19: (1.075984ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54912]
I0513 18:40:33.727394  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-24
I0513 18:40:33.727882  107570 scheduler.go:452] Attempting to schedule pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-24
I0513 18:40:33.728075  107570 factory.go:649] Unable to schedule preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-24: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 18:40:33.728223  107570 factory.go:720] Updating pod condition for preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-24 to (PodScheduled==False, Reason=Unschedulable)
I0513 18:40:33.744851  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-24: (5.403635ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54914]
I0513 18:40:33.745259  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-24: (5.256401ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54898]
I0513 18:40:33.745554  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-20: (4.379563ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54918]
I0513 18:40:33.746197  107570 wrap.go:47] PATCH /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events/ppod-24.159e5223456c696b: (4.67538ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54912]
I0513 18:40:33.747280  107570 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 18:40:33.748124  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-25
I0513 18:40:33.748143  107570 scheduler.go:452] Attempting to schedule pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-25
I0513 18:40:33.748256  107570 factory.go:649] Unable to schedule preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-25: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 18:40:33.748301  107570 factory.go:720] Updating pod condition for preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-25 to (PodScheduled==False, Reason=Unschedulable)
I0513 18:40:33.749507  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-21: (1.791732ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54918]
I0513 18:40:33.753605  107570 wrap.go:47] PATCH /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events/ppod-25.159e522345ab03e0: (3.539843ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54920]
I0513 18:40:33.755613  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-25: (6.345833ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54912]
I0513 18:40:33.756422  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-25: (6.801755ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54898]
I0513 18:40:33.756751  107570 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 18:40:33.756994  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-27
I0513 18:40:33.757046  107570 scheduler.go:452] Attempting to schedule pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-27
I0513 18:40:33.757168  107570 factory.go:649] Unable to schedule preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-27: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0513 18:40:33.757236  107570 factory.go:720] Updating pod condition for preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-27 to (PodScheduled==False, Reason=Unschedulable)
I0513 18:40:33.757675  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-22: (7.613347ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54918]
I0513 18:40:33.760284  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-23: (1.644493ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54924]
I0513 18:40:33.760565  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-27: (2.553207ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54920]
I0513 18:40:33.760720  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-27: (2.35244ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54912]
I0513 18:40:33.762330  107570 wrap.go:47] PATCH /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events/ppod-27.159e52234632f9e2: (3.264291ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54918]
I0513 18:40:33.762515  107570 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0513 18:40:33.765681  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-24: (3.334628ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54920]
I0513 18:40:33.769833  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-25: (1.233902ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54912]
I0513 18:40:33.771467  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-26: (1.142783ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54912]
I0513 18:40:33.773969  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-27: (2.005633ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54912]
I0513 18:40:33.775497  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-28: (1.017834ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54912]
I0513 18:40:33.777185  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-29: (1.107468ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54912]
I0513 18:40:33.778660  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-30: (901.212µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54912]
I0513 18:40:33.783126  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-31: (3.863392ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54912]
I0513 18:40:33.785498  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-32: (1.885232ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54912]
I0513 18:40:33.790056  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-33: (1.178003ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54912]
I0513 18:40:33.791602  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-34: (1.08594ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54912]
I0513 18:40:33.793093  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-35: (1.032987ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54912]
I0513 18:40:33.794632  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-36: (1.156642ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54912]
I0513 18:40:33.795936  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-37: (922.593µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54912]
I0513 18:40:33.797268  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-38: (972.808µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54912]
I0513 18:40:33.800811  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-39: (2.641692ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54912]
I0513 18:40:33.803091  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-40: (1.673401ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54912]
I0513 18:40:33.804774  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-41: (1.20134ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54912]
I0513 18:40:33.807357  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-42: (2.124874ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54912]
I0513 18:40:33.808886  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-43: (982.515µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54912]
I0513 18:40:33.810375  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-44: (944.647µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54912]
I0513 18:40:33.814592  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-45: (1.040151ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54912]
I0513 18:40:33.816243  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-46: (1.13205ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54912]
I0513 18:40:33.818861  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-47: (1.137376ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54912]
I0513 18:40:33.820422  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-48: (1.125639ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54912]
I0513 18:40:33.822128  107570 wrap.go:47] GET /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-49: (1.137257ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54912]
I0513 18:40:33.822498  107570 preemption_test.go:598] Cleaning up all pods...
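From this point the test tears its pods down, issuing one DELETE per ppod-N (visible below). A hedged sketch of that pattern using the era-appropriate client-go Delete signature; the helper name and the zero grace period are illustrative assumptions, not taken from the test source:

package example

import (
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// cleanupPods deletes every ppod-i immediately so the next case starts from a clean slate.
func cleanupPods(cs kubernetes.Interface, ns string, numPods int) error {
	grace := int64(0) // request immediate deletion rather than the default grace period
	for i := 0; i < numPods; i++ {
		name := fmt.Sprintf("ppod-%d", i)
		if err := cs.CoreV1().Pods(ns).Delete(name, &metav1.DeleteOptions{GracePeriodSeconds: &grace}); err != nil {
			return fmt.Errorf("failed to delete pod %q: %v", name, err)
		}
	}
	return nil
}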
I0513 18:40:33.832222  107570 wrap.go:47] DELETE /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-0: (8.249476ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54912]
I0513 18:40:33.839193  107570 wrap.go:47] DELETE /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-1: (6.303011ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54912]
I0513 18:40:33.843643  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-1
I0513 18:40:33.843763  107570 scheduler.go:448] Skip schedule deleting pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-1
I0513 18:40:33.844198  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-2
I0513 18:40:33.844273  107570 scheduler.go:448] Skip schedule deleting pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-2
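The scheduler.go:448 lines above and throughout the cleanup show scheduling attempts being skipped for pods that are already being deleted. A hedged sketch of that guard, assuming the check reduces to the pod's deletion timestamp; the function name is illustrative, not the scheduler's actual code:

package example

import v1 "k8s.io/api/core/v1"

// skipPodSchedule mirrors the idea behind "Skip schedule deleting pod": once a pod has a
// DeletionTimestamp set, binding it to a node would be wasted work, so the attempt is skipped.
func skipPodSchedule(pod *v1.Pod) bool {
	return pod != nil && pod.DeletionTimestamp != nil
}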
I0513 18:40:33.846712  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events: (2.293808ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54922]
I0513 18:40:33.850925  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events: (3.6241ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54922]
I0513 18:40:33.855763  107570 wrap.go:47] DELETE /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-2: (16.233692ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54912]
I0513 18:40:33.860080  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-3
I0513 18:40:33.860125  107570 scheduler.go:448] Skip schedule deleting pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-3
I0513 18:40:33.862977  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events: (2.585199ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54922]
I0513 18:40:33.864424  107570 wrap.go:47] DELETE /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-3: (8.161784ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54912]
I0513 18:40:33.869176  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-4
I0513 18:40:33.870142  107570 scheduler.go:448] Skip schedule deleting pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-4
I0513 18:40:33.869752  107570 wrap.go:47] DELETE /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-4: (4.99691ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54912]
I0513 18:40:33.872595  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events: (1.960861ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54922]
I0513 18:40:33.874503  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-5
I0513 18:40:33.874581  107570 scheduler.go:448] Skip schedule deleting pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-5
I0513 18:40:33.876255  107570 wrap.go:47] DELETE /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-5: (5.505121ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54912]
I0513 18:40:33.877729  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events: (2.535895ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54922]
I0513 18:40:33.881469  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-6
I0513 18:40:33.881547  107570 scheduler.go:448] Skip schedule deleting pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-6
I0513 18:40:33.886922  107570 wrap.go:47] DELETE /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-6: (8.706393ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54912]
I0513 18:40:33.888033  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events: (6.225487ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54922]
I0513 18:40:33.892807  107570 wrap.go:47] DELETE /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-7: (5.560902ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54912]
I0513 18:40:33.897691  107570 wrap.go:47] DELETE /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-8: (4.574751ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54912]
I0513 18:40:33.904034  107570 wrap.go:47] DELETE /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-9: (5.894327ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54912]
I0513 18:40:33.910574  107570 wrap.go:47] DELETE /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-10: (5.81916ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54912]
I0513 18:40:33.915146  107570 wrap.go:47] DELETE /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-11: (3.966244ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54912]
I0513 18:40:33.928113  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-12
I0513 18:40:33.928162  107570 scheduler.go:448] Skip schedule deleting pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-12
I0513 18:40:33.931355  107570 wrap.go:47] DELETE /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-12: (15.762431ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54912]
I0513 18:40:33.932377  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events: (2.631646ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54922]
I0513 18:40:33.935333  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-13
I0513 18:40:33.935373  107570 scheduler.go:448] Skip schedule deleting pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-13
I0513 18:40:33.937289  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events: (1.635843ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54922]
I0513 18:40:33.938139  107570 wrap.go:47] DELETE /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-13: (5.877379ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54912]
I0513 18:40:33.943373  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-14
I0513 18:40:33.943483  107570 scheduler.go:448] Skip schedule deleting pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-14
I0513 18:40:33.945636  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events: (1.5128ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54912]
I0513 18:40:33.946407  107570 wrap.go:47] DELETE /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-14: (7.861345ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54922]
I0513 18:40:33.962458  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-15
I0513 18:40:33.962506  107570 scheduler.go:448] Skip schedule deleting pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-15
I0513 18:40:33.964642  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events: (1.719587ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54912]
I0513 18:40:33.965195  107570 wrap.go:47] DELETE /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-15: (18.483115ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54922]
I0513 18:40:33.969406  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-16
I0513 18:40:33.969483  107570 scheduler.go:448] Skip schedule deleting pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-16
I0513 18:40:33.971587  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events: (1.685764ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54912]
I0513 18:40:33.972749  107570 wrap.go:47] DELETE /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-16: (7.211241ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54922]
I0513 18:40:33.975687  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-17
I0513 18:40:33.975839  107570 scheduler.go:448] Skip schedule deleting pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-17
I0513 18:40:33.976733  107570 wrap.go:47] DELETE /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-17: (3.629226ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54922]
I0513 18:40:33.977927  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events: (1.774875ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54912]
I0513 18:40:33.981596  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-18
I0513 18:40:33.981638  107570 scheduler.go:448] Skip schedule deleting pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-18
I0513 18:40:33.982609  107570 wrap.go:47] DELETE /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-18: (3.804908ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54922]
I0513 18:40:33.983366  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events: (1.490969ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54912]
I0513 18:40:33.985559  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-19
I0513 18:40:33.985591  107570 scheduler.go:448] Skip schedule deleting pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-19
I0513 18:40:33.988612  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events: (1.426856ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54912]
I0513 18:40:33.988614  107570 wrap.go:47] DELETE /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-19: (5.366876ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54922]
I0513 18:40:33.991574  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-20
I0513 18:40:33.991787  107570 scheduler.go:448] Skip schedule deleting pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-20
I0513 18:40:33.992899  107570 wrap.go:47] DELETE /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-20: (3.852785ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54912]
I0513 18:40:33.994045  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events: (1.870638ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54922]
I0513 18:40:33.996150  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-21
I0513 18:40:33.996377  107570 scheduler.go:448] Skip schedule deleting pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-21
I0513 18:40:33.997590  107570 wrap.go:47] DELETE /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-21: (3.846971ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54912]
I0513 18:40:33.998200  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events: (1.44868ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54922]
I0513 18:40:34.000361  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-22
I0513 18:40:34.000400  107570 scheduler.go:448] Skip schedule deleting pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-22
I0513 18:40:34.001678  107570 wrap.go:47] DELETE /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-22: (3.624834ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54912]
I0513 18:40:34.002246  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events: (1.588755ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54922]
I0513 18:40:34.004475  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-23
I0513 18:40:34.004511  107570 scheduler.go:448] Skip schedule deleting pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-23
I0513 18:40:34.006407  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events: (1.696701ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54922]
I0513 18:40:34.007333  107570 wrap.go:47] DELETE /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-23: (5.298683ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54912]
I0513 18:40:34.011841  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-24
I0513 18:40:34.011911  107570 scheduler.go:448] Skip schedule deleting pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-24
I0513 18:40:34.013526  107570 wrap.go:47] DELETE /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-24: (4.79809ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54912]
I0513 18:40:34.019269  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-25
I0513 18:40:34.019357  107570 scheduler.go:448] Skip schedule deleting pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-25
I0513 18:40:34.021010  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events: (7.90545ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54922]
I0513 18:40:34.022404  107570 wrap.go:47] DELETE /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-25: (7.528743ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54912]
I0513 18:40:34.024752  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events: (3.257458ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54922]
I0513 18:40:34.027552  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-26
I0513 18:40:34.027585  107570 scheduler.go:448] Skip schedule deleting pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-26
I0513 18:40:34.029720  107570 wrap.go:47] DELETE /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-26: (6.971124ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54912]
I0513 18:40:34.038394  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events: (6.751914ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54922]
I0513 18:40:34.038757  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-27
I0513 18:40:34.038785  107570 scheduler.go:448] Skip schedule deleting pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-27
I0513 18:40:34.040510  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events: (1.446207ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54922]
I0513 18:40:34.041728  107570 wrap.go:47] DELETE /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-27: (11.463572ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54912]
I0513 18:40:34.044902  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-28
I0513 18:40:34.045016  107570 scheduler.go:448] Skip schedule deleting pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-28
I0513 18:40:34.046360  107570 wrap.go:47] DELETE /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-28: (4.302915ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54912]
I0513 18:40:34.048971  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-29
I0513 18:40:34.049002  107570 scheduler.go:448] Skip schedule deleting pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-29
I0513 18:40:34.049969  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events: (4.638202ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54922]
I0513 18:40:34.055346  107570 wrap.go:47] DELETE /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-29: (8.563432ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54912]
I0513 18:40:34.058471  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-30
I0513 18:40:34.058514  107570 scheduler.go:448] Skip schedule deleting pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-30
I0513 18:40:34.060405  107570 wrap.go:47] DELETE /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-30: (4.670648ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54912]
I0513 18:40:34.063857  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-31
I0513 18:40:34.063904  107570 scheduler.go:448] Skip schedule deleting pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-31
I0513 18:40:34.065779  107570 wrap.go:47] DELETE /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-31: (4.69885ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54912]
I0513 18:40:34.067168  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events: (15.729468ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54922]
I0513 18:40:34.071587  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-32
I0513 18:40:34.071686  107570 scheduler.go:448] Skip schedule deleting pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-32
I0513 18:40:34.075607  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events: (7.240753ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54922]
I0513 18:40:34.080547  107570 wrap.go:47] DELETE /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-32: (14.307598ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54912]
I0513 18:40:34.082602  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events: (3.668832ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54922]
I0513 18:40:34.087805  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events: (4.663999ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54922]
I0513 18:40:34.088938  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-33
I0513 18:40:34.088969  107570 scheduler.go:448] Skip schedule deleting pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-33
I0513 18:40:34.090401  107570 wrap.go:47] DELETE /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-33: (6.975118ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54912]
I0513 18:40:34.095038  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events: (2.834915ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54922]
I0513 18:40:34.095690  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-34
I0513 18:40:34.095732  107570 scheduler.go:448] Skip schedule deleting pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-34
I0513 18:40:34.097018  107570 wrap.go:47] DELETE /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-34: (6.152965ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54912]
I0513 18:40:34.097559  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events: (1.508272ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54922]
I0513 18:40:34.100161  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-35
I0513 18:40:34.100252  107570 scheduler.go:448] Skip schedule deleting pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-35
I0513 18:40:34.101295  107570 wrap.go:47] DELETE /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-35: (3.956605ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54912]
I0513 18:40:34.101759  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events: (1.22008ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54922]
I0513 18:40:34.104181  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-36
I0513 18:40:34.104224  107570 scheduler.go:448] Skip schedule deleting pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-36
I0513 18:40:34.107167  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events: (2.584898ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54922]
I0513 18:40:34.107519  107570 wrap.go:47] DELETE /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-36: (5.802867ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54912]
I0513 18:40:34.111888  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-37
I0513 18:40:34.111927  107570 scheduler.go:448] Skip schedule deleting pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-37
I0513 18:40:34.114349  107570 wrap.go:47] DELETE /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-37: (6.465813ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54912]
I0513 18:40:34.115107  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events: (2.863372ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54922]
I0513 18:40:34.117566  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-38
I0513 18:40:34.117600  107570 scheduler.go:448] Skip schedule deleting pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-38
I0513 18:40:34.119088  107570 wrap.go:47] DELETE /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-38: (4.124994ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54912]
I0513 18:40:34.119901  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events: (1.667329ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54922]
I0513 18:40:34.122505  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-39
I0513 18:40:34.122752  107570 scheduler.go:448] Skip schedule deleting pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-39
I0513 18:40:34.123834  107570 wrap.go:47] DELETE /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-39: (4.390309ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54912]
I0513 18:40:34.124476  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events: (1.348434ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54922]
I0513 18:40:34.127229  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-40
I0513 18:40:34.127302  107570 scheduler.go:448] Skip schedule deleting pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-40
I0513 18:40:34.128382  107570 wrap.go:47] DELETE /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-40: (3.669385ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54912]
I0513 18:40:34.129087  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events: (1.525296ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54922]
I0513 18:40:34.132154  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-41
I0513 18:40:34.132204  107570 scheduler.go:448] Skip schedule deleting pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-41
I0513 18:40:34.135195  107570 wrap.go:47] DELETE /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-41: (5.915233ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54912]
I0513 18:40:34.137658  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events: (1.752952ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54922]
I0513 18:40:34.138925  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-42
I0513 18:40:34.139012  107570 scheduler.go:448] Skip schedule deleting pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-42
I0513 18:40:34.140159  107570 wrap.go:47] DELETE /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-42: (4.361226ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54912]
I0513 18:40:34.141576  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events: (2.309722ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54922]
I0513 18:40:34.148238  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-43
I0513 18:40:34.148290  107570 scheduler.go:448] Skip schedule deleting pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-43
I0513 18:40:34.151712  107570 wrap.go:47] DELETE /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-43: (11.09472ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54912]
I0513 18:40:34.152167  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events: (3.446492ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54922]
I0513 18:40:34.155287  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-44
I0513 18:40:34.155363  107570 scheduler.go:448] Skip schedule deleting pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-44
I0513 18:40:34.157367  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events: (1.705274ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54922]
I0513 18:40:34.157877  107570 wrap.go:47] DELETE /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-44: (5.756608ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54912]
I0513 18:40:34.161720  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-45
I0513 18:40:34.161998  107570 scheduler.go:448] Skip schedule deleting pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-45
I0513 18:40:34.163779  107570 wrap.go:47] DELETE /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-45: (5.528885ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54912]
I0513 18:40:34.166907  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-46
I0513 18:40:34.167060  107570 scheduler.go:448] Skip schedule deleting pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-46
I0513 18:40:34.168620  107570 wrap.go:47] DELETE /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-46: (4.453133ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54912]
I0513 18:40:34.169382  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events: (5.371002ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54922]
I0513 18:40:34.171799  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events: (2.042022ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54922]
I0513 18:40:34.172516  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-47
I0513 18:40:34.172561  107570 scheduler.go:448] Skip schedule deleting pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-47
I0513 18:40:34.174989  107570 wrap.go:47] DELETE /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-47: (5.4778ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54912]
I0513 18:40:34.175587  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events: (2.638538ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54922]
I0513 18:40:34.177879  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-48
I0513 18:40:34.178852  107570 scheduler.go:448] Skip schedule deleting pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-48
I0513 18:40:34.179433  107570 wrap.go:47] DELETE /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/pods/ppod-48: (4.05406ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54912]
I0513 18:40:34.181630  107570 wrap.go:47] POST /api/v1/namespaces/preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/events: (1.684182ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54922]
I0513 18:40:34.183881  107570 scheduling_queue.go:795] About to try and schedule pod preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-49
I0513 18:40:34.183918  107570 scheduler.go:448] Skip schedule deleting pod: preemption-race27900d7b-0edd-48ae-a298-f729136d37f9/ppod-49