PR #77956: [WIP][DO-NOT-REVIEW] Break kubelet intertwining with other packages
Result: FAILURE
Tests: 1 failed / 1429 succeeded
Started: 2019-05-15 22:24
Elapsed: 33m16s
Revision:
Builder: gke-prow-containerd-pool-99179761-6j45
Refs: master:aaec77a9, 77956:3c2fe78e
pod: 1d5be0c4-7760-11e9-8ee0-0a580a6c0dad
infra-commit: 0f0e3e066
repo: k8s.io/kubernetes
repo-commit: fcb7f83745f7875b33c0544b64064f42a4232588
repos: {u'k8s.io/kubernetes': u'master:aaec77a94b67878ca1bdd884f2778f4388d203f2,77956:3c2fe78eba1d043a9366e2e58ca65fd77bb2d458'}

Test Failures


k8s.io/kubernetes/test/integration/scheduler TestPreemptionRaces (31s)

go test -v k8s.io/kubernetes/test/integration/scheduler -run TestPreemptionRaces$
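
To reproduce locally (a sketch, not verified against this exact revision): the scheduler integration tests expect a local etcd listening on 127.0.0.1:2379, so from a kubernetes/kubernetes checkout something like the following should run just this test, while the bare go test command above assumes etcd is already installed, on PATH, and running:

./hack/install-etcd.sh
export PATH="$PWD/third_party/etcd:$PATH"
make test-integration WHAT=./test/integration/scheduler KUBE_TEST_ARGS='-run TestPreemptionRaces$'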
I0515 22:49:29.041949  109070 services.go:33] Network range for service cluster IPs is unspecified. Defaulting to {10.0.0.0 ffffff00}.
I0515 22:49:29.042000  109070 services.go:45] Setting service IP to "10.0.0.1" (read-write).
I0515 22:49:29.042017  109070 master.go:277] Node port range unspecified. Defaulting to 30000-32767.
I0515 22:49:29.042027  109070 master.go:233] Using reconciler: 
I0515 22:49:29.044204  109070 storage_factory.go:285] storing podtemplates in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"688a37b5-4d35-42d4-aec1-2914af0b42b7", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0515 22:49:29.044344  109070 client.go:354] parsed scheme: ""
I0515 22:49:29.044375  109070 client.go:354] scheme "" not registered, fallback to default scheme
I0515 22:49:29.044417  109070 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0515 22:49:29.044755  109070 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0515 22:49:29.045109  109070 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0515 22:49:29.045245  109070 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0515 22:49:29.045631  109070 store.go:1320] Monitoring podtemplates count at <storage-prefix>//podtemplates
I0515 22:49:29.045713  109070 reflector.go:160] Listing and watching *core.PodTemplate from storage/cacher.go:/podtemplates
I0515 22:49:29.045730  109070 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"688a37b5-4d35-42d4-aec1-2914af0b42b7", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0515 22:49:29.046089  109070 client.go:354] parsed scheme: ""
I0515 22:49:29.046162  109070 client.go:354] scheme "" not registered, fallback to default scheme
I0515 22:49:29.046266  109070 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0515 22:49:29.046425  109070 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0515 22:49:29.047117  109070 watch_cache.go:405] Replace watchCache (rev: 25244) 
I0515 22:49:29.047670  109070 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0515 22:49:29.047753  109070 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0515 22:49:29.048035  109070 store.go:1320] Monitoring events count at <storage-prefix>//events
I0515 22:49:29.048164  109070 reflector.go:160] Listing and watching *core.Event from storage/cacher.go:/events
I0515 22:49:29.048113  109070 storage_factory.go:285] storing limitranges in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"688a37b5-4d35-42d4-aec1-2914af0b42b7", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0515 22:49:29.048273  109070 client.go:354] parsed scheme: ""
I0515 22:49:29.048324  109070 client.go:354] scheme "" not registered, fallback to default scheme
I0515 22:49:29.048413  109070 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0515 22:49:29.048579  109070 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0515 22:49:29.048961  109070 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0515 22:49:29.049187  109070 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0515 22:49:29.049187  109070 store.go:1320] Monitoring limitranges count at <storage-prefix>//limitranges
I0515 22:49:29.049246  109070 storage_factory.go:285] storing resourcequotas in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"688a37b5-4d35-42d4-aec1-2914af0b42b7", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0515 22:49:29.049316  109070 watch_cache.go:405] Replace watchCache (rev: 25244) 
I0515 22:49:29.049333  109070 client.go:354] parsed scheme: ""
I0515 22:49:29.049347  109070 client.go:354] scheme "" not registered, fallback to default scheme
I0515 22:49:29.049379  109070 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0515 22:49:29.049480  109070 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0515 22:49:29.049655  109070 reflector.go:160] Listing and watching *core.LimitRange from storage/cacher.go:/limitranges
I0515 22:49:29.049822  109070 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0515 22:49:29.049901  109070 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0515 22:49:29.049974  109070 store.go:1320] Monitoring resourcequotas count at <storage-prefix>//resourcequotas
I0515 22:49:29.050015  109070 reflector.go:160] Listing and watching *core.ResourceQuota from storage/cacher.go:/resourcequotas
I0515 22:49:29.050902  109070 storage_factory.go:285] storing secrets in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"688a37b5-4d35-42d4-aec1-2914af0b42b7", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0515 22:49:29.051122  109070 client.go:354] parsed scheme: ""
I0515 22:49:29.051150  109070 client.go:354] scheme "" not registered, fallback to default scheme
I0515 22:49:29.051258  109070 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0515 22:49:29.051343  109070 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0515 22:49:29.051490  109070 watch_cache.go:405] Replace watchCache (rev: 25244) 
I0515 22:49:29.051748  109070 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0515 22:49:29.051787  109070 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0515 22:49:29.051914  109070 store.go:1320] Monitoring secrets count at <storage-prefix>//secrets
I0515 22:49:29.052025  109070 reflector.go:160] Listing and watching *core.Secret from storage/cacher.go:/secrets
I0515 22:49:29.052086  109070 storage_factory.go:285] storing persistentvolumes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"688a37b5-4d35-42d4-aec1-2914af0b42b7", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0515 22:49:29.053228  109070 client.go:354] parsed scheme: ""
I0515 22:49:29.053296  109070 client.go:354] scheme "" not registered, fallback to default scheme
I0515 22:49:29.053372  109070 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0515 22:49:29.053427  109070 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0515 22:49:29.054389  109070 watch_cache.go:405] Replace watchCache (rev: 25244) 
I0515 22:49:29.055318  109070 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0515 22:49:29.055388  109070 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0515 22:49:29.055607  109070 store.go:1320] Monitoring persistentvolumes count at <storage-prefix>//persistentvolumes
I0515 22:49:29.055658  109070 reflector.go:160] Listing and watching *core.PersistentVolume from storage/cacher.go:/persistentvolumes
I0515 22:49:29.055834  109070 watch_cache.go:405] Replace watchCache (rev: 25244) 
I0515 22:49:29.055822  109070 storage_factory.go:285] storing persistentvolumeclaims in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"688a37b5-4d35-42d4-aec1-2914af0b42b7", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0515 22:49:29.056087  109070 client.go:354] parsed scheme: ""
I0515 22:49:29.056155  109070 client.go:354] scheme "" not registered, fallback to default scheme
I0515 22:49:29.056229  109070 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0515 22:49:29.056289  109070 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0515 22:49:29.056611  109070 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0515 22:49:29.056650  109070 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0515 22:49:29.056774  109070 store.go:1320] Monitoring persistentvolumeclaims count at <storage-prefix>//persistentvolumeclaims
I0515 22:49:29.056851  109070 reflector.go:160] Listing and watching *core.PersistentVolumeClaim from storage/cacher.go:/persistentvolumeclaims
I0515 22:49:29.056969  109070 storage_factory.go:285] storing configmaps in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"688a37b5-4d35-42d4-aec1-2914af0b42b7", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0515 22:49:29.057054  109070 client.go:354] parsed scheme: ""
I0515 22:49:29.057846  109070 client.go:354] scheme "" not registered, fallback to default scheme
I0515 22:49:29.057969  109070 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0515 22:49:29.058072  109070 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0515 22:49:29.058233  109070 watch_cache.go:405] Replace watchCache (rev: 25244) 
I0515 22:49:29.058388  109070 watch_cache.go:405] Replace watchCache (rev: 25244) 
I0515 22:49:29.058611  109070 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0515 22:49:29.058762  109070 store.go:1320] Monitoring configmaps count at <storage-prefix>//configmaps
I0515 22:49:29.058806  109070 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0515 22:49:29.058876  109070 reflector.go:160] Listing and watching *core.ConfigMap from storage/cacher.go:/configmaps
I0515 22:49:29.058948  109070 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"688a37b5-4d35-42d4-aec1-2914af0b42b7", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0515 22:49:29.059091  109070 client.go:354] parsed scheme: ""
I0515 22:49:29.059106  109070 client.go:354] scheme "" not registered, fallback to default scheme
I0515 22:49:29.059189  109070 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0515 22:49:29.059344  109070 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0515 22:49:29.059684  109070 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0515 22:49:29.059836  109070 store.go:1320] Monitoring namespaces count at <storage-prefix>//namespaces
I0515 22:49:29.059926  109070 watch_cache.go:405] Replace watchCache (rev: 25244) 
I0515 22:49:29.060018  109070 storage_factory.go:285] storing endpoints in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"688a37b5-4d35-42d4-aec1-2914af0b42b7", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0515 22:49:29.060121  109070 client.go:354] parsed scheme: ""
I0515 22:49:29.060132  109070 client.go:354] scheme "" not registered, fallback to default scheme
I0515 22:49:29.060221  109070 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0515 22:49:29.060307  109070 reflector.go:160] Listing and watching *core.Namespace from storage/cacher.go:/namespaces
I0515 22:49:29.060370  109070 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0515 22:49:29.060511  109070 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0515 22:49:29.060882  109070 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0515 22:49:29.060933  109070 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0515 22:49:29.061163  109070 reflector.go:160] Listing and watching *core.Endpoints from storage/cacher.go:/services/endpoints
I0515 22:49:29.061076  109070 store.go:1320] Monitoring endpoints count at <storage-prefix>//services/endpoints
I0515 22:49:29.061578  109070 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"688a37b5-4d35-42d4-aec1-2914af0b42b7", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0515 22:49:29.061666  109070 watch_cache.go:405] Replace watchCache (rev: 25244) 
I0515 22:49:29.061895  109070 client.go:354] parsed scheme: ""
I0515 22:49:29.062079  109070 watch_cache.go:405] Replace watchCache (rev: 25244) 
I0515 22:49:29.062093  109070 client.go:354] scheme "" not registered, fallback to default scheme
I0515 22:49:29.063621  109070 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0515 22:49:29.063688  109070 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0515 22:49:29.064023  109070 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0515 22:49:29.064114  109070 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0515 22:49:29.064157  109070 store.go:1320] Monitoring nodes count at <storage-prefix>//minions
I0515 22:49:29.064226  109070 reflector.go:160] Listing and watching *core.Node from storage/cacher.go:/minions
I0515 22:49:29.064331  109070 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"688a37b5-4d35-42d4-aec1-2914af0b42b7", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0515 22:49:29.064415  109070 client.go:354] parsed scheme: ""
I0515 22:49:29.064439  109070 client.go:354] scheme "" not registered, fallback to default scheme
I0515 22:49:29.064523  109070 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0515 22:49:29.064626  109070 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0515 22:49:29.065162  109070 watch_cache.go:405] Replace watchCache (rev: 25244) 
I0515 22:49:29.065186  109070 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0515 22:49:29.065251  109070 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0515 22:49:29.065522  109070 store.go:1320] Monitoring pods count at <storage-prefix>//pods
I0515 22:49:29.065547  109070 reflector.go:160] Listing and watching *core.Pod from storage/cacher.go:/pods
I0515 22:49:29.065784  109070 storage_factory.go:285] storing serviceaccounts in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"688a37b5-4d35-42d4-aec1-2914af0b42b7", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0515 22:49:29.066022  109070 client.go:354] parsed scheme: ""
I0515 22:49:29.066054  109070 client.go:354] scheme "" not registered, fallback to default scheme
I0515 22:49:29.066109  109070 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0515 22:49:29.067088  109070 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0515 22:49:29.067525  109070 watch_cache.go:405] Replace watchCache (rev: 25244) 
I0515 22:49:29.067884  109070 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0515 22:49:29.068055  109070 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0515 22:49:29.068157  109070 store.go:1320] Monitoring serviceaccounts count at <storage-prefix>//serviceaccounts
I0515 22:49:29.068252  109070 reflector.go:160] Listing and watching *core.ServiceAccount from storage/cacher.go:/serviceaccounts
I0515 22:49:29.068524  109070 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"688a37b5-4d35-42d4-aec1-2914af0b42b7", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0515 22:49:29.068704  109070 client.go:354] parsed scheme: ""
I0515 22:49:29.068732  109070 client.go:354] scheme "" not registered, fallback to default scheme
I0515 22:49:29.068829  109070 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0515 22:49:29.069074  109070 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0515 22:49:29.069425  109070 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0515 22:49:29.069611  109070 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0515 22:49:29.069654  109070 watch_cache.go:405] Replace watchCache (rev: 25244) 
I0515 22:49:29.069856  109070 store.go:1320] Monitoring services count at <storage-prefix>//services/specs
I0515 22:49:29.069903  109070 reflector.go:160] Listing and watching *core.Service from storage/cacher.go:/services/specs
I0515 22:49:29.069913  109070 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"688a37b5-4d35-42d4-aec1-2914af0b42b7", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0515 22:49:29.070072  109070 client.go:354] parsed scheme: ""
I0515 22:49:29.070085  109070 client.go:354] scheme "" not registered, fallback to default scheme
I0515 22:49:29.070119  109070 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0515 22:49:29.070183  109070 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0515 22:49:29.070957  109070 watch_cache.go:405] Replace watchCache (rev: 25244) 
I0515 22:49:29.070975  109070 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0515 22:49:29.071233  109070 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0515 22:49:29.071462  109070 client.go:354] parsed scheme: ""
I0515 22:49:29.071515  109070 client.go:354] scheme "" not registered, fallback to default scheme
I0515 22:49:29.071553  109070 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0515 22:49:29.071690  109070 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0515 22:49:29.071987  109070 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0515 22:49:29.072142  109070 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0515 22:49:29.072439  109070 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"688a37b5-4d35-42d4-aec1-2914af0b42b7", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0515 22:49:29.072666  109070 client.go:354] parsed scheme: ""
I0515 22:49:29.072688  109070 client.go:354] scheme "" not registered, fallback to default scheme
I0515 22:49:29.072722  109070 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0515 22:49:29.072844  109070 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0515 22:49:29.073180  109070 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0515 22:49:29.073260  109070 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0515 22:49:29.073303  109070 store.go:1320] Monitoring replicationcontrollers count at <storage-prefix>//controllers
I0515 22:49:29.073367  109070 reflector.go:160] Listing and watching *core.ReplicationController from storage/cacher.go:/controllers
I0515 22:49:29.076991  109070 watch_cache.go:405] Replace watchCache (rev: 25245) 
I0515 22:49:29.171032  109070 master.go:417] Skipping disabled API group "auditregistration.k8s.io".
I0515 22:49:29.171185  109070 master.go:425] Enabling API group "authentication.k8s.io".
I0515 22:49:29.171264  109070 master.go:425] Enabling API group "authorization.k8s.io".
I0515 22:49:29.171706  109070 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"688a37b5-4d35-42d4-aec1-2914af0b42b7", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0515 22:49:29.171997  109070 client.go:354] parsed scheme: ""
I0515 22:49:29.172060  109070 client.go:354] scheme "" not registered, fallback to default scheme
I0515 22:49:29.172204  109070 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0515 22:49:29.172438  109070 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0515 22:49:29.174907  109070 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0515 22:49:29.175216  109070 store.go:1320] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I0515 22:49:29.175514  109070 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0515 22:49:29.175597  109070 reflector.go:160] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I0515 22:49:29.175782  109070 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"688a37b5-4d35-42d4-aec1-2914af0b42b7", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0515 22:49:29.176002  109070 client.go:354] parsed scheme: ""
I0515 22:49:29.176062  109070 client.go:354] scheme "" not registered, fallback to default scheme
I0515 22:49:29.176170  109070 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0515 22:49:29.176530  109070 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0515 22:49:29.176882  109070 watch_cache.go:405] Replace watchCache (rev: 25248) 
I0515 22:49:29.177172  109070 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0515 22:49:29.178586  109070 store.go:1320] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I0515 22:49:29.178670  109070 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0515 22:49:29.179025  109070 reflector.go:160] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I0515 22:49:29.179431  109070 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"688a37b5-4d35-42d4-aec1-2914af0b42b7", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0515 22:49:29.179604  109070 client.go:354] parsed scheme: ""
I0515 22:49:29.179632  109070 client.go:354] scheme "" not registered, fallback to default scheme
I0515 22:49:29.179921  109070 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0515 22:49:29.180301  109070 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0515 22:49:29.180398  109070 watch_cache.go:405] Replace watchCache (rev: 25248) 
I0515 22:49:29.182183  109070 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0515 22:49:29.182464  109070 store.go:1320] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I0515 22:49:29.182562  109070 master.go:425] Enabling API group "autoscaling".
I0515 22:49:29.183099  109070 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0515 22:49:29.183670  109070 reflector.go:160] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I0515 22:49:29.184308  109070 storage_factory.go:285] storing jobs.batch in batch/v1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"688a37b5-4d35-42d4-aec1-2914af0b42b7", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0515 22:49:29.186655  109070 client.go:354] parsed scheme: ""
I0515 22:49:29.184779  109070 watch_cache.go:405] Replace watchCache (rev: 25248) 
I0515 22:49:29.190743  109070 client.go:354] scheme "" not registered, fallback to default scheme
I0515 22:49:29.191057  109070 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0515 22:49:29.191396  109070 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0515 22:49:29.192245  109070 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0515 22:49:29.192601  109070 store.go:1320] Monitoring jobs.batch count at <storage-prefix>//jobs
I0515 22:49:29.193156  109070 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0515 22:49:29.193286  109070 reflector.go:160] Listing and watching *batch.Job from storage/cacher.go:/jobs
I0515 22:49:29.195962  109070 watch_cache.go:405] Replace watchCache (rev: 25249) 
I0515 22:49:29.205834  109070 storage_factory.go:285] storing cronjobs.batch in batch/v1beta1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"688a37b5-4d35-42d4-aec1-2914af0b42b7", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0515 22:49:29.206046  109070 client.go:354] parsed scheme: ""
I0515 22:49:29.206074  109070 client.go:354] scheme "" not registered, fallback to default scheme
I0515 22:49:29.206208  109070 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0515 22:49:29.206327  109070 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0515 22:49:29.212869  109070 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0515 22:49:29.213658  109070 store.go:1320] Monitoring cronjobs.batch count at <storage-prefix>//cronjobs
I0515 22:49:29.213761  109070 master.go:425] Enabling API group "batch".
I0515 22:49:29.214091  109070 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"688a37b5-4d35-42d4-aec1-2914af0b42b7", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0515 22:49:29.214258  109070 client.go:354] parsed scheme: ""
I0515 22:49:29.214285  109070 client.go:354] scheme "" not registered, fallback to default scheme
I0515 22:49:29.214371  109070 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0515 22:49:29.214479  109070 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0515 22:49:29.214570  109070 reflector.go:160] Listing and watching *batch.CronJob from storage/cacher.go:/cronjobs
I0515 22:49:29.214900  109070 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0515 22:49:29.215306  109070 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0515 22:49:29.215464  109070 store.go:1320] Monitoring certificatesigningrequests.certificates.k8s.io count at <storage-prefix>//certificatesigningrequests
I0515 22:49:29.215540  109070 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0515 22:49:29.215617  109070 reflector.go:160] Listing and watching *certificates.CertificateSigningRequest from storage/cacher.go:/certificatesigningrequests
I0515 22:49:29.217152  109070 watch_cache.go:405] Replace watchCache (rev: 25249) 
I0515 22:49:29.217263  109070 watch_cache.go:405] Replace watchCache (rev: 25249) 
I0515 22:49:29.217317  109070 master.go:425] Enabling API group "certificates.k8s.io".
I0515 22:49:29.221116  109070 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"688a37b5-4d35-42d4-aec1-2914af0b42b7", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0515 22:49:29.221440  109070 client.go:354] parsed scheme: ""
I0515 22:49:29.221540  109070 client.go:354] scheme "" not registered, fallback to default scheme
I0515 22:49:29.221691  109070 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0515 22:49:29.221881  109070 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0515 22:49:29.223849  109070 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0515 22:49:29.224080  109070 store.go:1320] Monitoring leases.coordination.k8s.io count at <storage-prefix>//leases
I0515 22:49:29.224154  109070 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0515 22:49:29.224226  109070 reflector.go:160] Listing and watching *coordination.Lease from storage/cacher.go:/leases
I0515 22:49:29.224829  109070 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"688a37b5-4d35-42d4-aec1-2914af0b42b7", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0515 22:49:29.225005  109070 client.go:354] parsed scheme: ""
I0515 22:49:29.225064  109070 client.go:354] scheme "" not registered, fallback to default scheme
I0515 22:49:29.225135  109070 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0515 22:49:29.225210  109070 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0515 22:49:29.225587  109070 watch_cache.go:405] Replace watchCache (rev: 25249) 
I0515 22:49:29.226846  109070 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0515 22:49:29.226961  109070 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0515 22:49:29.227211  109070 store.go:1320] Monitoring leases.coordination.k8s.io count at <storage-prefix>//leases
I0515 22:49:29.227290  109070 master.go:425] Enabling API group "coordination.k8s.io".
I0515 22:49:29.227350  109070 reflector.go:160] Listing and watching *coordination.Lease from storage/cacher.go:/leases
I0515 22:49:29.227698  109070 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"688a37b5-4d35-42d4-aec1-2914af0b42b7", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0515 22:49:29.228458  109070 client.go:354] parsed scheme: ""
I0515 22:49:29.228635  109070 client.go:354] scheme "" not registered, fallback to default scheme
I0515 22:49:29.228681  109070 watch_cache.go:405] Replace watchCache (rev: 25249) 
I0515 22:49:29.228777  109070 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0515 22:49:29.229289  109070 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0515 22:49:29.229684  109070 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0515 22:49:29.229873  109070 store.go:1320] Monitoring replicationcontrollers count at <storage-prefix>//controllers
I0515 22:49:29.230097  109070 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0515 22:49:29.230199  109070 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"688a37b5-4d35-42d4-aec1-2914af0b42b7", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0515 22:49:29.230248  109070 reflector.go:160] Listing and watching *core.ReplicationController from storage/cacher.go:/controllers
I0515 22:49:29.230335  109070 client.go:354] parsed scheme: ""
I0515 22:49:29.230361  109070 client.go:354] scheme "" not registered, fallback to default scheme
I0515 22:49:29.230899  109070 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0515 22:49:29.230983  109070 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0515 22:49:29.231676  109070 watch_cache.go:405] Replace watchCache (rev: 25249) 
I0515 22:49:29.232721  109070 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0515 22:49:29.233031  109070 store.go:1320] Monitoring daemonsets.apps count at <storage-prefix>//daemonsets
I0515 22:49:29.233215  109070 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0515 22:49:29.233321  109070 reflector.go:160] Listing and watching *apps.DaemonSet from storage/cacher.go:/daemonsets
I0515 22:49:29.237045  109070 watch_cache.go:405] Replace watchCache (rev: 25250) 
I0515 22:49:29.237557  109070 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"688a37b5-4d35-42d4-aec1-2914af0b42b7", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0515 22:49:29.237739  109070 client.go:354] parsed scheme: ""
I0515 22:49:29.237787  109070 client.go:354] scheme "" not registered, fallback to default scheme
I0515 22:49:29.237862  109070 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0515 22:49:29.238016  109070 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0515 22:49:29.239104  109070 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0515 22:49:29.239163  109070 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0515 22:49:29.239289  109070 store.go:1320] Monitoring deployments.apps count at <storage-prefix>//deployments
I0515 22:49:29.239376  109070 reflector.go:160] Listing and watching *apps.Deployment from storage/cacher.go:/deployments
I0515 22:49:29.240541  109070 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"688a37b5-4d35-42d4-aec1-2914af0b42b7", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0515 22:49:29.240654  109070 client.go:354] parsed scheme: ""
I0515 22:49:29.240674  109070 client.go:354] scheme "" not registered, fallback to default scheme
I0515 22:49:29.240717  109070 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0515 22:49:29.240797  109070 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0515 22:49:29.241229  109070 watch_cache.go:405] Replace watchCache (rev: 25250) 
I0515 22:49:29.241695  109070 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0515 22:49:29.241775  109070 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0515 22:49:29.241947  109070 store.go:1320] Monitoring ingresses.networking.k8s.io count at <storage-prefix>//ingress
I0515 22:49:29.241983  109070 reflector.go:160] Listing and watching *networking.Ingress from storage/cacher.go:/ingress
I0515 22:49:29.242194  109070 storage_factory.go:285] storing podsecuritypolicies.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"688a37b5-4d35-42d4-aec1-2914af0b42b7", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0515 22:49:29.242285  109070 client.go:354] parsed scheme: ""
I0515 22:49:29.242303  109070 client.go:354] scheme "" not registered, fallback to default scheme
I0515 22:49:29.242337  109070 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0515 22:49:29.242413  109070 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0515 22:49:29.242794  109070 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0515 22:49:29.242944  109070 store.go:1320] Monitoring podsecuritypolicies.policy count at <storage-prefix>//podsecuritypolicy
I0515 22:49:29.243099  109070 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"688a37b5-4d35-42d4-aec1-2914af0b42b7", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0515 22:49:29.243163  109070 client.go:354] parsed scheme: ""
I0515 22:49:29.243180  109070 client.go:354] scheme "" not registered, fallback to default scheme
I0515 22:49:29.243212  109070 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0515 22:49:29.243270  109070 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0515 22:49:29.243306  109070 reflector.go:160] Listing and watching *policy.PodSecurityPolicy from storage/cacher.go:/podsecuritypolicy
I0515 22:49:29.243572  109070 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0515 22:49:29.243838  109070 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0515 22:49:29.243849  109070 watch_cache.go:405] Replace watchCache (rev: 25250) 
I0515 22:49:29.243945  109070 store.go:1320] Monitoring replicasets.apps count at <storage-prefix>//replicasets
I0515 22:49:29.244090  109070 storage_factory.go:285] storing networkpolicies.networking.k8s.io in networking.k8s.io/v1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"688a37b5-4d35-42d4-aec1-2914af0b42b7", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0515 22:49:29.244154  109070 client.go:354] parsed scheme: ""
I0515 22:49:29.244166  109070 client.go:354] scheme "" not registered, fallback to default scheme
I0515 22:49:29.244198  109070 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0515 22:49:29.244236  109070 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0515 22:49:29.244264  109070 reflector.go:160] Listing and watching *apps.ReplicaSet from storage/cacher.go:/replicasets
I0515 22:49:29.244487  109070 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0515 22:49:29.244790  109070 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0515 22:49:29.244875  109070 store.go:1320] Monitoring networkpolicies.networking.k8s.io count at <storage-prefix>//networkpolicies
I0515 22:49:29.244891  109070 master.go:425] Enabling API group "extensions".
I0515 22:49:29.245029  109070 storage_factory.go:285] storing networkpolicies.networking.k8s.io in networking.k8s.io/v1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"688a37b5-4d35-42d4-aec1-2914af0b42b7", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0515 22:49:29.245092  109070 client.go:354] parsed scheme: ""
I0515 22:49:29.245102  109070 client.go:354] scheme "" not registered, fallback to default scheme
I0515 22:49:29.245133  109070 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0515 22:49:29.245183  109070 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0515 22:49:29.245212  109070 reflector.go:160] Listing and watching *networking.NetworkPolicy from storage/cacher.go:/networkpolicies
I0515 22:49:29.245391  109070 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0515 22:49:29.245678  109070 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0515 22:49:29.245747  109070 store.go:1320] Monitoring networkpolicies.networking.k8s.io count at <storage-prefix>//networkpolicies
I0515 22:49:29.245884  109070 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"688a37b5-4d35-42d4-aec1-2914af0b42b7", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0515 22:49:29.245946  109070 client.go:354] parsed scheme: ""
I0515 22:49:29.245956  109070 client.go:354] scheme "" not registered, fallback to default scheme
I0515 22:49:29.246001  109070 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0515 22:49:29.246032  109070 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0515 22:49:29.246061  109070 reflector.go:160] Listing and watching *networking.NetworkPolicy from storage/cacher.go:/networkpolicies
I0515 22:49:29.246286  109070 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0515 22:49:29.246578  109070 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0515 22:49:29.246624  109070 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0515 22:49:29.246718  109070 store.go:1320] Monitoring ingresses.networking.k8s.io count at <storage-prefix>//ingress
I0515 22:49:29.246735  109070 master.go:425] Enabling API group "networking.k8s.io".
I0515 22:49:29.246763  109070 storage_factory.go:285] storing runtimeclasses.node.k8s.io in node.k8s.io/v1beta1, reading as node.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"688a37b5-4d35-42d4-aec1-2914af0b42b7", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0515 22:49:29.246832  109070 client.go:354] parsed scheme: ""
I0515 22:49:29.246844  109070 client.go:354] scheme "" not registered, fallback to default scheme
I0515 22:49:29.246874  109070 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0515 22:49:29.246934  109070 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0515 22:49:29.246981  109070 reflector.go:160] Listing and watching *networking.Ingress from storage/cacher.go:/ingress
I0515 22:49:29.247159  109070 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0515 22:49:29.247320  109070 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0515 22:49:29.247362  109070 store.go:1320] Monitoring runtimeclasses.node.k8s.io count at <storage-prefix>//runtimeclasses
I0515 22:49:29.247375  109070 master.go:425] Enabling API group "node.k8s.io".
I0515 22:49:29.247405  109070 reflector.go:160] Listing and watching *node.RuntimeClass from storage/cacher.go:/runtimeclasses
I0515 22:49:29.247559  109070 storage_factory.go:285] storing poddisruptionbudgets.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"688a37b5-4d35-42d4-aec1-2914af0b42b7", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0515 22:49:29.247625  109070 client.go:354] parsed scheme: ""
I0515 22:49:29.247638  109070 client.go:354] scheme "" not registered, fallback to default scheme
I0515 22:49:29.247667  109070 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0515 22:49:29.247717  109070 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0515 22:49:29.247933  109070 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0515 22:49:29.248032  109070 store.go:1320] Monitoring poddisruptionbudgets.policy count at <storage-prefix>//poddisruptionbudgets
I0515 22:49:29.248035  109070 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0515 22:49:29.248119  109070 reflector.go:160] Listing and watching *policy.PodDisruptionBudget from storage/cacher.go:/poddisruptionbudgets
I0515 22:49:29.248157  109070 storage_factory.go:285] storing podsecuritypolicies.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"688a37b5-4d35-42d4-aec1-2914af0b42b7", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0515 22:49:29.248220  109070 client.go:354] parsed scheme: ""
I0515 22:49:29.248231  109070 client.go:354] scheme "" not registered, fallback to default scheme
I0515 22:49:29.248261  109070 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0515 22:49:29.248343  109070 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0515 22:49:29.248666  109070 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0515 22:49:29.248702  109070 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0515 22:49:29.248750  109070 store.go:1320] Monitoring podsecuritypolicies.policy count at <storage-prefix>//podsecuritypolicy
I0515 22:49:29.248765  109070 master.go:425] Enabling API group "policy".
I0515 22:49:29.248815  109070 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"688a37b5-4d35-42d4-aec1-2914af0b42b7", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0515 22:49:29.248846  109070 reflector.go:160] Listing and watching *policy.PodSecurityPolicy from storage/cacher.go:/podsecuritypolicy
I0515 22:49:29.248875  109070 client.go:354] parsed scheme: ""
I0515 22:49:29.248885  109070 client.go:354] scheme "" not registered, fallback to default scheme
I0515 22:49:29.248916  109070 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0515 22:49:29.249046  109070 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0515 22:49:29.249265  109070 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0515 22:49:29.249290  109070 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0515 22:49:29.249342  109070 store.go:1320] Monitoring roles.rbac.authorization.k8s.io count at <storage-prefix>//roles
I0515 22:49:29.249406  109070 reflector.go:160] Listing and watching *rbac.Role from storage/cacher.go:/roles
I0515 22:49:29.249480  109070 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"688a37b5-4d35-42d4-aec1-2914af0b42b7", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0515 22:49:29.249571  109070 client.go:354] parsed scheme: ""
I0515 22:49:29.249582  109070 client.go:354] scheme "" not registered, fallback to default scheme
I0515 22:49:29.249612  109070 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0515 22:49:29.249818  109070 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0515 22:49:29.250097  109070 watch_cache.go:405] Replace watchCache (rev: 25250) 
I0515 22:49:29.250192  109070 watch_cache.go:405] Replace watchCache (rev: 25250) 
I0515 22:49:29.250228  109070 watch_cache.go:405] Replace watchCache (rev: 25250) 
I0515 22:49:29.250227  109070 watch_cache.go:405] Replace watchCache (rev: 25250) 
I0515 22:49:29.250255  109070 watch_cache.go:405] Replace watchCache (rev: 25250) 
I0515 22:49:29.250264  109070 watch_cache.go:405] Replace watchCache (rev: 25250) 
I0515 22:49:29.250286  109070 watch_cache.go:405] Replace watchCache (rev: 25250) 
I0515 22:49:29.250954  109070 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0515 22:49:29.251836  109070 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0515 22:49:29.251193  109070 watch_cache.go:405] Replace watchCache (rev: 25250) 
I0515 22:49:29.251275  109070 watch_cache.go:405] Replace watchCache (rev: 25250) 
I0515 22:49:29.252743  109070 store.go:1320] Monitoring rolebindings.rbac.authorization.k8s.io count at <storage-prefix>//rolebindings
I0515 22:49:29.252825  109070 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"688a37b5-4d35-42d4-aec1-2914af0b42b7", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0515 22:49:29.252894  109070 reflector.go:160] Listing and watching *rbac.RoleBinding from storage/cacher.go:/rolebindings
I0515 22:49:29.252914  109070 client.go:354] parsed scheme: ""
I0515 22:49:29.252951  109070 client.go:354] scheme "" not registered, fallback to default scheme
I0515 22:49:29.252998  109070 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0515 22:49:29.253068  109070 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0515 22:49:29.254411  109070 watch_cache.go:405] Replace watchCache (rev: 25250) 
I0515 22:49:29.255090  109070 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0515 22:49:29.255926  109070 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0515 22:49:29.256105  109070 store.go:1320] Monitoring clusterroles.rbac.authorization.k8s.io count at <storage-prefix>//clusterroles
I0515 22:49:29.256194  109070 reflector.go:160] Listing and watching *rbac.ClusterRole from storage/cacher.go:/clusterroles
I0515 22:49:29.256294  109070 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"688a37b5-4d35-42d4-aec1-2914af0b42b7", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0515 22:49:29.256390  109070 client.go:354] parsed scheme: ""
I0515 22:49:29.256408  109070 client.go:354] scheme "" not registered, fallback to default scheme
I0515 22:49:29.256442  109070 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0515 22:49:29.256538  109070 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0515 22:49:29.257049  109070 watch_cache.go:405] Replace watchCache (rev: 25250) 
I0515 22:49:29.257399  109070 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0515 22:49:29.257518  109070 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0515 22:49:29.257682  109070 store.go:1320] Monitoring clusterrolebindings.rbac.authorization.k8s.io count at <storage-prefix>//clusterrolebindings
I0515 22:49:29.257778  109070 reflector.go:160] Listing and watching *rbac.ClusterRoleBinding from storage/cacher.go:/clusterrolebindings
I0515 22:49:29.257766  109070 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"688a37b5-4d35-42d4-aec1-2914af0b42b7", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0515 22:49:29.257945  109070 client.go:354] parsed scheme: ""
I0515 22:49:29.257969  109070 client.go:354] scheme "" not registered, fallback to default scheme
I0515 22:49:29.258019  109070 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0515 22:49:29.258080  109070 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0515 22:49:29.258833  109070 watch_cache.go:405] Replace watchCache (rev: 25250) 
I0515 22:49:29.259641  109070 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0515 22:49:29.259684  109070 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0515 22:49:29.260010  109070 store.go:1320] Monitoring roles.rbac.authorization.k8s.io count at <storage-prefix>//roles
I0515 22:49:29.260163  109070 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"688a37b5-4d35-42d4-aec1-2914af0b42b7", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0515 22:49:29.260244  109070 client.go:354] parsed scheme: ""
I0515 22:49:29.260263  109070 client.go:354] scheme "" not registered, fallback to default scheme
I0515 22:49:29.260408  109070 reflector.go:160] Listing and watching *rbac.Role from storage/cacher.go:/roles
I0515 22:49:29.261432  109070 watch_cache.go:405] Replace watchCache (rev: 25250) 
I0515 22:49:29.262214  109070 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0515 22:49:29.262318  109070 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0515 22:49:29.262869  109070 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0515 22:49:29.263162  109070 store.go:1320] Monitoring rolebindings.rbac.authorization.k8s.io count at <storage-prefix>//rolebindings
I0515 22:49:29.263226  109070 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0515 22:49:29.263317  109070 reflector.go:160] Listing and watching *rbac.RoleBinding from storage/cacher.go:/rolebindings
I0515 22:49:29.263351  109070 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"688a37b5-4d35-42d4-aec1-2914af0b42b7", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0515 22:49:29.263513  109070 client.go:354] parsed scheme: ""
I0515 22:49:29.263565  109070 client.go:354] scheme "" not registered, fallback to default scheme
I0515 22:49:29.263650  109070 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0515 22:49:29.264536  109070 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0515 22:49:29.264637  109070 watch_cache.go:405] Replace watchCache (rev: 25250) 
I0515 22:49:29.265688  109070 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0515 22:49:29.265767  109070 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0515 22:49:29.265899  109070 store.go:1320] Monitoring clusterroles.rbac.authorization.k8s.io count at <storage-prefix>//clusterroles
I0515 22:49:29.265978  109070 reflector.go:160] Listing and watching *rbac.ClusterRole from storage/cacher.go:/clusterroles
I0515 22:49:29.266119  109070 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"688a37b5-4d35-42d4-aec1-2914af0b42b7", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0515 22:49:29.266238  109070 client.go:354] parsed scheme: ""
I0515 22:49:29.266253  109070 client.go:354] scheme "" not registered, fallback to default scheme
I0515 22:49:29.266293  109070 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0515 22:49:29.266342  109070 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0515 22:49:29.266674  109070 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0515 22:49:29.266834  109070 store.go:1320] Monitoring clusterrolebindings.rbac.authorization.k8s.io count at <storage-prefix>//clusterrolebindings
I0515 22:49:29.266870  109070 master.go:425] Enabling API group "rbac.authorization.k8s.io".
I0515 22:49:29.267017  109070 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0515 22:49:29.267088  109070 watch_cache.go:405] Replace watchCache (rev: 25250) 
I0515 22:49:29.267209  109070 reflector.go:160] Listing and watching *rbac.ClusterRoleBinding from storage/cacher.go:/clusterrolebindings
I0515 22:49:29.268551  109070 watch_cache.go:405] Replace watchCache (rev: 25250) 
I0515 22:49:29.269557  109070 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"688a37b5-4d35-42d4-aec1-2914af0b42b7", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0515 22:49:29.270271  109070 client.go:354] parsed scheme: ""
I0515 22:49:29.270333  109070 client.go:354] scheme "" not registered, fallback to default scheme
I0515 22:49:29.270416  109070 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0515 22:49:29.270611  109070 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0515 22:49:29.271099  109070 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0515 22:49:29.271230  109070 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0515 22:49:29.271338  109070 store.go:1320] Monitoring priorityclasses.scheduling.k8s.io count at <storage-prefix>//priorityclasses
I0515 22:49:29.271419  109070 reflector.go:160] Listing and watching *scheduling.PriorityClass from storage/cacher.go:/priorityclasses
I0515 22:49:29.271556  109070 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"688a37b5-4d35-42d4-aec1-2914af0b42b7", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0515 22:49:29.271659  109070 client.go:354] parsed scheme: ""
I0515 22:49:29.271677  109070 client.go:354] scheme "" not registered, fallback to default scheme
I0515 22:49:29.271722  109070 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0515 22:49:29.271809  109070 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0515 22:49:29.272166  109070 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0515 22:49:29.272439  109070 watch_cache.go:405] Replace watchCache (rev: 25250) 
I0515 22:49:29.272680  109070 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0515 22:49:29.273131  109070 store.go:1320] Monitoring priorityclasses.scheduling.k8s.io count at <storage-prefix>//priorityclasses
I0515 22:49:29.273227  109070 master.go:425] Enabling API group "scheduling.k8s.io".
I0515 22:49:29.273460  109070 master.go:417] Skipping disabled API group "settings.k8s.io".
I0515 22:49:29.273642  109070 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"688a37b5-4d35-42d4-aec1-2914af0b42b7", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0515 22:49:29.273734  109070 reflector.go:160] Listing and watching *scheduling.PriorityClass from storage/cacher.go:/priorityclasses
I0515 22:49:29.273861  109070 client.go:354] parsed scheme: ""
I0515 22:49:29.273951  109070 client.go:354] scheme "" not registered, fallback to default scheme
I0515 22:49:29.273998  109070 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0515 22:49:29.274063  109070 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0515 22:49:29.274994  109070 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0515 22:49:29.275030  109070 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0515 22:49:29.275121  109070 reflector.go:160] Listing and watching *storage.StorageClass from storage/cacher.go:/storageclasses
I0515 22:49:29.275101  109070 store.go:1320] Monitoring storageclasses.storage.k8s.io count at <storage-prefix>//storageclasses
I0515 22:49:29.275367  109070 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"688a37b5-4d35-42d4-aec1-2914af0b42b7", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0515 22:49:29.275822  109070 client.go:354] parsed scheme: ""
I0515 22:49:29.275841  109070 client.go:354] scheme "" not registered, fallback to default scheme
I0515 22:49:29.275875  109070 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0515 22:49:29.275940  109070 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0515 22:49:29.276072  109070 watch_cache.go:405] Replace watchCache (rev: 25250) 
I0515 22:49:29.276243  109070 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0515 22:49:29.276388  109070 store.go:1320] Monitoring volumeattachments.storage.k8s.io count at <storage-prefix>//volumeattachments
I0515 22:49:29.276471  109070 storage_factory.go:285] storing csinodes.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"688a37b5-4d35-42d4-aec1-2914af0b42b7", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0515 22:49:29.276792  109070 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0515 22:49:29.276890  109070 reflector.go:160] Listing and watching *storage.VolumeAttachment from storage/cacher.go:/volumeattachments
I0515 22:49:29.277002  109070 client.go:354] parsed scheme: ""
I0515 22:49:29.277023  109070 client.go:354] scheme "" not registered, fallback to default scheme
I0515 22:49:29.277062  109070 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0515 22:49:29.277133  109070 watch_cache.go:405] Replace watchCache (rev: 25251) 
I0515 22:49:29.277188  109070 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0515 22:49:29.278056  109070 watch_cache.go:405] Replace watchCache (rev: 25251) 
I0515 22:49:29.278259  109070 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0515 22:49:29.278304  109070 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0515 22:49:29.278380  109070 store.go:1320] Monitoring csinodes.storage.k8s.io count at <storage-prefix>//csinodes
I0515 22:49:29.278428  109070 storage_factory.go:285] storing csidrivers.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"688a37b5-4d35-42d4-aec1-2914af0b42b7", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0515 22:49:29.278577  109070 reflector.go:160] Listing and watching *storage.CSINode from storage/cacher.go:/csinodes
I0515 22:49:29.278605  109070 client.go:354] parsed scheme: ""
I0515 22:49:29.278619  109070 client.go:354] scheme "" not registered, fallback to default scheme
I0515 22:49:29.278655  109070 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0515 22:49:29.278812  109070 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0515 22:49:29.279163  109070 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0515 22:49:29.279246  109070 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0515 22:49:29.279282  109070 store.go:1320] Monitoring csidrivers.storage.k8s.io count at <storage-prefix>//csidrivers
I0515 22:49:29.279319  109070 reflector.go:160] Listing and watching *storage.CSIDriver from storage/cacher.go:/csidrivers
I0515 22:49:29.279520  109070 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"688a37b5-4d35-42d4-aec1-2914af0b42b7", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0515 22:49:29.279701  109070 client.go:354] parsed scheme: ""
I0515 22:49:29.279763  109070 client.go:354] scheme "" not registered, fallback to default scheme
I0515 22:49:29.279863  109070 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0515 22:49:29.279941  109070 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0515 22:49:29.280598  109070 watch_cache.go:405] Replace watchCache (rev: 25251) 
I0515 22:49:29.280768  109070 watch_cache.go:405] Replace watchCache (rev: 25251) 
I0515 22:49:29.281616  109070 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0515 22:49:29.281660  109070 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0515 22:49:29.281764  109070 store.go:1320] Monitoring storageclasses.storage.k8s.io count at <storage-prefix>//storageclasses
I0515 22:49:29.281809  109070 reflector.go:160] Listing and watching *storage.StorageClass from storage/cacher.go:/storageclasses
I0515 22:49:29.281942  109070 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"688a37b5-4d35-42d4-aec1-2914af0b42b7", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0515 22:49:29.282035  109070 client.go:354] parsed scheme: ""
I0515 22:49:29.282056  109070 client.go:354] scheme "" not registered, fallback to default scheme
I0515 22:49:29.282098  109070 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0515 22:49:29.282272  109070 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0515 22:49:29.282607  109070 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0515 22:49:29.282642  109070 watch_cache.go:405] Replace watchCache (rev: 25251) 
I0515 22:49:29.282710  109070 store.go:1320] Monitoring volumeattachments.storage.k8s.io count at <storage-prefix>//volumeattachments
I0515 22:49:29.282790  109070 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0515 22:49:29.282893  109070 reflector.go:160] Listing and watching *storage.VolumeAttachment from storage/cacher.go:/volumeattachments
I0515 22:49:29.282933  109070 master.go:425] Enabling API group "storage.k8s.io".
I0515 22:49:29.283159  109070 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"688a37b5-4d35-42d4-aec1-2914af0b42b7", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0515 22:49:29.283244  109070 client.go:354] parsed scheme: ""
I0515 22:49:29.283282  109070 client.go:354] scheme "" not registered, fallback to default scheme
I0515 22:49:29.283328  109070 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0515 22:49:29.283410  109070 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0515 22:49:29.283923  109070 watch_cache.go:405] Replace watchCache (rev: 25251) 
I0515 22:49:29.283951  109070 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0515 22:49:29.284087  109070 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0515 22:49:29.284259  109070 store.go:1320] Monitoring deployments.apps count at <storage-prefix>//deployments
I0515 22:49:29.284332  109070 reflector.go:160] Listing and watching *apps.Deployment from storage/cacher.go:/deployments
I0515 22:49:29.284537  109070 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"688a37b5-4d35-42d4-aec1-2914af0b42b7", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0515 22:49:29.284665  109070 client.go:354] parsed scheme: ""
I0515 22:49:29.284725  109070 client.go:354] scheme "" not registered, fallback to default scheme
I0515 22:49:29.284794  109070 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0515 22:49:29.284906  109070 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0515 22:49:29.284971  109070 watch_cache.go:405] Replace watchCache (rev: 25251) 
I0515 22:49:29.285376  109070 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0515 22:49:29.285674  109070 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0515 22:49:29.286050  109070 store.go:1320] Monitoring statefulsets.apps count at <storage-prefix>//statefulsets
I0515 22:49:29.286130  109070 reflector.go:160] Listing and watching *apps.StatefulSet from storage/cacher.go:/statefulsets
I0515 22:49:29.286332  109070 storage_factory.go:285] storing controllerrevisions.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"688a37b5-4d35-42d4-aec1-2914af0b42b7", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0515 22:49:29.286427  109070 client.go:354] parsed scheme: ""
I0515 22:49:29.286464  109070 client.go:354] scheme "" not registered, fallback to default scheme
I0515 22:49:29.286577  109070 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0515 22:49:29.286731  109070 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0515 22:49:29.286987  109070 watch_cache.go:405] Replace watchCache (rev: 25251) 
I0515 22:49:29.287474  109070 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0515 22:49:29.287666  109070 store.go:1320] Monitoring controllerrevisions.apps count at <storage-prefix>//controllerrevisions
I0515 22:49:29.287869  109070 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0515 22:49:29.288001  109070 reflector.go:160] Listing and watching *apps.ControllerRevision from storage/cacher.go:/controllerrevisions
I0515 22:49:29.288253  109070 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"688a37b5-4d35-42d4-aec1-2914af0b42b7", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0515 22:49:29.288408  109070 client.go:354] parsed scheme: ""
I0515 22:49:29.288433  109070 client.go:354] scheme "" not registered, fallback to default scheme
I0515 22:49:29.288516  109070 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0515 22:49:29.288577  109070 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0515 22:49:29.288755  109070 watch_cache.go:405] Replace watchCache (rev: 25251) 
I0515 22:49:29.289905  109070 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0515 22:49:29.289957  109070 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0515 22:49:29.290101  109070 store.go:1320] Monitoring deployments.apps count at <storage-prefix>//deployments
I0515 22:49:29.290132  109070 reflector.go:160] Listing and watching *apps.Deployment from storage/cacher.go:/deployments
I0515 22:49:29.290263  109070 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"688a37b5-4d35-42d4-aec1-2914af0b42b7", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0515 22:49:29.290351  109070 client.go:354] parsed scheme: ""
I0515 22:49:29.290369  109070 client.go:354] scheme "" not registered, fallback to default scheme
I0515 22:49:29.290404  109070 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0515 22:49:29.290486  109070 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0515 22:49:29.290832  109070 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0515 22:49:29.290915  109070 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0515 22:49:29.291000  109070 store.go:1320] Monitoring statefulsets.apps count at <storage-prefix>//statefulsets
I0515 22:49:29.291146  109070 reflector.go:160] Listing and watching *apps.StatefulSet from storage/cacher.go:/statefulsets
I0515 22:49:29.291158  109070 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"688a37b5-4d35-42d4-aec1-2914af0b42b7", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0515 22:49:29.291242  109070 client.go:354] parsed scheme: ""
I0515 22:49:29.291266  109070 client.go:354] scheme "" not registered, fallback to default scheme
I0515 22:49:29.291301  109070 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0515 22:49:29.291371  109070 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0515 22:49:29.291806  109070 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0515 22:49:29.292205  109070 store.go:1320] Monitoring daemonsets.apps count at <storage-prefix>//daemonsets
I0515 22:49:29.292340  109070 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0515 22:49:29.292454  109070 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"688a37b5-4d35-42d4-aec1-2914af0b42b7", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0515 22:49:29.292571  109070 client.go:354] parsed scheme: ""
I0515 22:49:29.292616  109070 client.go:354] scheme "" not registered, fallback to default scheme
I0515 22:49:29.292659  109070 watch_cache.go:405] Replace watchCache (rev: 25251) 
I0515 22:49:29.292665  109070 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0515 22:49:29.292584  109070 reflector.go:160] Listing and watching *apps.DaemonSet from storage/cacher.go:/daemonsets
I0515 22:49:29.292787  109070 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0515 22:49:29.293433  109070 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0515 22:49:29.293834  109070 store.go:1320] Monitoring replicasets.apps count at <storage-prefix>//replicasets
I0515 22:49:29.294004  109070 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0515 22:49:29.294102  109070 reflector.go:160] Listing and watching *apps.ReplicaSet from storage/cacher.go:/replicasets
I0515 22:49:29.294114  109070 storage_factory.go:285] storing controllerrevisions.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"688a37b5-4d35-42d4-aec1-2914af0b42b7", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0515 22:49:29.294539  109070 client.go:354] parsed scheme: ""
I0515 22:49:29.294555  109070 client.go:354] scheme "" not registered, fallback to default scheme
I0515 22:49:29.294627  109070 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0515 22:49:29.294795  109070 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0515 22:49:29.295102  109070 watch_cache.go:405] Replace watchCache (rev: 25251) 
I0515 22:49:29.295225  109070 watch_cache.go:405] Replace watchCache (rev: 25251) 
I0515 22:49:29.295410  109070 watch_cache.go:405] Replace watchCache (rev: 25251) 
I0515 22:49:29.296423  109070 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0515 22:49:29.296564  109070 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0515 22:49:29.296728  109070 store.go:1320] Monitoring controllerrevisions.apps count at <storage-prefix>//controllerrevisions
I0515 22:49:29.296765  109070 reflector.go:160] Listing and watching *apps.ControllerRevision from storage/cacher.go:/controllerrevisions
I0515 22:49:29.296931  109070 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"688a37b5-4d35-42d4-aec1-2914af0b42b7", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0515 22:49:29.297032  109070 client.go:354] parsed scheme: ""
I0515 22:49:29.297056  109070 client.go:354] scheme "" not registered, fallback to default scheme
I0515 22:49:29.297096  109070 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0515 22:49:29.297153  109070 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0515 22:49:29.298655  109070 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0515 22:49:29.298838  109070 store.go:1320] Monitoring deployments.apps count at <storage-prefix>//deployments
I0515 22:49:29.298952  109070 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0515 22:49:29.299086  109070 reflector.go:160] Listing and watching *apps.Deployment from storage/cacher.go:/deployments
I0515 22:49:29.299109  109070 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"688a37b5-4d35-42d4-aec1-2914af0b42b7", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0515 22:49:29.299828  109070 watch_cache.go:405] Replace watchCache (rev: 25251) 
I0515 22:49:29.299920  109070 client.go:354] parsed scheme: ""
I0515 22:49:29.299958  109070 client.go:354] scheme "" not registered, fallback to default scheme
I0515 22:49:29.300018  109070 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0515 22:49:29.300086  109070 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0515 22:49:29.300734  109070 watch_cache.go:405] Replace watchCache (rev: 25251) 
I0515 22:49:29.300753  109070 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0515 22:49:29.300879  109070 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0515 22:49:29.301139  109070 store.go:1320] Monitoring statefulsets.apps count at <storage-prefix>//statefulsets
I0515 22:49:29.301196  109070 reflector.go:160] Listing and watching *apps.StatefulSet from storage/cacher.go:/statefulsets
I0515 22:49:29.301343  109070 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"688a37b5-4d35-42d4-aec1-2914af0b42b7", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0515 22:49:29.301476  109070 client.go:354] parsed scheme: ""
I0515 22:49:29.301514  109070 client.go:354] scheme "" not registered, fallback to default scheme
I0515 22:49:29.301552  109070 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0515 22:49:29.301607  109070 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0515 22:49:29.301888  109070 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0515 22:49:29.302170  109070 store.go:1320] Monitoring daemonsets.apps count at <storage-prefix>//daemonsets
I0515 22:49:29.302315  109070 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"688a37b5-4d35-42d4-aec1-2914af0b42b7", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0515 22:49:29.302402  109070 watch_cache.go:405] Replace watchCache (rev: 25251) 
I0515 22:49:29.302462  109070 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0515 22:49:29.302406  109070 client.go:354] parsed scheme: ""
I0515 22:49:29.302524  109070 reflector.go:160] Listing and watching *apps.DaemonSet from storage/cacher.go:/daemonsets
I0515 22:49:29.302530  109070 client.go:354] scheme "" not registered, fallback to default scheme
I0515 22:49:29.303065  109070 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0515 22:49:29.303378  109070 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0515 22:49:29.303897  109070 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0515 22:49:29.304065  109070 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0515 22:49:29.303928  109070 watch_cache.go:405] Replace watchCache (rev: 25251) 
I0515 22:49:29.304222  109070 reflector.go:160] Listing and watching *apps.ReplicaSet from storage/cacher.go:/replicasets
I0515 22:49:29.304166  109070 store.go:1320] Monitoring replicasets.apps count at <storage-prefix>//replicasets
I0515 22:49:29.304555  109070 storage_factory.go:285] storing controllerrevisions.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"688a37b5-4d35-42d4-aec1-2914af0b42b7", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0515 22:49:29.305552  109070 client.go:354] parsed scheme: ""
I0515 22:49:29.305619  109070 client.go:354] scheme "" not registered, fallback to default scheme
I0515 22:49:29.305687  109070 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0515 22:49:29.305750  109070 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0515 22:49:29.305882  109070 watch_cache.go:405] Replace watchCache (rev: 25251) 
I0515 22:49:29.306104  109070 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0515 22:49:29.306162  109070 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0515 22:49:29.306282  109070 store.go:1320] Monitoring controllerrevisions.apps count at <storage-prefix>//controllerrevisions
I0515 22:49:29.306334  109070 master.go:425] Enabling API group "apps".
I0515 22:49:29.306375  109070 reflector.go:160] Listing and watching *apps.ControllerRevision from storage/cacher.go:/controllerrevisions
I0515 22:49:29.306412  109070 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"688a37b5-4d35-42d4-aec1-2914af0b42b7", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0515 22:49:29.306523  109070 client.go:354] parsed scheme: ""
I0515 22:49:29.306545  109070 client.go:354] scheme "" not registered, fallback to default scheme
I0515 22:49:29.306581  109070 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0515 22:49:29.306646  109070 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0515 22:49:29.306896  109070 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0515 22:49:29.307050  109070 store.go:1320] Monitoring validatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//validatingwebhookconfigurations
I0515 22:49:29.307113  109070 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"688a37b5-4d35-42d4-aec1-2914af0b42b7", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0515 22:49:29.307239  109070 client.go:354] parsed scheme: ""
I0515 22:49:29.307290  109070 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0515 22:49:29.307302  109070 client.go:354] scheme "" not registered, fallback to default scheme
I0515 22:49:29.307335  109070 reflector.go:160] Listing and watching *admissionregistration.ValidatingWebhookConfiguration from storage/cacher.go:/validatingwebhookconfigurations
I0515 22:49:29.307414  109070 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0515 22:49:29.307568  109070 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0515 22:49:29.307717  109070 watch_cache.go:405] Replace watchCache (rev: 25251) 
I0515 22:49:29.307870  109070 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0515 22:49:29.308028  109070 store.go:1320] Monitoring mutatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//mutatingwebhookconfigurations
I0515 22:49:29.308054  109070 master.go:425] Enabling API group "admissionregistration.k8s.io".
I0515 22:49:29.308082  109070 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"688a37b5-4d35-42d4-aec1-2914af0b42b7", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0515 22:49:29.308203  109070 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0515 22:49:29.308278  109070 client.go:354] parsed scheme: ""
I0515 22:49:29.308301  109070 client.go:354] scheme "" not registered, fallback to default scheme
I0515 22:49:29.308333  109070 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0515 22:49:29.308379  109070 reflector.go:160] Listing and watching *admissionregistration.MutatingWebhookConfiguration from storage/cacher.go:/mutatingwebhookconfigurations
I0515 22:49:29.308525  109070 watch_cache.go:405] Replace watchCache (rev: 25251) 
I0515 22:49:29.308583  109070 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0515 22:49:29.309198  109070 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0515 22:49:29.309359  109070 watch_cache.go:405] Replace watchCache (rev: 25251) 
I0515 22:49:29.309394  109070 store.go:1320] Monitoring events count at <storage-prefix>//events
I0515 22:49:29.309418  109070 reflector.go:160] Listing and watching *core.Event from storage/cacher.go:/events
I0515 22:49:29.309419  109070 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0515 22:49:29.310457  109070 master.go:425] Enabling API group "events.k8s.io".
I0515 22:49:29.311379  109070 watch_cache.go:405] Replace watchCache (rev: 25251) 
W0515 22:49:29.319827  109070 genericapiserver.go:347] Skipping API batch/v2alpha1 because it has no resources.
W0515 22:49:29.330255  109070 genericapiserver.go:347] Skipping API node.k8s.io/v1alpha1 because it has no resources.
W0515 22:49:29.336675  109070 genericapiserver.go:347] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
W0515 22:49:29.337971  109070 genericapiserver.go:347] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
W0515 22:49:29.341189  109070 genericapiserver.go:347] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
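
Note: the registration lines above repeat one pattern per resource: storage_factory.go builds an etcd-backed store, store.go starts monitoring the object count, and reflector.go lists and watches the resource to fill its watch cache (the "Replace watchCache (rev: N)" lines). The following is only a minimal, self-contained sketch of that list-then-watch idea; the Store and Event types are hypothetical stand-ins, not the kube-apiserver's actual cacher interfaces.

    package main

    import "fmt"

    // Event is a hypothetical watch event; the real apiserver streams these from etcd.
    type Event struct {
        Type string // "ADDED", "MODIFIED", "DELETED"
        Key  string
        Obj  interface{}
    }

    // Store is a hypothetical stand-in for the per-resource watch cache.
    type Store struct {
        resource string
        items    map[string]interface{}
        rev      int64
    }

    // Replace mirrors the "Replace watchCache (rev: N)" step: the initial LIST
    // result swaps the whole cache contents at a single resource version.
    func (s *Store) Replace(objs map[string]interface{}, rev int64) {
        s.items = objs
        s.rev = rev
        fmt.Printf("Replace watchCache for %s (rev: %d)\n", s.resource, rev)
    }

    // Apply folds a single watch event into the cache, as the reflector would
    // after the initial list completes.
    func (s *Store) Apply(e Event) {
        switch e.Type {
        case "ADDED", "MODIFIED":
            s.items[e.Key] = e.Obj
        case "DELETED":
            delete(s.items, e.Key)
        }
    }

    func main() {
        s := &Store{resource: "deployments.apps", items: map[string]interface{}{}}
        // 1. List: populate the cache at a known revision.
        s.Replace(map[string]interface{}{"default/web": struct{}{}}, 25251)
        // 2. Watch: apply incremental events from that revision onward.
        s.Apply(Event{Type: "DELETED", Key: "default/web"})
        fmt.Println("items after watch:", len(s.items))
    }
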
I0515 22:49:29.358732  109070 healthz.go:170] healthz check etcd failed: etcd client connection not yet established
I0515 22:49:29.358791  109070 healthz.go:170] healthz check poststarthook/bootstrap-controller failed: not finished
I0515 22:49:29.358838  109070 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0515 22:49:29.358868  109070 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0515 22:49:29.358887  109070 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0515 22:49:29.358898  109070 healthz.go:184] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[-]poststarthook/bootstrap-controller failed: reason withheld
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0515 22:49:29.359093  109070 wrap.go:47] GET /healthz: (724.081µs) 500
goroutine 36506 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0136f84d0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0136f84d0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc012ff7780, 0x1f4)
net/http.Error(0x7f1da4b9d910, 0xc016b0c328, 0xc00005e1a0, 0x18a, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f1da4b9d910, 0xc016b0c328, 0xc00f700000)
net/http.HandlerFunc.ServeHTTP(0xc012f0cec0, 0x7f1da4b9d910, 0xc016b0c328, 0xc00f700000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0110abe40, 0x7f1da4b9d910, 0xc016b0c328, 0xc00f700000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc017d9f340, 0x7f1da4b9d910, 0xc016b0c328, 0xc00f700000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x438a649, 0xe, 0xc017c3ef30, 0xc017d9f340, 0x7f1da4b9d910, 0xc016b0c328, 0xc00f700000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f1da4b9d910, 0xc016b0c328, 0xc00f700000)
net/http.HandlerFunc.ServeHTTP(0xc017db8180, 0x7f1da4b9d910, 0xc016b0c328, 0xc00f700000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f1da4b9d910, 0xc016b0c328, 0xc00f700000)
net/http.HandlerFunc.ServeHTTP(0xc014951b30, 0x7f1da4b9d910, 0xc016b0c328, 0xc00f700000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f1da4b9d910, 0xc016b0c328, 0xc00f700000)
net/http.HandlerFunc.ServeHTTP(0xc017db81c0, 0x7f1da4b9d910, 0xc016b0c328, 0xc00f700000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f1da4b9d910, 0xc016b0c328, 0xc017a91d00)
net/http.HandlerFunc.ServeHTTP(0xc017d878b0, 0x7f1da4b9d910, 0xc016b0c328, 0xc017a91d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc01180f920, 0xc017da7e20, 0x73aeec0, 0xc016b0c328, 0xc017a91d00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[-]poststarthook/bootstrap-controller failed: reason withheld\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:49390]
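
Note: the 500 above comes from /healthz aggregating individual checks ([+] passing, [-] failing) while etcd and several post-start hooks are still initializing. A test harness would typically poll this endpoint until it returns 200 before driving the test. The sketch below is only illustrative (it is not the integration framework's actual code); the base URL and timeout are placeholder assumptions.

    package main

    import (
        "fmt"
        "io"
        "net/http"
        "time"
    )

    // waitForHealthz polls baseURL+"/healthz" until it returns 200 OK or the
    // timeout expires. On failure the body lists which checks ([-] lines) are
    // still unhealthy, in the same format as the log output above.
    func waitForHealthz(baseURL string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            resp, err := http.Get(baseURL + "/healthz")
            if err == nil {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return nil
                }
                fmt.Printf("healthz not ready (%d):\n%s\n", resp.StatusCode, body)
            }
            time.Sleep(100 * time.Millisecond)
        }
        return fmt.Errorf("healthz did not become ready within %s", timeout)
    }

    func main() {
        // The address is a placeholder for the test apiserver's listen address.
        if err := waitForHealthz("http://127.0.0.1:8080", 5*time.Second); err != nil {
            fmt.Println(err)
        }
    }
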
I0515 22:49:29.360098  109070 wrap.go:47] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.518982ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49392]
I0515 22:49:29.363311  109070 wrap.go:47] GET /api/v1/services: (1.608782ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49392]
I0515 22:49:29.367790  109070 wrap.go:47] GET /api/v1/services: (1.149154ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49392]
I0515 22:49:29.370285  109070 healthz.go:170] healthz check etcd failed: etcd client connection not yet established
I0515 22:49:29.370320  109070 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0515 22:49:29.370334  109070 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0515 22:49:29.370344  109070 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0515 22:49:29.370353  109070 healthz.go:184] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0515 22:49:29.370643  109070 wrap.go:47] GET /healthz: (452.326µs) 500
goroutine 36524 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0136bcb60, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0136bcb60, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc013bb9f00, 0x1f4)
net/http.Error(0x7f1da4b9d910, 0xc011290838, 0xc009df4600, 0x175, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f1da4b9d910, 0xc011290838, 0xc013aac700)
net/http.HandlerFunc.ServeHTTP(0xc012f0cec0, 0x7f1da4b9d910, 0xc011290838, 0xc013aac700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0110abe40, 0x7f1da4b9d910, 0xc011290838, 0xc013aac700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc017d9f340, 0x7f1da4b9d910, 0xc011290838, 0xc013aac700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x438a649, 0xe, 0xc017c3ef30, 0xc017d9f340, 0x7f1da4b9d910, 0xc011290838, 0xc013aac700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f1da4b9d910, 0xc011290838, 0xc013aac700)
net/http.HandlerFunc.ServeHTTP(0xc017db8180, 0x7f1da4b9d910, 0xc011290838, 0xc013aac700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f1da4b9d910, 0xc011290838, 0xc013aac700)
net/http.HandlerFunc.ServeHTTP(0xc014951b30, 0x7f1da4b9d910, 0xc011290838, 0xc013aac700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f1da4b9d910, 0xc011290838, 0xc013aac700)
net/http.HandlerFunc.ServeHTTP(0xc017db81c0, 0x7f1da4b9d910, 0xc011290838, 0xc013aac700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f1da4b9d910, 0xc011290838, 0xc013aac600)
net/http.HandlerFunc.ServeHTTP(0xc017d878b0, 0x7f1da4b9d910, 0xc011290838, 0xc013aac600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc011557620, 0xc017da7e20, 0x73aeec0, 0xc011290838, 0xc013aac600)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49392]
I0515 22:49:29.374858  109070 wrap.go:47] GET /api/v1/services: (3.460446ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49392]
I0515 22:49:29.374858  109070 wrap.go:47] GET /api/v1/services: (2.57019ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49394]
I0515 22:49:29.375996  109070 wrap.go:47] GET /api/v1/namespaces/kube-system: (5.714737ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49390]
I0515 22:49:29.378393  109070 wrap.go:47] POST /api/v1/namespaces: (1.833449ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49394]
I0515 22:49:29.379787  109070 wrap.go:47] GET /api/v1/namespaces/kube-public: (968.036µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49394]
I0515 22:49:29.381548  109070 wrap.go:47] POST /api/v1/namespaces: (1.405921ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49394]
I0515 22:49:29.382803  109070 wrap.go:47] GET /api/v1/namespaces/kube-node-lease: (950.298µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49394]
I0515 22:49:29.384931  109070 wrap.go:47] POST /api/v1/namespaces: (1.677685ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49394]
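(Editor's note, hedged: the three GET-404-then-POST-201 pairs above are the apiserver's bootstrap controller creating the kube-system, kube-public, and kube-node-lease namespaces on first startup. The sketch below shows the same get-or-create pattern from a client's point of view; it assumes a recent client-go API with context-taking Get/Create signatures and is illustrative only, not the bootstrap controller's actual code.)

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// ensureNamespace mirrors the GET-then-POST pattern in the log:
// a 404 on GET is followed by a POST that returns 201.
func ensureNamespace(ctx context.Context, cs kubernetes.Interface, name string) error {
	_, err := cs.CoreV1().Namespaces().Get(ctx, name, metav1.GetOptions{})
	if err == nil {
		return nil // already exists
	}
	if !apierrors.IsNotFound(err) {
		return err
	}
	_, err = cs.CoreV1().Namespaces().Create(ctx, &corev1.Namespace{
		ObjectMeta: metav1.ObjectMeta{Name: name},
	}, metav1.CreateOptions{})
	return err
}

func main() {
	// Assumes a reachable cluster via the default kubeconfig; illustrative only.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	for _, ns := range []string{"kube-system", "kube-public", "kube-node-lease"} {
		if err := ensureNamespace(context.Background(), cs, ns); err != nil {
			panic(err)
		}
		fmt.Println("ensured namespace", ns)
	}
}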
I0515 22:49:29.460160  109070 healthz.go:170] healthz check etcd failed: etcd client connection not yet established
I0515 22:49:29.460205  109070 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0515 22:49:29.460221  109070 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0515 22:49:29.460230  109070 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0515 22:49:29.460238  109070 healthz.go:184] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0515 22:49:29.460486  109070 wrap.go:47] GET /healthz: (497.845µs) 500
goroutine 36543 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc003ef80e0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc003ef80e0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0125b9500, 0x1f4)
net/http.Error(0x7f1da4b9d910, 0xc007178360, 0xc00251a600, 0x175, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f1da4b9d910, 0xc007178360, 0xc00f65ec00)
net/http.HandlerFunc.ServeHTTP(0xc012f0cec0, 0x7f1da4b9d910, 0xc007178360, 0xc00f65ec00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0110abe40, 0x7f1da4b9d910, 0xc007178360, 0xc00f65ec00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc017d9f340, 0x7f1da4b9d910, 0xc007178360, 0xc00f65ec00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x438a649, 0xe, 0xc017c3ef30, 0xc017d9f340, 0x7f1da4b9d910, 0xc007178360, 0xc00f65ec00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f1da4b9d910, 0xc007178360, 0xc00f65ec00)
net/http.HandlerFunc.ServeHTTP(0xc017db8180, 0x7f1da4b9d910, 0xc007178360, 0xc00f65ec00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f1da4b9d910, 0xc007178360, 0xc00f65ec00)
net/http.HandlerFunc.ServeHTTP(0xc014951b30, 0x7f1da4b9d910, 0xc007178360, 0xc00f65ec00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f1da4b9d910, 0xc007178360, 0xc00f65ec00)
net/http.HandlerFunc.ServeHTTP(0xc017db81c0, 0x7f1da4b9d910, 0xc007178360, 0xc00f65ec00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f1da4b9d910, 0xc007178360, 0xc00f65eb00)
net/http.HandlerFunc.ServeHTTP(0xc017d878b0, 0x7f1da4b9d910, 0xc007178360, 0xc00f65eb00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc013b801e0, 0xc017da7e20, 0x73aeec0, 0xc007178360, 0xc00f65eb00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:49394]
I0515 22:49:29.471739  109070 healthz.go:170] healthz check etcd failed: etcd client connection not yet established
I0515 22:49:29.471805  109070 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0515 22:49:29.471822  109070 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0515 22:49:29.471834  109070 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0515 22:49:29.471844  109070 healthz.go:184] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0515 22:49:29.472072  109070 wrap.go:47] GET /healthz: (573.696µs) 500
goroutine 36495 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0179a3880, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0179a3880, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc012780e00, 0x1f4)
net/http.Error(0x7f1da4b9d910, 0xc0183208d8, 0xc013276480, 0x175, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f1da4b9d910, 0xc0183208d8, 0xc00c0be900)
net/http.HandlerFunc.ServeHTTP(0xc012f0cec0, 0x7f1da4b9d910, 0xc0183208d8, 0xc00c0be900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0110abe40, 0x7f1da4b9d910, 0xc0183208d8, 0xc00c0be900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc017d9f340, 0x7f1da4b9d910, 0xc0183208d8, 0xc00c0be900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x438a649, 0xe, 0xc017c3ef30, 0xc017d9f340, 0x7f1da4b9d910, 0xc0183208d8, 0xc00c0be900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f1da4b9d910, 0xc0183208d8, 0xc00c0be900)
net/http.HandlerFunc.ServeHTTP(0xc017db8180, 0x7f1da4b9d910, 0xc0183208d8, 0xc00c0be900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f1da4b9d910, 0xc0183208d8, 0xc00c0be900)
net/http.HandlerFunc.ServeHTTP(0xc014951b30, 0x7f1da4b9d910, 0xc0183208d8, 0xc00c0be900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f1da4b9d910, 0xc0183208d8, 0xc00c0be900)
net/http.HandlerFunc.ServeHTTP(0xc017db81c0, 0x7f1da4b9d910, 0xc0183208d8, 0xc00c0be900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f1da4b9d910, 0xc0183208d8, 0xc00c0be800)
net/http.HandlerFunc.ServeHTTP(0xc017d878b0, 0x7f1da4b9d910, 0xc0183208d8, 0xc00c0be800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc0107744e0, 0xc017da7e20, 0x73aeec0, 0xc0183208d8, 0xc00c0be800)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49394]
I0515 22:49:29.560139  109070 healthz.go:170] healthz check etcd failed: etcd client connection not yet established
I0515 22:49:29.560187  109070 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0515 22:49:29.560200  109070 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0515 22:49:29.560209  109070 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0515 22:49:29.560218  109070 healthz.go:184] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0515 22:49:29.560435  109070 wrap.go:47] GET /healthz: (444.19µs) 500
goroutine 36583 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc004040070, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc004040070, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0125690a0, 0x1f4)
net/http.Error(0x7f1da4b9d910, 0xc0111e74a8, 0xc00f5cef00, 0x175, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f1da4b9d910, 0xc0111e74a8, 0xc00f665100)
net/http.HandlerFunc.ServeHTTP(0xc012f0cec0, 0x7f1da4b9d910, 0xc0111e74a8, 0xc00f665100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0110abe40, 0x7f1da4b9d910, 0xc0111e74a8, 0xc00f665100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc017d9f340, 0x7f1da4b9d910, 0xc0111e74a8, 0xc00f665100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x438a649, 0xe, 0xc017c3ef30, 0xc017d9f340, 0x7f1da4b9d910, 0xc0111e74a8, 0xc00f665100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f1da4b9d910, 0xc0111e74a8, 0xc00f665100)
net/http.HandlerFunc.ServeHTTP(0xc017db8180, 0x7f1da4b9d910, 0xc0111e74a8, 0xc00f665100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f1da4b9d910, 0xc0111e74a8, 0xc00f665100)
net/http.HandlerFunc.ServeHTTP(0xc014951b30, 0x7f1da4b9d910, 0xc0111e74a8, 0xc00f665100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f1da4b9d910, 0xc0111e74a8, 0xc00f665100)
net/http.HandlerFunc.ServeHTTP(0xc017db81c0, 0x7f1da4b9d910, 0xc0111e74a8, 0xc00f665100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f1da4b9d910, 0xc0111e74a8, 0xc00f665000)
net/http.HandlerFunc.ServeHTTP(0xc017d878b0, 0x7f1da4b9d910, 0xc0111e74a8, 0xc00f665000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc010670420, 0xc017da7e20, 0x73aeec0, 0xc0111e74a8, 0xc00f665000)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:49394]
I0515 22:49:29.574768  109070 healthz.go:170] healthz check etcd failed: etcd client connection not yet established
I0515 22:49:29.574794  109070 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0515 22:49:29.574836  109070 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0515 22:49:29.574846  109070 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0515 22:49:29.574852  109070 healthz.go:184] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0515 22:49:29.574999  109070 wrap.go:47] GET /healthz: (331.025µs) 500
goroutine 36497 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0179a39d0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0179a39d0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0127810e0, 0x1f4)
net/http.Error(0x7f1da4b9d910, 0xc0183209e0, 0xc013276c00, 0x175, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f1da4b9d910, 0xc0183209e0, 0xc00c0bf200)
net/http.HandlerFunc.ServeHTTP(0xc012f0cec0, 0x7f1da4b9d910, 0xc0183209e0, 0xc00c0bf200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0110abe40, 0x7f1da4b9d910, 0xc0183209e0, 0xc00c0bf200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc017d9f340, 0x7f1da4b9d910, 0xc0183209e0, 0xc00c0bf200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x438a649, 0xe, 0xc017c3ef30, 0xc017d9f340, 0x7f1da4b9d910, 0xc0183209e0, 0xc00c0bf200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f1da4b9d910, 0xc0183209e0, 0xc00c0bf200)
net/http.HandlerFunc.ServeHTTP(0xc017db8180, 0x7f1da4b9d910, 0xc0183209e0, 0xc00c0bf200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f1da4b9d910, 0xc0183209e0, 0xc00c0bf200)
net/http.HandlerFunc.ServeHTTP(0xc014951b30, 0x7f1da4b9d910, 0xc0183209e0, 0xc00c0bf200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f1da4b9d910, 0xc0183209e0, 0xc00c0bf200)
net/http.HandlerFunc.ServeHTTP(0xc017db81c0, 0x7f1da4b9d910, 0xc0183209e0, 0xc00c0bf200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f1da4b9d910, 0xc0183209e0, 0xc00c0bf100)
net/http.HandlerFunc.ServeHTTP(0xc017d878b0, 0x7f1da4b9d910, 0xc0183209e0, 0xc00c0bf100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc010774ba0, 0xc017da7e20, 0x73aeec0, 0xc0183209e0, 0xc00c0bf100)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49394]
I0515 22:49:29.659974  109070 healthz.go:170] healthz check etcd failed: etcd client connection not yet established
I0515 22:49:29.660016  109070 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0515 22:49:29.660030  109070 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0515 22:49:29.660040  109070 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0515 22:49:29.660049  109070 healthz.go:184] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0515 22:49:29.660215  109070 wrap.go:47] GET /healthz: (387.959µs) 500
goroutine 36600 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0136bd650, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0136bd650, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc012615000, 0x1f4)
net/http.Error(0x7f1da4b9d910, 0xc0112909f0, 0xc009df5080, 0x175, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f1da4b9d910, 0xc0112909f0, 0xc00bf3aa00)
net/http.HandlerFunc.ServeHTTP(0xc012f0cec0, 0x7f1da4b9d910, 0xc0112909f0, 0xc00bf3aa00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0110abe40, 0x7f1da4b9d910, 0xc0112909f0, 0xc00bf3aa00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc017d9f340, 0x7f1da4b9d910, 0xc0112909f0, 0xc00bf3aa00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x438a649, 0xe, 0xc017c3ef30, 0xc017d9f340, 0x7f1da4b9d910, 0xc0112909f0, 0xc00bf3aa00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f1da4b9d910, 0xc0112909f0, 0xc00bf3aa00)
net/http.HandlerFunc.ServeHTTP(0xc017db8180, 0x7f1da4b9d910, 0xc0112909f0, 0xc00bf3aa00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f1da4b9d910, 0xc0112909f0, 0xc00bf3aa00)
net/http.HandlerFunc.ServeHTTP(0xc014951b30, 0x7f1da4b9d910, 0xc0112909f0, 0xc00bf3aa00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f1da4b9d910, 0xc0112909f0, 0xc00bf3aa00)
net/http.HandlerFunc.ServeHTTP(0xc017db81c0, 0x7f1da4b9d910, 0xc0112909f0, 0xc00bf3aa00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f1da4b9d910, 0xc0112909f0, 0xc00bf3a900)
net/http.HandlerFunc.ServeHTTP(0xc017d878b0, 0x7f1da4b9d910, 0xc0112909f0, 0xc00bf3a900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc0107ea600, 0xc017da7e20, 0x73aeec0, 0xc0112909f0, 0xc00bf3a900)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:49394]
I0515 22:49:29.672070  109070 healthz.go:170] healthz check etcd failed: etcd client connection not yet established
I0515 22:49:29.672108  109070 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0515 22:49:29.672121  109070 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0515 22:49:29.672134  109070 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0515 22:49:29.672142  109070 healthz.go:184] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0515 22:49:29.672301  109070 wrap.go:47] GET /healthz: (399.63µs) 500
goroutine 36602 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0136bd810, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0136bd810, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc012615240, 0x1f4)
net/http.Error(0x7f1da4b9d910, 0xc0112909f8, 0xc009df5800, 0x175, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f1da4b9d910, 0xc0112909f8, 0xc00bf3ae00)
net/http.HandlerFunc.ServeHTTP(0xc012f0cec0, 0x7f1da4b9d910, 0xc0112909f8, 0xc00bf3ae00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0110abe40, 0x7f1da4b9d910, 0xc0112909f8, 0xc00bf3ae00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc017d9f340, 0x7f1da4b9d910, 0xc0112909f8, 0xc00bf3ae00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x438a649, 0xe, 0xc017c3ef30, 0xc017d9f340, 0x7f1da4b9d910, 0xc0112909f8, 0xc00bf3ae00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f1da4b9d910, 0xc0112909f8, 0xc00bf3ae00)
net/http.HandlerFunc.ServeHTTP(0xc017db8180, 0x7f1da4b9d910, 0xc0112909f8, 0xc00bf3ae00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f1da4b9d910, 0xc0112909f8, 0xc00bf3ae00)
net/http.HandlerFunc.ServeHTTP(0xc014951b30, 0x7f1da4b9d910, 0xc0112909f8, 0xc00bf3ae00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f1da4b9d910, 0xc0112909f8, 0xc00bf3ae00)
net/http.HandlerFunc.ServeHTTP(0xc017db81c0, 0x7f1da4b9d910, 0xc0112909f8, 0xc00bf3ae00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f1da4b9d910, 0xc0112909f8, 0xc00bf3ad00)
net/http.HandlerFunc.ServeHTTP(0xc017d878b0, 0x7f1da4b9d910, 0xc0112909f8, 0xc00bf3ad00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc0107ea720, 0xc017da7e20, 0x73aeec0, 0xc0112909f8, 0xc00bf3ad00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49394]
I0515 22:49:29.760034  109070 healthz.go:170] healthz check etcd failed: etcd client connection not yet established
I0515 22:49:29.760082  109070 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0515 22:49:29.760107  109070 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0515 22:49:29.760125  109070 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0515 22:49:29.760141  109070 healthz.go:184] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0515 22:49:29.760320  109070 wrap.go:47] GET /healthz: (438.782µs) 500
goroutine 36512 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0136f8a80, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0136f8a80, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc01271ade0, 0x1f4)
net/http.Error(0x7f1da4b9d910, 0xc016b0c3c8, 0xc00e5fe900, 0x175, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f1da4b9d910, 0xc016b0c3c8, 0xc00f700b00)
net/http.HandlerFunc.ServeHTTP(0xc012f0cec0, 0x7f1da4b9d910, 0xc016b0c3c8, 0xc00f700b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0110abe40, 0x7f1da4b9d910, 0xc016b0c3c8, 0xc00f700b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc017d9f340, 0x7f1da4b9d910, 0xc016b0c3c8, 0xc00f700b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x438a649, 0xe, 0xc017c3ef30, 0xc017d9f340, 0x7f1da4b9d910, 0xc016b0c3c8, 0xc00f700b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f1da4b9d910, 0xc016b0c3c8, 0xc00f700b00)
net/http.HandlerFunc.ServeHTTP(0xc017db8180, 0x7f1da4b9d910, 0xc016b0c3c8, 0xc00f700b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f1da4b9d910, 0xc016b0c3c8, 0xc00f700b00)
net/http.HandlerFunc.ServeHTTP(0xc014951b30, 0x7f1da4b9d910, 0xc016b0c3c8, 0xc00f700b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f1da4b9d910, 0xc016b0c3c8, 0xc00f700b00)
net/http.HandlerFunc.ServeHTTP(0xc017db81c0, 0x7f1da4b9d910, 0xc016b0c3c8, 0xc00f700b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f1da4b9d910, 0xc016b0c3c8, 0xc00f700a00)
net/http.HandlerFunc.ServeHTTP(0xc017d878b0, 0x7f1da4b9d910, 0xc016b0c3c8, 0xc00f700a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc01063e240, 0xc017da7e20, 0x73aeec0, 0xc016b0c3c8, 0xc00f700a00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:49394]
I0515 22:49:29.771729  109070 healthz.go:170] healthz check etcd failed: etcd client connection not yet established
I0515 22:49:29.771775  109070 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0515 22:49:29.771789  109070 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0515 22:49:29.771800  109070 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0515 22:49:29.771809  109070 healthz.go:184] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0515 22:49:29.772015  109070 wrap.go:47] GET /healthz: (445.894µs) 500
goroutine 36610 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0136f8bd0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0136f8bd0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc01271b360, 0x1f4)
net/http.Error(0x7f1da4b9d910, 0xc016b0c3f0, 0xc00e5ff380, 0x175, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f1da4b9d910, 0xc016b0c3f0, 0xc00f701200)
net/http.HandlerFunc.ServeHTTP(0xc012f0cec0, 0x7f1da4b9d910, 0xc016b0c3f0, 0xc00f701200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0110abe40, 0x7f1da4b9d910, 0xc016b0c3f0, 0xc00f701200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc017d9f340, 0x7f1da4b9d910, 0xc016b0c3f0, 0xc00f701200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x438a649, 0xe, 0xc017c3ef30, 0xc017d9f340, 0x7f1da4b9d910, 0xc016b0c3f0, 0xc00f701200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f1da4b9d910, 0xc016b0c3f0, 0xc00f701200)
net/http.HandlerFunc.ServeHTTP(0xc017db8180, 0x7f1da4b9d910, 0xc016b0c3f0, 0xc00f701200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f1da4b9d910, 0xc016b0c3f0, 0xc00f701200)
net/http.HandlerFunc.ServeHTTP(0xc014951b30, 0x7f1da4b9d910, 0xc016b0c3f0, 0xc00f701200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f1da4b9d910, 0xc016b0c3f0, 0xc00f701200)
net/http.HandlerFunc.ServeHTTP(0xc017db81c0, 0x7f1da4b9d910, 0xc016b0c3f0, 0xc00f701200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f1da4b9d910, 0xc016b0c3f0, 0xc00f701100)
net/http.HandlerFunc.ServeHTTP(0xc017d878b0, 0x7f1da4b9d910, 0xc016b0c3f0, 0xc00f701100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc01063e480, 0xc017da7e20, 0x73aeec0, 0xc016b0c3f0, 0xc00f701100)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49394]
I0515 22:49:29.860037  109070 healthz.go:170] healthz check etcd failed: etcd client connection not yet established
I0515 22:49:29.860085  109070 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0515 22:49:29.860100  109070 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0515 22:49:29.860110  109070 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0515 22:49:29.860121  109070 healthz.go:184] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0515 22:49:29.860298  109070 wrap.go:47] GET /healthz: (408.916µs) 500
goroutine 36627 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0179a3b90, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0179a3b90, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc012781660, 0x1f4)
net/http.Error(0x7f1da4b9d910, 0xc018320b58, 0xc013277200, 0x175, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f1da4b9d910, 0xc018320b58, 0xc00c0bfe00)
net/http.HandlerFunc.ServeHTTP(0xc012f0cec0, 0x7f1da4b9d910, 0xc018320b58, 0xc00c0bfe00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0110abe40, 0x7f1da4b9d910, 0xc018320b58, 0xc00c0bfe00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc017d9f340, 0x7f1da4b9d910, 0xc018320b58, 0xc00c0bfe00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x438a649, 0xe, 0xc017c3ef30, 0xc017d9f340, 0x7f1da4b9d910, 0xc018320b58, 0xc00c0bfe00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f1da4b9d910, 0xc018320b58, 0xc00c0bfe00)
net/http.HandlerFunc.ServeHTTP(0xc017db8180, 0x7f1da4b9d910, 0xc018320b58, 0xc00c0bfe00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f1da4b9d910, 0xc018320b58, 0xc00c0bfe00)
net/http.HandlerFunc.ServeHTTP(0xc014951b30, 0x7f1da4b9d910, 0xc018320b58, 0xc00c0bfe00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f1da4b9d910, 0xc018320b58, 0xc00c0bfe00)
net/http.HandlerFunc.ServeHTTP(0xc017db81c0, 0x7f1da4b9d910, 0xc018320b58, 0xc00c0bfe00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f1da4b9d910, 0xc018320b58, 0xc00c0bfd00)
net/http.HandlerFunc.ServeHTTP(0xc017d878b0, 0x7f1da4b9d910, 0xc018320b58, 0xc00c0bfd00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc0104dc720, 0xc017da7e20, 0x73aeec0, 0xc018320b58, 0xc00c0bfd00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:49394]
I0515 22:49:29.871691  109070 healthz.go:170] healthz check etcd failed: etcd client connection not yet established
I0515 22:49:29.871740  109070 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0515 22:49:29.871753  109070 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0515 22:49:29.871762  109070 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0515 22:49:29.871770  109070 healthz.go:184] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0515 22:49:29.871937  109070 wrap.go:47] GET /healthz: (382.173µs) 500
goroutine 36629 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0179a3ce0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0179a3ce0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc012781760, 0x1f4)
net/http.Error(0x7f1da4b9d910, 0xc018320bb8, 0xc013277800, 0x175, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f1da4b9d910, 0xc018320bb8, 0xc00bd1ea00)
net/http.HandlerFunc.ServeHTTP(0xc012f0cec0, 0x7f1da4b9d910, 0xc018320bb8, 0xc00bd1ea00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0110abe40, 0x7f1da4b9d910, 0xc018320bb8, 0xc00bd1ea00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc017d9f340, 0x7f1da4b9d910, 0xc018320bb8, 0xc00bd1ea00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x438a649, 0xe, 0xc017c3ef30, 0xc017d9f340, 0x7f1da4b9d910, 0xc018320bb8, 0xc00bd1ea00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f1da4b9d910, 0xc018320bb8, 0xc00bd1ea00)
net/http.HandlerFunc.ServeHTTP(0xc017db8180, 0x7f1da4b9d910, 0xc018320bb8, 0xc00bd1ea00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f1da4b9d910, 0xc018320bb8, 0xc00bd1ea00)
net/http.HandlerFunc.ServeHTTP(0xc014951b30, 0x7f1da4b9d910, 0xc018320bb8, 0xc00bd1ea00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f1da4b9d910, 0xc018320bb8, 0xc00bd1ea00)
net/http.HandlerFunc.ServeHTTP(0xc017db81c0, 0x7f1da4b9d910, 0xc018320bb8, 0xc00bd1ea00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f1da4b9d910, 0xc018320bb8, 0xc00bd1e900)
net/http.HandlerFunc.ServeHTTP(0xc017d878b0, 0x7f1da4b9d910, 0xc018320bb8, 0xc00bd1e900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc0104dcb40, 0xc017da7e20, 0x73aeec0, 0xc018320bb8, 0xc00bd1e900)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49394]
I0515 22:49:29.960131  109070 healthz.go:170] healthz check etcd failed: etcd client connection not yet established
I0515 22:49:29.960172  109070 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0515 22:49:29.960186  109070 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0515 22:49:29.960197  109070 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0515 22:49:29.960206  109070 healthz.go:184] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0515 22:49:29.960366  109070 wrap.go:47] GET /healthz: (380.311µs) 500
goroutine 36631 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0179a3e30, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0179a3e30, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc012781860, 0x1f4)
net/http.Error(0x7f1da4b9d910, 0xc018320c10, 0xc011bc2000, 0x175, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f1da4b9d910, 0xc018320c10, 0xc00bd1f000)
net/http.HandlerFunc.ServeHTTP(0xc012f0cec0, 0x7f1da4b9d910, 0xc018320c10, 0xc00bd1f000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0110abe40, 0x7f1da4b9d910, 0xc018320c10, 0xc00bd1f000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc017d9f340, 0x7f1da4b9d910, 0xc018320c10, 0xc00bd1f000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x438a649, 0xe, 0xc017c3ef30, 0xc017d9f340, 0x7f1da4b9d910, 0xc018320c10, 0xc00bd1f000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f1da4b9d910, 0xc018320c10, 0xc00bd1f000)
net/http.HandlerFunc.ServeHTTP(0xc017db8180, 0x7f1da4b9d910, 0xc018320c10, 0xc00bd1f000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f1da4b9d910, 0xc018320c10, 0xc00bd1f000)
net/http.HandlerFunc.ServeHTTP(0xc014951b30, 0x7f1da4b9d910, 0xc018320c10, 0xc00bd1f000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f1da4b9d910, 0xc018320c10, 0xc00bd1f000)
net/http.HandlerFunc.ServeHTTP(0xc017db81c0, 0x7f1da4b9d910, 0xc018320c10, 0xc00bd1f000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f1da4b9d910, 0xc018320c10, 0xc00bd1ef00)
net/http.HandlerFunc.ServeHTTP(0xc017d878b0, 0x7f1da4b9d910, 0xc018320c10, 0xc00bd1ef00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc0104dccc0, 0xc017da7e20, 0x73aeec0, 0xc018320c10, 0xc00bd1ef00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:49394]
I0515 22:49:29.971573  109070 healthz.go:170] healthz check etcd failed: etcd client connection not yet established
I0515 22:49:29.971612  109070 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0515 22:49:29.971625  109070 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0515 22:49:29.971635  109070 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0515 22:49:29.971644  109070 healthz.go:184] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0515 22:49:29.971851  109070 wrap.go:47] GET /healthz: (463.093µs) 500
goroutine 36604 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0136bd9d0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0136bd9d0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0126157c0, 0x1f4)
net/http.Error(0x7f1da4b9d910, 0xc011290a20, 0xc015e3c000, 0x175, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f1da4b9d910, 0xc011290a20, 0xc00bf3b300)
net/http.HandlerFunc.ServeHTTP(0xc012f0cec0, 0x7f1da4b9d910, 0xc011290a20, 0xc00bf3b300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0110abe40, 0x7f1da4b9d910, 0xc011290a20, 0xc00bf3b300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc017d9f340, 0x7f1da4b9d910, 0xc011290a20, 0xc00bf3b300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x438a649, 0xe, 0xc017c3ef30, 0xc017d9f340, 0x7f1da4b9d910, 0xc011290a20, 0xc00bf3b300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f1da4b9d910, 0xc011290a20, 0xc00bf3b300)
net/http.HandlerFunc.ServeHTTP(0xc017db8180, 0x7f1da4b9d910, 0xc011290a20, 0xc00bf3b300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f1da4b9d910, 0xc011290a20, 0xc00bf3b300)
net/http.HandlerFunc.ServeHTTP(0xc014951b30, 0x7f1da4b9d910, 0xc011290a20, 0xc00bf3b300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f1da4b9d910, 0xc011290a20, 0xc00bf3b300)
net/http.HandlerFunc.ServeHTTP(0xc017db81c0, 0x7f1da4b9d910, 0xc011290a20, 0xc00bf3b300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f1da4b9d910, 0xc011290a20, 0xc00bf3b200)
net/http.HandlerFunc.ServeHTTP(0xc017d878b0, 0x7f1da4b9d910, 0xc011290a20, 0xc00bf3b200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc0107eaae0, 0xc017da7e20, 0x73aeec0, 0xc011290a20, 0xc00bf3b200)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49394]
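(Editor's note, hedged: up to this point every GET /healthz returns 500 because the etcd client connection is not yet established and the rbac/bootstrap-roles, scheduling/bootstrap-system-priority-classes, and ca-registration post-start hooks have not finished; the response body lists each check as [+] ok or [-] failed with the reason withheld. The lines that follow show the etcd client being wired up, after which the etcd check flips to [+] while the post-start hooks still report failed. As an illustration only, and not the integration test's actual wait loop, a readiness poll against such an endpoint could look like the stdlib-only sketch below; the URL, timeout, and poll interval are assumptions.)

package main

import (
	"bufio"
	"fmt"
	"net/http"
	"strings"
	"time"
)

// waitHealthy polls a /healthz endpoint until it returns 200, printing any
// [-] lines from the body so the still-failing checks (for example "etcd" or
// "poststarthook/ca-registration") stay visible while waiting.
func waitHealthy(url string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := http.Get(url)
		if err == nil {
			if resp.StatusCode == http.StatusOK {
				resp.Body.Close()
				return nil
			}
			sc := bufio.NewScanner(resp.Body)
			for sc.Scan() {
				if strings.HasPrefix(sc.Text(), "[-]") {
					fmt.Println("still failing:", sc.Text())
				}
			}
			resp.Body.Close()
		}
		time.Sleep(100 * time.Millisecond) // assumed poll interval
	}
	return fmt.Errorf("healthz did not become ready within %v", timeout)
}

func main() {
	// Assumed apiserver address; adjust for the server under test.
	if err := waitHealthy("http://127.0.0.1:8080/healthz", 30*time.Second); err != nil {
		panic(err)
	}
	fmt.Println("healthz ok")
}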
I0515 22:49:30.044914  109070 client.go:354] parsed scheme: ""
I0515 22:49:30.044962  109070 client.go:354] scheme "" not registered, fallback to default scheme
I0515 22:49:30.045047  109070 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0515 22:49:30.045146  109070 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0515 22:49:30.045706  109070 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0515 22:49:30.045785  109070 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0515 22:49:30.061137  109070 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0515 22:49:30.061170  109070 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0515 22:49:30.061182  109070 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0515 22:49:30.061190  109070 healthz.go:184] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0515 22:49:30.061369  109070 wrap.go:47] GET /healthz: (1.481588ms) 500
goroutine 36545 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc003ef8150, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc003ef8150, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0125b96a0, 0x1f4)
net/http.Error(0x7f1da4b9d910, 0xc0071783b0, 0xc0105b2f20, 0x160, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f1da4b9d910, 0xc0071783b0, 0xc00f65f600)
net/http.HandlerFunc.ServeHTTP(0xc012f0cec0, 0x7f1da4b9d910, 0xc0071783b0, 0xc00f65f600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0110abe40, 0x7f1da4b9d910, 0xc0071783b0, 0xc00f65f600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc017d9f340, 0x7f1da4b9d910, 0xc0071783b0, 0xc00f65f600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x438a649, 0xe, 0xc017c3ef30, 0xc017d9f340, 0x7f1da4b9d910, 0xc0071783b0, 0xc00f65f600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f1da4b9d910, 0xc0071783b0, 0xc00f65f600)
net/http.HandlerFunc.ServeHTTP(0xc017db8180, 0x7f1da4b9d910, 0xc0071783b0, 0xc00f65f600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f1da4b9d910, 0xc0071783b0, 0xc00f65f600)
net/http.HandlerFunc.ServeHTTP(0xc014951b30, 0x7f1da4b9d910, 0xc0071783b0, 0xc00f65f600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f1da4b9d910, 0xc0071783b0, 0xc00f65f600)
net/http.HandlerFunc.ServeHTTP(0xc017db81c0, 0x7f1da4b9d910, 0xc0071783b0, 0xc00f65f600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f1da4b9d910, 0xc0071783b0, 0xc00f65f500)
net/http.HandlerFunc.ServeHTTP(0xc017d878b0, 0x7f1da4b9d910, 0xc0071783b0, 0xc00f65f500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc013b80420, 0xc017da7e20, 0x73aeec0, 0xc0071783b0, 0xc00f65f500)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:49394]
I0515 22:49:30.073253  109070 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0515 22:49:30.073291  109070 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0515 22:49:30.073303  109070 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0515 22:49:30.073314  109070 healthz.go:184] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0515 22:49:30.073572  109070 wrap.go:47] GET /healthz: (2.147106ms) 500
goroutine 36606 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0136bdb20, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0136bdb20, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0126158c0, 0x1f4)
net/http.Error(0x7f1da4b9d910, 0xc011290a48, 0xc01045a2c0, 0x160, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f1da4b9d910, 0xc011290a48, 0xc00bf3ba00)
net/http.HandlerFunc.ServeHTTP(0xc012f0cec0, 0x7f1da4b9d910, 0xc011290a48, 0xc00bf3ba00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0110abe40, 0x7f1da4b9d910, 0xc011290a48, 0xc00bf3ba00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc017d9f340, 0x7f1da4b9d910, 0xc011290a48, 0xc00bf3ba00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x438a649, 0xe, 0xc017c3ef30, 0xc017d9f340, 0x7f1da4b9d910, 0xc011290a48, 0xc00bf3ba00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f1da4b9d910, 0xc011290a48, 0xc00bf3ba00)
net/http.HandlerFunc.ServeHTTP(0xc017db8180, 0x7f1da4b9d910, 0xc011290a48, 0xc00bf3ba00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f1da4b9d910, 0xc011290a48, 0xc00bf3ba00)
net/http.HandlerFunc.ServeHTTP(0xc014951b30, 0x7f1da4b9d910, 0xc011290a48, 0xc00bf3ba00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f1da4b9d910, 0xc011290a48, 0xc00bf3ba00)
net/http.HandlerFunc.ServeHTTP(0xc017db81c0, 0x7f1da4b9d910, 0xc011290a48, 0xc00bf3ba00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f1da4b9d910, 0xc011290a48, 0xc00bf3b900)
net/http.HandlerFunc.ServeHTTP(0xc017d878b0, 0x7f1da4b9d910, 0xc011290a48, 0xc00bf3b900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc0107eb3e0, 0xc017da7e20, 0x73aeec0, 0xc011290a48, 0xc00bf3b900)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49394]
I0515 22:49:30.161646  109070 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0515 22:49:30.161687  109070 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0515 22:49:30.161700  109070 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0515 22:49:30.161710  109070 healthz.go:184] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0515 22:49:30.161874  109070 wrap.go:47] GET /healthz: (2.021779ms) 500
goroutine 36644 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0049a0150, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0049a0150, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc01251c880, 0x1f4)
net/http.Error(0x7f1da4b9d910, 0xc018320d60, 0xc01045a6e0, 0x160, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f1da4b9d910, 0xc018320d60, 0xc00bd1f900)
net/http.HandlerFunc.ServeHTTP(0xc012f0cec0, 0x7f1da4b9d910, 0xc018320d60, 0xc00bd1f900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0110abe40, 0x7f1da4b9d910, 0xc018320d60, 0xc00bd1f900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc017d9f340, 0x7f1da4b9d910, 0xc018320d60, 0xc00bd1f900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x438a649, 0xe, 0xc017c3ef30, 0xc017d9f340, 0x7f1da4b9d910, 0xc018320d60, 0xc00bd1f900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f1da4b9d910, 0xc018320d60, 0xc00bd1f900)
net/http.HandlerFunc.ServeHTTP(0xc017db8180, 0x7f1da4b9d910, 0xc018320d60, 0xc00bd1f900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f1da4b9d910, 0xc018320d60, 0xc00bd1f900)
net/http.HandlerFunc.ServeHTTP(0xc014951b30, 0x7f1da4b9d910, 0xc018320d60, 0xc00bd1f900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f1da4b9d910, 0xc018320d60, 0xc00bd1f900)
net/http.HandlerFunc.ServeHTTP(0xc017db81c0, 0x7f1da4b9d910, 0xc018320d60, 0xc00bd1f900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f1da4b9d910, 0xc018320d60, 0xc00bd1f800)
net/http.HandlerFunc.ServeHTTP(0xc017d878b0, 0x7f1da4b9d910, 0xc018320d60, 0xc00bd1f800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc01033a180, 0xc017da7e20, 0x73aeec0, 0xc018320d60, 0xc00bd1f800)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:49394]
I0515 22:49:30.173390  109070 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0515 22:49:30.173429  109070 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0515 22:49:30.173440  109070 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0515 22:49:30.173475  109070 healthz.go:184] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0515 22:49:30.173735  109070 wrap.go:47] GET /healthz: (2.314274ms) 500
goroutine 36608 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0136bdc00, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0136bdc00, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc012615b60, 0x1f4)
net/http.Error(0x7f1da4b9d910, 0xc011290a58, 0xc00f8962c0, 0x160, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f1da4b9d910, 0xc011290a58, 0xc00bf3be00)
net/http.HandlerFunc.ServeHTTP(0xc012f0cec0, 0x7f1da4b9d910, 0xc011290a58, 0xc00bf3be00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0110abe40, 0x7f1da4b9d910, 0xc011290a58, 0xc00bf3be00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc017d9f340, 0x7f1da4b9d910, 0xc011290a58, 0xc00bf3be00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x438a649, 0xe, 0xc017c3ef30, 0xc017d9f340, 0x7f1da4b9d910, 0xc011290a58, 0xc00bf3be00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f1da4b9d910, 0xc011290a58, 0xc00bf3be00)
net/http.HandlerFunc.ServeHTTP(0xc017db8180, 0x7f1da4b9d910, 0xc011290a58, 0xc00bf3be00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f1da4b9d910, 0xc011290a58, 0xc00bf3be00)
net/http.HandlerFunc.ServeHTTP(0xc014951b30, 0x7f1da4b9d910, 0xc011290a58, 0xc00bf3be00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f1da4b9d910, 0xc011290a58, 0xc00bf3be00)
net/http.HandlerFunc.ServeHTTP(0xc017db81c0, 0x7f1da4b9d910, 0xc011290a58, 0xc00bf3be00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f1da4b9d910, 0xc011290a58, 0xc00bf3bd00)
net/http.HandlerFunc.ServeHTTP(0xc017d878b0, 0x7f1da4b9d910, 0xc011290a58, 0xc00bf3bd00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc0107ebce0, 0xc017da7e20, 0x73aeec0, 0xc011290a58, 0xc00bf3bd00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49394]
I0515 22:49:30.260868  109070 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0515 22:49:30.260910  109070 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0515 22:49:30.260922  109070 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0515 22:49:30.260930  109070 healthz.go:184] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0515 22:49:30.261115  109070 wrap.go:47] GET /healthz: (1.228922ms) 500
goroutine 36585 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0040402a0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0040402a0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc012569bc0, 0x1f4)
net/http.Error(0x7f1da4b9d910, 0xc0111e7528, 0xc01045ac60, 0x160, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f1da4b9d910, 0xc0111e7528, 0xc00f665f00)
net/http.HandlerFunc.ServeHTTP(0xc012f0cec0, 0x7f1da4b9d910, 0xc0111e7528, 0xc00f665f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0110abe40, 0x7f1da4b9d910, 0xc0111e7528, 0xc00f665f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc017d9f340, 0x7f1da4b9d910, 0xc0111e7528, 0xc00f665f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x438a649, 0xe, 0xc017c3ef30, 0xc017d9f340, 0x7f1da4b9d910, 0xc0111e7528, 0xc00f665f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f1da4b9d910, 0xc0111e7528, 0xc00f665f00)
net/http.HandlerFunc.ServeHTTP(0xc017db8180, 0x7f1da4b9d910, 0xc0111e7528, 0xc00f665f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f1da4b9d910, 0xc0111e7528, 0xc00f665f00)
net/http.HandlerFunc.ServeHTTP(0xc014951b30, 0x7f1da4b9d910, 0xc0111e7528, 0xc00f665f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f1da4b9d910, 0xc0111e7528, 0xc00f665f00)
net/http.HandlerFunc.ServeHTTP(0xc017db81c0, 0x7f1da4b9d910, 0xc0111e7528, 0xc00f665f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f1da4b9d910, 0xc0111e7528, 0xc00f665d00)
net/http.HandlerFunc.ServeHTTP(0xc017d878b0, 0x7f1da4b9d910, 0xc0111e7528, 0xc00f665d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00f800300, 0xc017da7e20, 0x73aeec0, 0xc0111e7528, 0xc00f665d00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:49394]
I0515 22:49:30.279844  109070 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0515 22:49:30.279882  109070 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0515 22:49:30.279894  109070 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0515 22:49:30.279903  109070 healthz.go:184] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0515 22:49:30.280085  109070 wrap.go:47] GET /healthz: (1.239806ms) 500
goroutine 36646 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0049a0460, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0049a0460, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc01251d780, 0x1f4)
net/http.Error(0x7f1da4b9d910, 0xc018320dd0, 0xc01045b080, 0x160, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f1da4b9d910, 0xc018320dd0, 0xc00bd1ff00)
net/http.HandlerFunc.ServeHTTP(0xc012f0cec0, 0x7f1da4b9d910, 0xc018320dd0, 0xc00bd1ff00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0110abe40, 0x7f1da4b9d910, 0xc018320dd0, 0xc00bd1ff00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc017d9f340, 0x7f1da4b9d910, 0xc018320dd0, 0xc00bd1ff00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x438a649, 0xe, 0xc017c3ef30, 0xc017d9f340, 0x7f1da4b9d910, 0xc018320dd0, 0xc00bd1ff00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f1da4b9d910, 0xc018320dd0, 0xc00bd1ff00)
net/http.HandlerFunc.ServeHTTP(0xc017db8180, 0x7f1da4b9d910, 0xc018320dd0, 0xc00bd1ff00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f1da4b9d910, 0xc018320dd0, 0xc00bd1ff00)
net/http.HandlerFunc.ServeHTTP(0xc014951b30, 0x7f1da4b9d910, 0xc018320dd0, 0xc00bd1ff00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f1da4b9d910, 0xc018320dd0, 0xc00bd1ff00)
net/http.HandlerFunc.ServeHTTP(0xc017db81c0, 0x7f1da4b9d910, 0xc018320dd0, 0xc00bd1ff00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f1da4b9d910, 0xc018320dd0, 0xc00bd1fe00)
net/http.HandlerFunc.ServeHTTP(0xc017d878b0, 0x7f1da4b9d910, 0xc018320dd0, 0xc00bd1fe00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc01033a4e0, 0xc017da7e20, 0x73aeec0, 0xc018320dd0, 0xc00bd1fe00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49394]
E0515 22:49:30.303754  109070 event.go:249] Unable to write event: 'Patch http://127.0.0.1:32979/api/v1/namespaces/permit-plugin0bc52f0e-adde-4a94-9004-89ef0a977a19/events/test-pod.159efcd7cf9116b7: dial tcp 127.0.0.1:32979: connect: connection refused' (may retry after sleeping)
I0515 22:49:30.361398  109070 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.788697ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49392]
I0515 22:49:30.361466  109070 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0515 22:49:30.361489  109070 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0515 22:49:30.361526  109070 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0515 22:49:30.361536  109070 healthz.go:184] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0515 22:49:30.361723  109070 wrap.go:47] GET /healthz: (1.620152ms) 500
goroutine 36374 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00e877810, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00e877810, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc01243c060, 0x1f4)
net/http.Error(0x7f1da4b9d910, 0xc017440230, 0xc00e57a840, 0x160, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f1da4b9d910, 0xc017440230, 0xc00e81f600)
net/http.HandlerFunc.ServeHTTP(0xc012f0cec0, 0x7f1da4b9d910, 0xc017440230, 0xc00e81f600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0110abe40, 0x7f1da4b9d910, 0xc017440230, 0xc00e81f600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc017d9f340, 0x7f1da4b9d910, 0xc017440230, 0xc00e81f600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x438a649, 0xe, 0xc017c3ef30, 0xc017d9f340, 0x7f1da4b9d910, 0xc017440230, 0xc00e81f600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f1da4b9d910, 0xc017440230, 0xc00e81f600)
net/http.HandlerFunc.ServeHTTP(0xc017db8180, 0x7f1da4b9d910, 0xc017440230, 0xc00e81f600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f1da4b9d910, 0xc017440230, 0xc00e81f600)
net/http.HandlerFunc.ServeHTTP(0xc014951b30, 0x7f1da4b9d910, 0xc017440230, 0xc00e81f600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f1da4b9d910, 0xc017440230, 0xc00e81f600)
net/http.HandlerFunc.ServeHTTP(0xc017db81c0, 0x7f1da4b9d910, 0xc017440230, 0xc00e81f600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f1da4b9d910, 0xc017440230, 0xc00e81f500)
net/http.HandlerFunc.ServeHTTP(0xc017d878b0, 0x7f1da4b9d910, 0xc017440230, 0xc00e81f500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc011d53680, 0xc017da7e20, 0x73aeec0, 0xc017440230, 0xc00e81f500)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:49568]
I0515 22:49:30.361782  109070 wrap.go:47] GET /api/v1/namespaces/kube-system: (2.708987ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49566]
I0515 22:49:30.362237  109070 wrap.go:47] GET /apis/scheduling.k8s.io/v1beta1/priorityclasses/system-node-critical: (1.804165ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49394]
I0515 22:49:30.363663  109070 wrap.go:47] GET /api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication: (1.493019ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49568]
I0515 22:49:30.364652  109070 wrap.go:47] POST /apis/scheduling.k8s.io/v1beta1/priorityclasses: (1.700799ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49394]
I0515 22:49:30.364954  109070 storage_scheduling.go:119] created PriorityClass system-node-critical with value 2000001000
I0515 22:49:30.365709  109070 wrap.go:47] POST /api/v1/namespaces/kube-system/configmaps: (1.682925ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49568]
I0515 22:49:30.366041  109070 wrap.go:47] GET /apis/scheduling.k8s.io/v1beta1/priorityclasses/system-cluster-critical: (901.399µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49394]
I0515 22:49:30.366913  109070 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (4.577652ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49392]
I0515 22:49:30.368117  109070 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-admin: (909.506µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49392]
I0515 22:49:30.368257  109070 wrap.go:47] POST /apis/scheduling.k8s.io/v1beta1/priorityclasses: (1.264093ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49394]
I0515 22:49:30.368618  109070 storage_scheduling.go:119] created PriorityClass system-cluster-critical with value 2000000000
I0515 22:49:30.368643  109070 storage_scheduling.go:128] all system priority classes are created successfully or already exist.
I0515 22:49:30.369274  109070 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/admin: (795.293µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49392]
I0515 22:49:30.370718  109070 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-edit: (971.024µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49394]
I0515 22:49:30.371991  109070 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0515 22:49:30.372004  109070 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/edit: (865.416µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49394]
I0515 22:49:30.372020  109070 healthz.go:184] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0515 22:49:30.372232  109070 wrap.go:47] GET /healthz: (1.027868ms) 500
goroutine 36694 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc012ff2700, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc012ff2700, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc012174c60, 0x1f4)
net/http.Error(0x7f1da4b9d910, 0xc016300328, 0xc01230a500, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f1da4b9d910, 0xc016300328, 0xc009675300)
net/http.HandlerFunc.ServeHTTP(0xc012f0cec0, 0x7f1da4b9d910, 0xc016300328, 0xc009675300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0110abe40, 0x7f1da4b9d910, 0xc016300328, 0xc009675300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc017d9f340, 0x7f1da4b9d910, 0xc016300328, 0xc009675300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x438a649, 0xe, 0xc017c3ef30, 0xc017d9f340, 0x7f1da4b9d910, 0xc016300328, 0xc009675300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f1da4b9d910, 0xc016300328, 0xc009675300)
net/http.HandlerFunc.ServeHTTP(0xc017db8180, 0x7f1da4b9d910, 0xc016300328, 0xc009675300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f1da4b9d910, 0xc016300328, 0xc009675300)
net/http.HandlerFunc.ServeHTTP(0xc014951b30, 0x7f1da4b9d910, 0xc016300328, 0xc009675300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f1da4b9d910, 0xc016300328, 0xc009675300)
net/http.HandlerFunc.ServeHTTP(0xc017db81c0, 0x7f1da4b9d910, 0xc016300328, 0xc009675300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f1da4b9d910, 0xc016300328, 0xc009674f00)
net/http.HandlerFunc.ServeHTTP(0xc017d878b0, 0x7f1da4b9d910, 0xc016300328, 0xc009674f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00bc4daa0, 0xc017da7e20, 0x73aeec0, 0xc016300328, 0xc009674f00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49568]
I0515 22:49:30.373764  109070 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-view: (1.428772ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49394]
I0515 22:49:30.375962  109070 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/view: (1.864215ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49394]
I0515 22:49:30.377396  109070 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:discovery: (1.036547ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49394]
I0515 22:49:30.378589  109070 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/cluster-admin: (833.717µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49394]
I0515 22:49:30.380912  109070 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.929168ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49394]
I0515 22:49:30.381120  109070 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/cluster-admin
I0515 22:49:30.382201  109070 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:discovery: (876.554µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49394]
I0515 22:49:30.384641  109070 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.87167ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49394]
I0515 22:49:30.384852  109070 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:discovery
I0515 22:49:30.385902  109070 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:basic-user: (885.164µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49394]
I0515 22:49:30.387785  109070 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.465243ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49394]
I0515 22:49:30.388299  109070 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:basic-user
I0515 22:49:30.389665  109070 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:public-info-viewer: (962.241µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49394]
I0515 22:49:30.392567  109070 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.266627ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49394]
I0515 22:49:30.393045  109070 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:public-info-viewer
I0515 22:49:30.405750  109070 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/admin: (11.095744ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49394]
I0515 22:49:30.409737  109070 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (3.125305ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49394]
I0515 22:49:30.409982  109070 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/admin
I0515 22:49:30.411841  109070 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/edit: (1.352676ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49394]
I0515 22:49:30.458379  109070 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (46.097151ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49394]
I0515 22:49:30.460619  109070 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/edit
I0515 22:49:30.461329  109070 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0515 22:49:30.461388  109070 healthz.go:184] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0515 22:49:30.461708  109070 wrap.go:47] GET /healthz: (1.697319ms) 500
goroutine 36717 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0040417a0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0040417a0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc011c906c0, 0x1f4)
net/http.Error(0x7f1da4b9d910, 0xc0111e78f8, 0xc00eb0c500, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f1da4b9d910, 0xc0111e78f8, 0xc00bc42800)
net/http.HandlerFunc.ServeHTTP(0xc012f0cec0, 0x7f1da4b9d910, 0xc0111e78f8, 0xc00bc42800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0110abe40, 0x7f1da4b9d910, 0xc0111e78f8, 0xc00bc42800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc017d9f340, 0x7f1da4b9d910, 0xc0111e78f8, 0xc00bc42800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x438a649, 0xe, 0xc017c3ef30, 0xc017d9f340, 0x7f1da4b9d910, 0xc0111e78f8, 0xc00bc42800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f1da4b9d910, 0xc0111e78f8, 0xc00bc42800)
net/http.HandlerFunc.ServeHTTP(0xc017db8180, 0x7f1da4b9d910, 0xc0111e78f8, 0xc00bc42800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f1da4b9d910, 0xc0111e78f8, 0xc00bc42800)
net/http.HandlerFunc.ServeHTTP(0xc014951b30, 0x7f1da4b9d910, 0xc0111e78f8, 0xc00bc42800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f1da4b9d910, 0xc0111e78f8, 0xc00bc42800)
net/http.HandlerFunc.ServeHTTP(0xc017db81c0, 0x7f1da4b9d910, 0xc0111e78f8, 0xc00bc42800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f1da4b9d910, 0xc0111e78f8, 0xc00bc42700)
net/http.HandlerFunc.ServeHTTP(0xc017d878b0, 0x7f1da4b9d910, 0xc0111e78f8, 0xc00bc42700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00afb6fc0, 0xc017da7e20, 0x73aeec0, 0xc0111e78f8, 0xc00bc42700)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:49568]
I0515 22:49:30.465858  109070 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/view: (3.502833ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49394]
I0515 22:49:30.469135  109070 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.43045ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49394]
I0515 22:49:30.469393  109070 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/view
I0515 22:49:30.470829  109070 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-admin: (1.06992ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49394]
I0515 22:49:30.472927  109070 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0515 22:49:30.472962  109070 healthz.go:184] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0515 22:49:30.473208  109070 wrap.go:47] GET /healthz: (1.589779ms) 500
goroutine 36739 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc003ef9ab0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc003ef9ab0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc012064ca0, 0x1f4)
net/http.Error(0x7f1da4b9d910, 0xc007178770, 0xc005da4780, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f1da4b9d910, 0xc007178770, 0xc008d33300)
net/http.HandlerFunc.ServeHTTP(0xc012f0cec0, 0x7f1da4b9d910, 0xc007178770, 0xc008d33300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0110abe40, 0x7f1da4b9d910, 0xc007178770, 0xc008d33300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc017d9f340, 0x7f1da4b9d910, 0xc007178770, 0xc008d33300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x438a649, 0xe, 0xc017c3ef30, 0xc017d9f340, 0x7f1da4b9d910, 0xc007178770, 0xc008d33300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f1da4b9d910, 0xc007178770, 0xc008d33300)
net/http.HandlerFunc.ServeHTTP(0xc017db8180, 0x7f1da4b9d910, 0xc007178770, 0xc008d33300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f1da4b9d910, 0xc007178770, 0xc008d33300)
net/http.HandlerFunc.ServeHTTP(0xc014951b30, 0x7f1da4b9d910, 0xc007178770, 0xc008d33300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f1da4b9d910, 0xc007178770, 0xc008d33300)
net/http.HandlerFunc.ServeHTTP(0xc017db81c0, 0x7f1da4b9d910, 0xc007178770, 0xc008d33300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f1da4b9d910, 0xc007178770, 0xc008d33200)
net/http.HandlerFunc.ServeHTTP(0xc017d878b0, 0x7f1da4b9d910, 0xc007178770, 0xc008d33200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00b74ade0, 0xc017da7e20, 0x73aeec0, 0xc007178770, 0xc008d33200)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49568]
I0515 22:49:30.473633  109070 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.04794ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49394]
I0515 22:49:30.473883  109070 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-admin
I0515 22:49:30.475034  109070 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-edit: (980.779µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49394]
I0515 22:49:30.483261  109070 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (4.118018ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49394]
I0515 22:49:30.483993  109070 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-edit
I0515 22:49:30.487521  109070 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-view: (3.226926ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49394]
I0515 22:49:30.490259  109070 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.22833ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49394]
I0515 22:49:30.490763  109070 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-view
I0515 22:49:30.491999  109070 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:heapster: (985.149µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49394]
I0515 22:49:30.494040  109070 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.557478ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49394]
I0515 22:49:30.494594  109070 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:heapster
I0515 22:49:30.496052  109070 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node: (1.186889ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49394]
I0515 22:49:30.499162  109070 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.513259ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49394]
I0515 22:49:30.499982  109070 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:node
I0515 22:49:30.501222  109070 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-problem-detector: (1.050409ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49394]
I0515 22:49:30.503725  109070 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.986191ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49394]
I0515 22:49:30.504151  109070 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:node-problem-detector
I0515 22:49:30.505896  109070 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-proxier: (1.294732ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49394]
I0515 22:49:30.508644  109070 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.385639ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49394]
I0515 22:49:30.509423  109070 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:node-proxier
I0515 22:49:30.510919  109070 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kubelet-api-admin: (1.189803ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49394]
I0515 22:49:30.514402  109070 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.972191ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49394]
I0515 22:49:30.514880  109070 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:kubelet-api-admin
I0515 22:49:30.516096  109070 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-bootstrapper: (939.431µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49394]
I0515 22:49:30.521113  109070 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (4.362805ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49394]
I0515 22:49:30.521434  109070 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:node-bootstrapper
I0515 22:49:30.523776  109070 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:auth-delegator: (1.745905ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49394]
I0515 22:49:30.525802  109070 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.564919ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49394]
I0515 22:49:30.525992  109070 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:auth-delegator
I0515 22:49:30.531410  109070 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-aggregator: (5.188609ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49394]
I0515 22:49:30.534337  109070 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.3824ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49394]
I0515 22:49:30.534736  109070 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:kube-aggregator
I0515 22:49:30.536022  109070 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-controller-manager: (1.062312ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49394]
I0515 22:49:30.538124  109070 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.679876ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49394]
I0515 22:49:30.538376  109070 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:kube-controller-manager
I0515 22:49:30.539612  109070 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-scheduler: (973.391µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49394]
I0515 22:49:30.541778  109070 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.719519ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49394]
I0515 22:49:30.542023  109070 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:kube-scheduler
I0515 22:49:30.543136  109070 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-dns: (907.892µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49394]
I0515 22:49:30.545697  109070 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.083012ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49394]
I0515 22:49:30.545973  109070 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:kube-dns
I0515 22:49:30.547462  109070 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:persistent-volume-provisioner: (1.170109ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49394]
I0515 22:49:30.551852  109070 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (3.772074ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49394]
I0515 22:49:30.552105  109070 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:persistent-volume-provisioner
I0515 22:49:30.553688  109070 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:csi-external-attacher: (1.336683ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49394]
I0515 22:49:30.556064  109070 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.776088ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49394]
I0515 22:49:30.556374  109070 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:csi-external-attacher
I0515 22:49:30.557938  109070 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:certificates.k8s.io:certificatesigningrequests:nodeclient: (1.222131ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49394]
I0515 22:49:30.560091  109070 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.6989ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49394]
I0515 22:49:30.560362  109070 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:certificates.k8s.io:certificatesigningrequests:nodeclient
I0515 22:49:30.560649  109070 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0515 22:49:30.560711  109070 healthz.go:184] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0515 22:49:30.560892  109070 wrap.go:47] GET /healthz: (1.151303ms) 500
goroutine 36779 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc005639110, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc005639110, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0109a05c0, 0x1f4)
net/http.Error(0x7f1da4b9d910, 0xc0112911d8, 0xc005da4f00, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f1da4b9d910, 0xc0112911d8, 0xc00a069f00)
net/http.HandlerFunc.ServeHTTP(0xc012f0cec0, 0x7f1da4b9d910, 0xc0112911d8, 0xc00a069f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0110abe40, 0x7f1da4b9d910, 0xc0112911d8, 0xc00a069f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc017d9f340, 0x7f1da4b9d910, 0xc0112911d8, 0xc00a069f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x438a649, 0xe, 0xc017c3ef30, 0xc017d9f340, 0x7f1da4b9d910, 0xc0112911d8, 0xc00a069f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f1da4b9d910, 0xc0112911d8, 0xc00a069f00)
net/http.HandlerFunc.ServeHTTP(0xc017db8180, 0x7f1da4b9d910, 0xc0112911d8, 0xc00a069f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f1da4b9d910, 0xc0112911d8, 0xc00a069f00)
net/http.HandlerFunc.ServeHTTP(0xc014951b30, 0x7f1da4b9d910, 0xc0112911d8, 0xc00a069f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f1da4b9d910, 0xc0112911d8, 0xc00a069f00)
net/http.HandlerFunc.ServeHTTP(0xc017db81c0, 0x7f1da4b9d910, 0xc0112911d8, 0xc00a069f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f1da4b9d910, 0xc0112911d8, 0xc00a069e00)
net/http.HandlerFunc.ServeHTTP(0xc017d878b0, 0x7f1da4b9d910, 0xc0112911d8, 0xc00a069e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00c11bb60, 0xc017da7e20, 0x73aeec0, 0xc0112911d8, 0xc00a069e00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:49568]
I0515 22:49:30.562313  109070 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:certificates.k8s.io:certificatesigningrequests:selfnodeclient: (1.685257ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49394]
I0515 22:49:30.564230  109070 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.464231ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49394]
I0515 22:49:30.564920  109070 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:certificates.k8s.io:certificatesigningrequests:selfnodeclient
I0515 22:49:30.566469  109070 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:volume-scheduler: (1.306679ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49394]
I0515 22:49:30.568979  109070 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.842003ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49394]
I0515 22:49:30.569344  109070 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:volume-scheduler
I0515 22:49:30.570672  109070 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:csi-external-provisioner: (1.069662ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49394]
I0515 22:49:30.572906  109070 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0515 22:49:30.572979  109070 healthz.go:184] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0515 22:49:30.573190  109070 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.832685ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49394]
I0515 22:49:30.573384  109070 wrap.go:47] GET /healthz: (1.484973ms) 500
goroutine 36792 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0055f11f0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0055f11f0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc01093d660, 0x1f4)
net/http.Error(0x7f1da4b9d910, 0xc0105220a0, 0xc00eb0ca00, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f1da4b9d910, 0xc0105220a0, 0xc00b3e3900)
net/http.HandlerFunc.ServeHTTP(0xc012f0cec0, 0x7f1da4b9d910, 0xc0105220a0, 0xc00b3e3900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0110abe40, 0x7f1da4b9d910, 0xc0105220a0, 0xc00b3e3900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc017d9f340, 0x7f1da4b9d910, 0xc0105220a0, 0xc00b3e3900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x438a649, 0xe, 0xc017c3ef30, 0xc017d9f340, 0x7f1da4b9d910, 0xc0105220a0, 0xc00b3e3900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f1da4b9d910, 0xc0105220a0, 0xc00b3e3900)
net/http.HandlerFunc.ServeHTTP(0xc017db8180, 0x7f1da4b9d910, 0xc0105220a0, 0xc00b3e3900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f1da4b9d910, 0xc0105220a0, 0xc00b3e3900)
net/http.HandlerFunc.ServeHTTP(0xc014951b30, 0x7f1da4b9d910, 0xc0105220a0, 0xc00b3e3900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f1da4b9d910, 0xc0105220a0, 0xc00b3e3900)
net/http.HandlerFunc.ServeHTTP(0xc017db81c0, 0x7f1da4b9d910, 0xc0105220a0, 0xc00b3e3900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f1da4b9d910, 0xc0105220a0, 0xc00b3e3800)
net/http.HandlerFunc.ServeHTTP(0xc017d878b0, 0x7f1da4b9d910, 0xc0105220a0, 0xc00b3e3800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00bbb83c0, 0xc017da7e20, 0x73aeec0, 0xc0105220a0, 0xc00b3e3800)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49568]
I0515 22:49:30.573514  109070 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:csi-external-provisioner
I0515 22:49:30.574765  109070 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:attachdetach-controller: (1.037237ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49568]
I0515 22:49:30.577090  109070 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.906706ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49568]
I0515 22:49:30.577483  109070 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:attachdetach-controller
I0515 22:49:30.578687  109070 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:clusterrole-aggregation-controller: (966.688µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49568]
I0515 22:49:30.580627  109070 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.521032ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49568]
I0515 22:49:30.580857  109070 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:clusterrole-aggregation-controller
I0515 22:49:30.581983  109070 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:cronjob-controller: (825.15µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49568]
I0515 22:49:30.583960  109070 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.534937ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49568]
I0515 22:49:30.584287  109070 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:cronjob-controller
I0515 22:49:30.585397  109070 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:daemon-set-controller: (898.962µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49568]
I0515 22:49:30.588176  109070 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.228076ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49568]
I0515 22:49:30.588478  109070 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:daemon-set-controller
I0515 22:49:30.590062  109070 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:deployment-controller: (1.267262ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49568]
I0515 22:49:30.592947  109070 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.504579ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49568]
I0515 22:49:30.593205  109070 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:deployment-controller
I0515 22:49:30.594908  109070 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:disruption-controller: (1.40962ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49568]
I0515 22:49:30.597236  109070 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.775386ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49568]
I0515 22:49:30.597467  109070 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:disruption-controller
I0515 22:49:30.598672  109070 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:endpoint-controller: (959.281µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49568]
I0515 22:49:30.600970  109070 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.698252ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49568]
I0515 22:49:30.601540  109070 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:endpoint-controller
I0515 22:49:30.602770  109070 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:expand-controller: (920.288µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49568]
I0515 22:49:30.604722  109070 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.476293ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49568]
I0515 22:49:30.605013  109070 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:expand-controller
I0515 22:49:30.606148  109070 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:generic-garbage-collector: (841.342µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49568]
I0515 22:49:30.608619  109070 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.886387ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49568]
I0515 22:49:30.608916  109070 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:generic-garbage-collector
I0515 22:49:30.611070  109070 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:horizontal-pod-autoscaler: (1.853265ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49568]
I0515 22:49:30.619411  109070 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (5.637736ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49568]
I0515 22:49:30.619752  109070 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:horizontal-pod-autoscaler
I0515 22:49:30.644346  109070 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:job-controller: (22.08976ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49568]
I0515 22:49:30.651790  109070 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (4.689633ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49568]
I0515 22:49:30.652437  109070 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:job-controller
I0515 22:49:30.654893  109070 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:namespace-controller: (2.038123ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49568]
I0515 22:49:30.658257  109070 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.684739ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49568]
I0515 22:49:30.659422  109070 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:namespace-controller
I0515 22:49:30.660970  109070 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0515 22:49:30.661014  109070 healthz.go:184] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0515 22:49:30.661204  109070 wrap.go:47] GET /healthz: (1.357009ms) 500
goroutine 36833 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc005862930, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc005862930, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0106565a0, 0x1f4)
net/http.Error(0x7f1da4b9d910, 0xc017440b50, 0xc0100aaf00, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f1da4b9d910, 0xc017440b50, 0xc0063aff00)
net/http.HandlerFunc.ServeHTTP(0xc012f0cec0, 0x7f1da4b9d910, 0xc017440b50, 0xc0063aff00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0110abe40, 0x7f1da4b9d910, 0xc017440b50, 0xc0063aff00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc017d9f340, 0x7f1da4b9d910, 0xc017440b50, 0xc0063aff00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x438a649, 0xe, 0xc017c3ef30, 0xc017d9f340, 0x7f1da4b9d910, 0xc017440b50, 0xc0063aff00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f1da4b9d910, 0xc017440b50, 0xc0063aff00)
net/http.HandlerFunc.ServeHTTP(0xc017db8180, 0x7f1da4b9d910, 0xc017440b50, 0xc0063aff00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f1da4b9d910, 0xc017440b50, 0xc0063aff00)
net/http.HandlerFunc.ServeHTTP(0xc014951b30, 0x7f1da4b9d910, 0xc017440b50, 0xc0063aff00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f1da4b9d910, 0xc017440b50, 0xc0063aff00)
net/http.HandlerFunc.ServeHTTP(0xc017db81c0, 0x7f1da4b9d910, 0xc017440b50, 0xc0063aff00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f1da4b9d910, 0xc017440b50, 0xc0063afe00)
net/http.HandlerFunc.ServeHTTP(0xc017d878b0, 0x7f1da4b9d910, 0xc017440b50, 0xc0063afe00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00b3a13e0, 0xc017da7e20, 0x73aeec0, 0xc017440b50, 0xc0063afe00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:49568]
I0515 22:49:30.661795  109070 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:node-controller: (1.144429ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49394]
I0515 22:49:30.664161  109070 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.900543ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49394]
I0515 22:49:30.664687  109070 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:node-controller
I0515 22:49:30.666342  109070 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:persistent-volume-binder: (1.33007ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49394]
I0515 22:49:30.668893  109070 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.906941ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49394]
I0515 22:49:30.669136  109070 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:persistent-volume-binder
I0515 22:49:30.670377  109070 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pod-garbage-collector: (995.843µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49394]
I0515 22:49:30.672477  109070 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0515 22:49:30.672589  109070 healthz.go:184] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0515 22:49:30.672841  109070 wrap.go:47] GET /healthz: (1.597082ms) 500
goroutine 36930 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc005449730, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc005449730, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc010f814e0, 0x1f4)
net/http.Error(0x7f1da4b9d910, 0xc007178c30, 0xc0135043c0, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f1da4b9d910, 0xc007178c30, 0xc004698300)
net/http.HandlerFunc.ServeHTTP(0xc012f0cec0, 0x7f1da4b9d910, 0xc007178c30, 0xc004698300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0110abe40, 0x7f1da4b9d910, 0xc007178c30, 0xc004698300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc017d9f340, 0x7f1da4b9d910, 0xc007178c30, 0xc004698300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x438a649, 0xe, 0xc017c3ef30, 0xc017d9f340, 0x7f1da4b9d910, 0xc007178c30, 0xc004698300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f1da4b9d910, 0xc007178c30, 0xc004698300)
net/http.HandlerFunc.ServeHTTP(0xc017db8180, 0x7f1da4b9d910, 0xc007178c30, 0xc004698300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f1da4b9d910, 0xc007178c30, 0xc004698300)
net/http.HandlerFunc.ServeHTTP(0xc014951b30, 0x7f1da4b9d910, 0xc007178c30, 0xc004698300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f1da4b9d910, 0xc007178c30, 0xc004698300)
net/http.HandlerFunc.ServeHTTP(0xc017db81c0, 0x7f1da4b9d910, 0xc007178c30, 0xc004698300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f1da4b9d910, 0xc007178c30, 0xc004698200)
net/http.HandlerFunc.ServeHTTP(0xc017d878b0, 0x7f1da4b9d910, 0xc007178c30, 0xc004698200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00c567ec0, 0xc017da7e20, 0x73aeec0, 0xc007178c30, 0xc004698200)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49568]
I0515 22:49:30.673055  109070 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.06997ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49394]
I0515 22:49:30.673577  109070 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:pod-garbage-collector
I0515 22:49:30.674750  109070 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:replicaset-controller: (900.906µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49394]
I0515 22:49:30.677117  109070 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.850224ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49394]
I0515 22:49:30.677396  109070 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:replicaset-controller
I0515 22:49:30.678654  109070 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:replication-controller: (999.686µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49394]
I0515 22:49:30.680626  109070 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.532079ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49394]
I0515 22:49:30.680892  109070 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:replication-controller
I0515 22:49:30.681999  109070 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:resourcequota-controller: (864.677µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49394]
I0515 22:49:30.683944  109070 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.544659ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49394]
I0515 22:49:30.684300  109070 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:resourcequota-controller
I0515 22:49:30.685615  109070 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:route-controller: (978.069µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49394]
I0515 22:49:30.687889  109070 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.836576ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49394]
I0515 22:49:30.688137  109070 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:route-controller
I0515 22:49:30.689379  109070 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:service-account-controller: (1.03118ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49394]
I0515 22:49:30.693200  109070 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.719525ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49394]
I0515 22:49:30.693541  109070 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:service-account-controller
I0515 22:49:30.695108  109070 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:service-controller: (1.146317ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49394]
I0515 22:49:30.697365  109070 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.701946ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49394]
I0515 22:49:30.697646  109070 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:service-controller
I0515 22:49:30.699112  109070 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:statefulset-controller: (1.020765ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49394]
I0515 22:49:30.701402  109070 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.625871ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49394]
I0515 22:49:30.701736  109070 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:statefulset-controller
I0515 22:49:30.703341  109070 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:ttl-controller: (1.24092ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49394]
I0515 22:49:30.705406  109070 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.592269ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49394]
I0515 22:49:30.705865  109070 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:ttl-controller
I0515 22:49:30.707186  109070 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:certificate-controller: (1.040928ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49394]
I0515 22:49:30.709302  109070 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.629159ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49394]
I0515 22:49:30.709598  109070 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:certificate-controller
I0515 22:49:30.710838  109070 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pvc-protection-controller: (876.766µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49394]
I0515 22:49:30.713339  109070 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.044427ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49394]
I0515 22:49:30.714312  109070 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:pvc-protection-controller
I0515 22:49:30.716212  109070 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pv-protection-controller: (1.494894ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49394]
I0515 22:49:30.719852  109070 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.935519ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49394]
I0515 22:49:30.720166  109070 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:pv-protection-controller
I0515 22:49:30.721602  109070 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/cluster-admin: (1.17029ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49394]
I0515 22:49:30.724223  109070 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.996544ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49394]
I0515 22:49:30.724490  109070 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/cluster-admin
I0515 22:49:30.725765  109070 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:discovery: (1.034966ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49394]
I0515 22:49:30.741680  109070 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.868177ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49394]
I0515 22:49:30.742070  109070 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:discovery
I0515 22:49:30.763181  109070 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0515 22:49:30.763227  109070 healthz.go:184] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0515 22:49:30.763801  109070 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:basic-user: (3.692585ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49394]
I0515 22:49:30.763861  109070 wrap.go:47] GET /healthz: (3.991249ms) 500
goroutine 36956 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc005a4c310, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc005a4c310, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc01045f740, 0x1f4)
net/http.Error(0x7f1da4b9d910, 0xc010522900, 0xc01230ab40, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f1da4b9d910, 0xc010522900, 0xc0040db000)
net/http.HandlerFunc.ServeHTTP(0xc012f0cec0, 0x7f1da4b9d910, 0xc010522900, 0xc0040db000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0110abe40, 0x7f1da4b9d910, 0xc010522900, 0xc0040db000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc017d9f340, 0x7f1da4b9d910, 0xc010522900, 0xc0040db000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x438a649, 0xe, 0xc017c3ef30, 0xc017d9f340, 0x7f1da4b9d910, 0xc010522900, 0xc0040db000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f1da4b9d910, 0xc010522900, 0xc0040db000)
net/http.HandlerFunc.ServeHTTP(0xc017db8180, 0x7f1da4b9d910, 0xc010522900, 0xc0040db000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f1da4b9d910, 0xc010522900, 0xc0040db000)
net/http.HandlerFunc.ServeHTTP(0xc014951b30, 0x7f1da4b9d910, 0xc010522900, 0xc0040db000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f1da4b9d910, 0xc010522900, 0xc0040db000)
net/http.HandlerFunc.ServeHTTP(0xc017db81c0, 0x7f1da4b9d910, 0xc010522900, 0xc0040db000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f1da4b9d910, 0xc010522900, 0xc0040daf00)
net/http.HandlerFunc.ServeHTTP(0xc017d878b0, 0x7f1da4b9d910, 0xc010522900, 0xc0040daf00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc006e1ce40, 0xc017da7e20, 0x73aeec0, 0xc010522900, 0xc0040daf00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:49568]
I0515 22:49:30.772922  109070 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0515 22:49:30.772965  109070 healthz.go:184] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0515 22:49:30.773436  109070 wrap.go:47] GET /healthz: (2.047803ms) 500
goroutine 36958 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc005a4c3f0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc005a4c3f0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc01045f980, 0x1f4)
net/http.Error(0x7f1da4b9d910, 0xc010522930, 0xc00b1e6640, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f1da4b9d910, 0xc010522930, 0xc0040db700)
net/http.HandlerFunc.ServeHTTP(0xc012f0cec0, 0x7f1da4b9d910, 0xc010522930, 0xc0040db700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0110abe40, 0x7f1da4b9d910, 0xc010522930, 0xc0040db700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc017d9f340, 0x7f1da4b9d910, 0xc010522930, 0xc0040db700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x438a649, 0xe, 0xc017c3ef30, 0xc017d9f340, 0x7f1da4b9d910, 0xc010522930, 0xc0040db700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f1da4b9d910, 0xc010522930, 0xc0040db700)
net/http.HandlerFunc.ServeHTTP(0xc017db8180, 0x7f1da4b9d910, 0xc010522930, 0xc0040db700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f1da4b9d910, 0xc010522930, 0xc0040db700)
net/http.HandlerFunc.ServeHTTP(0xc014951b30, 0x7f1da4b9d910, 0xc010522930, 0xc0040db700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f1da4b9d910, 0xc010522930, 0xc0040db700)
net/http.HandlerFunc.ServeHTTP(0xc017db81c0, 0x7f1da4b9d910, 0xc010522930, 0xc0040db700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f1da4b9d910, 0xc010522930, 0xc0040db600)
net/http.HandlerFunc.ServeHTTP(0xc017d878b0, 0x7f1da4b9d910, 0xc010522930, 0xc0040db600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc006e1d320, 0xc017da7e20, 0x73aeec0, 0xc010522930, 0xc0040db600)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49568]
I0515 22:49:30.781105  109070 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.463374ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49568]
I0515 22:49:30.781372  109070 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:basic-user
I0515 22:49:30.897075  109070 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0515 22:49:30.897112  109070 healthz.go:184] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0515 22:49:30.897276  109070 wrap.go:47] GET /healthz: (2.477016ms) 500
goroutine 36960 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc005a4c5b0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc005a4c5b0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc01045fdc0, 0x1f4)
net/http.Error(0x7f1da4b9d910, 0xc010522970, 0xc0135048c0, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f1da4b9d910, 0xc010522970, 0xc0014d2b00)
net/http.HandlerFunc.ServeHTTP(0xc012f0cec0, 0x7f1da4b9d910, 0xc010522970, 0xc0014d2b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0110abe40, 0x7f1da4b9d910, 0xc010522970, 0xc0014d2b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc017d9f340, 0x7f1da4b9d910, 0xc010522970, 0xc0014d2b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x438a649, 0xe, 0xc017c3ef30, 0xc017d9f340, 0x7f1da4b9d910, 0xc010522970, 0xc0014d2b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f1da4b9d910, 0xc010522970, 0xc0014d2b00)
net/http.HandlerFunc.ServeHTTP(0xc017db8180, 0x7f1da4b9d910, 0xc010522970, 0xc0014d2b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f1da4b9d910, 0xc010522970, 0xc0014d2b00)
net/http.HandlerFunc.ServeHTTP(0xc014951b30, 0x7f1da4b9d910, 0xc010522970, 0xc0014d2b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f1da4b9d910, 0xc010522970, 0xc0014d2b00)
net/http.HandlerFunc.ServeHTTP(0xc017db81c0, 0x7f1da4b9d910, 0xc010522970, 0xc0014d2b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f1da4b9d910, 0xc010522970, 0xc0014d2900)
net/http.HandlerFunc.ServeHTTP(0xc017d878b0, 0x7f1da4b9d910, 0xc010522970, 0xc0014d2900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc006e1d740, 0xc017da7e20, 0x73aeec0, 0xc010522970, 0xc0014d2900)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:49394]
I0515 22:49:30.897606  109070 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:public-info-viewer: (2.33082ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49568]
I0515 22:49:30.900399  109070 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0515 22:49:30.900424  109070 healthz.go:184] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0515 22:49:30.900628  109070 wrap.go:47] GET /healthz: (1.955386ms) 500
goroutine 36889 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0057f7420, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0057f7420, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0105949a0, 0x1f4)
net/http.Error(0x7f1da4b9d910, 0xc011d54b70, 0xc005da5540, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f1da4b9d910, 0xc011d54b70, 0xc006691a00)
net/http.HandlerFunc.ServeHTTP(0xc012f0cec0, 0x7f1da4b9d910, 0xc011d54b70, 0xc006691a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0110abe40, 0x7f1da4b9d910, 0xc011d54b70, 0xc006691a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc017d9f340, 0x7f1da4b9d910, 0xc011d54b70, 0xc006691a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x438a649, 0xe, 0xc017c3ef30, 0xc017d9f340, 0x7f1da4b9d910, 0xc011d54b70, 0xc006691a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f1da4b9d910, 0xc011d54b70, 0xc006691a00)
net/http.HandlerFunc.ServeHTTP(0xc017db8180, 0x7f1da4b9d910, 0xc011d54b70, 0xc006691a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f1da4b9d910, 0xc011d54b70, 0xc006691a00)
net/http.HandlerFunc.ServeHTTP(0xc014951b30, 0x7f1da4b9d910, 0xc011d54b70, 0xc006691a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f1da4b9d910, 0xc011d54b70, 0xc006691a00)
net/http.HandlerFunc.ServeHTTP(0xc017db81c0, 0x7f1da4b9d910, 0xc011d54b70, 0xc006691a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f1da4b9d910, 0xc011d54b70, 0xc006691900)
net/http.HandlerFunc.ServeHTTP(0xc017d878b0, 0x7f1da4b9d910, 0xc011d54b70, 0xc006691900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00aa04c00, 0xc017da7e20, 0x73aeec0, 0xc011d54b70, 0xc006691900)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49568]
I0515 22:49:30.902517  109070 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.943538ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49394]
I0515 22:49:30.902801  109070 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:public-info-viewer
I0515 22:49:30.904220  109070 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:node-proxier: (1.146976ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49568]
I0515 22:49:30.906857  109070 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.87571ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49568]
I0515 22:49:30.907271  109070 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:node-proxier
I0515 22:49:30.908623  109070 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-controller-manager: (1.086743ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49582]
I0515 22:49:30.911137  109070 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.083293ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49582]
I0515 22:49:30.911430  109070 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-controller-manager
I0515 22:49:30.920093  109070 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-dns: (1.391439ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49582]
I0515 22:49:30.941252  109070 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.497328ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49582]
I0515 22:49:30.942055  109070 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-dns
I0515 22:49:30.960985  109070 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-scheduler: (1.563143ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49582]
I0515 22:49:30.961990  109070 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0515 22:49:30.962024  109070 healthz.go:184] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0515 22:49:30.962196  109070 wrap.go:47] GET /healthz: (1.398048ms) 500
goroutine 36999 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc005a4cee0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc005a4cee0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0102af160, 0x1f4)
net/http.Error(0x7f1da4b9d910, 0xc010522ab0, 0xc013505180, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f1da4b9d910, 0xc010522ab0, 0xc0014d3e00)
net/http.HandlerFunc.ServeHTTP(0xc012f0cec0, 0x7f1da4b9d910, 0xc010522ab0, 0xc0014d3e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0110abe40, 0x7f1da4b9d910, 0xc010522ab0, 0xc0014d3e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc017d9f340, 0x7f1da4b9d910, 0xc010522ab0, 0xc0014d3e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x438a649, 0xe, 0xc017c3ef30, 0xc017d9f340, 0x7f1da4b9d910, 0xc010522ab0, 0xc0014d3e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f1da4b9d910, 0xc010522ab0, 0xc0014d3e00)
net/http.HandlerFunc.ServeHTTP(0xc017db8180, 0x7f1da4b9d910, 0xc010522ab0, 0xc0014d3e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f1da4b9d910, 0xc010522ab0, 0xc0014d3e00)
net/http.HandlerFunc.ServeHTTP(0xc014951b30, 0x7f1da4b9d910, 0xc010522ab0, 0xc0014d3e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f1da4b9d910, 0xc010522ab0, 0xc0014d3e00)
net/http.HandlerFunc.ServeHTTP(0xc017db81c0, 0x7f1da4b9d910, 0xc010522ab0, 0xc0014d3e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f1da4b9d910, 0xc010522ab0, 0xc0014d3d00)
net/http.HandlerFunc.ServeHTTP(0xc017d878b0, 0x7f1da4b9d910, 0xc010522ab0, 0xc0014d3d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc0069f9ce0, 0xc017da7e20, 0x73aeec0, 0xc010522ab0, 0xc0014d3d00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:49584]
I0515 22:49:30.973222  109070 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0515 22:49:30.973263  109070 healthz.go:184] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0515 22:49:30.973433  109070 wrap.go:47] GET /healthz: (1.549365ms) 500
goroutine 37019 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0059ff420, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0059ff420, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc010268a20, 0x1f4)
net/http.Error(0x7f1da4b9d910, 0xc0174410e8, 0xc016d24500, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f1da4b9d910, 0xc0174410e8, 0xc000b99500)
net/http.HandlerFunc.ServeHTTP(0xc012f0cec0, 0x7f1da4b9d910, 0xc0174410e8, 0xc000b99500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0110abe40, 0x7f1da4b9d910, 0xc0174410e8, 0xc000b99500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc017d9f340, 0x7f1da4b9d910, 0xc0174410e8, 0xc000b99500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x438a649, 0xe, 0xc017c3ef30, 0xc017d9f340, 0x7f1da4b9d910, 0xc0174410e8, 0xc000b99500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f1da4b9d910, 0xc0174410e8, 0xc000b99500)
net/http.HandlerFunc.ServeHTTP(0xc017db8180, 0x7f1da4b9d910, 0xc0174410e8, 0xc000b99500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f1da4b9d910, 0xc0174410e8, 0xc000b99500)
net/http.HandlerFunc.ServeHTTP(0xc014951b30, 0x7f1da4b9d910, 0xc0174410e8, 0xc000b99500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f1da4b9d910, 0xc0174410e8, 0xc000b99500)
net/http.HandlerFunc.ServeHTTP(0xc017db81c0, 0x7f1da4b9d910, 0xc0174410e8, 0xc000b99500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f1da4b9d910, 0xc0174410e8, 0xc000b99400)
net/http.HandlerFunc.ServeHTTP(0xc017d878b0, 0x7f1da4b9d910, 0xc0174410e8, 0xc000b99400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc006888c60, 0xc017da7e20, 0x73aeec0, 0xc0174410e8, 0xc000b99400)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49584]
I0515 22:49:30.983412  109070 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.560172ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49584]
I0515 22:49:30.983724  109070 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-scheduler
I0515 22:49:31.000322  109070 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:volume-scheduler: (1.416399ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49584]
I0515 22:49:31.021213  109070 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.392898ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49584]
I0515 22:49:31.021479  109070 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:volume-scheduler
I0515 22:49:31.040324  109070 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:node: (1.493542ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49584]
I0515 22:49:31.061712  109070 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.945626ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49584]
I0515 22:49:31.062078  109070 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:node
I0515 22:49:31.063031  109070 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0515 22:49:31.063056  109070 healthz.go:184] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0515 22:49:31.063215  109070 wrap.go:47] GET /healthz: (2.209838ms) 500
goroutine 37027 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc005d38f50, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc005d38f50, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0101f6800, 0x1f4)
net/http.Error(0x7f1da4b9d910, 0xc017974b30, 0xc0100ab540, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f1da4b9d910, 0xc017974b30, 0xc000b3e700)
net/http.HandlerFunc.ServeHTTP(0xc012f0cec0, 0x7f1da4b9d910, 0xc017974b30, 0xc000b3e700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0110abe40, 0x7f1da4b9d910, 0xc017974b30, 0xc000b3e700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc017d9f340, 0x7f1da4b9d910, 0xc017974b30, 0xc000b3e700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x438a649, 0xe, 0xc017c3ef30, 0xc017d9f340, 0x7f1da4b9d910, 0xc017974b30, 0xc000b3e700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f1da4b9d910, 0xc017974b30, 0xc000b3e700)
net/http.HandlerFunc.ServeHTTP(0xc017db8180, 0x7f1da4b9d910, 0xc017974b30, 0xc000b3e700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f1da4b9d910, 0xc017974b30, 0xc000b3e700)
net/http.HandlerFunc.ServeHTTP(0xc014951b30, 0x7f1da4b9d910, 0xc017974b30, 0xc000b3e700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f1da4b9d910, 0xc017974b30, 0xc000b3e700)
net/http.HandlerFunc.ServeHTTP(0xc017db81c0, 0x7f1da4b9d910, 0xc017974b30, 0xc000b3e700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f1da4b9d910, 0xc017974b30, 0xc00286bf00)
net/http.HandlerFunc.ServeHTTP(0xc017d878b0, 0x7f1da4b9d910, 0xc017974b30, 0xc00286bf00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00644df20, 0xc017da7e20, 0x73aeec0, 0xc017974b30, 0xc00286bf00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:49582]
I0515 22:49:31.072716  109070 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0515 22:49:31.072756  109070 healthz.go:184] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0515 22:49:31.072931  109070 wrap.go:47] GET /healthz: (1.460565ms) 500
goroutine 37029 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc005d392d0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc005d392d0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0101f6e40, 0x1f4)
net/http.Error(0x7f1da4b9d910, 0xc017974ba8, 0xc00eb0d040, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f1da4b9d910, 0xc017974ba8, 0xc000b3f000)
net/http.HandlerFunc.ServeHTTP(0xc012f0cec0, 0x7f1da4b9d910, 0xc017974ba8, 0xc000b3f000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0110abe40, 0x7f1da4b9d910, 0xc017974ba8, 0xc000b3f000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc017d9f340, 0x7f1da4b9d910, 0xc017974ba8, 0xc000b3f000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x438a649, 0xe, 0xc017c3ef30, 0xc017d9f340, 0x7f1da4b9d910, 0xc017974ba8, 0xc000b3f000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f1da4b9d910, 0xc017974ba8, 0xc000b3f000)
net/http.HandlerFunc.ServeHTTP(0xc017db8180, 0x7f1da4b9d910, 0xc017974ba8, 0xc000b3f000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f1da4b9d910, 0xc017974ba8, 0xc000b3f000)
net/http.HandlerFunc.ServeHTTP(0xc014951b30, 0x7f1da4b9d910, 0xc017974ba8, 0xc000b3f000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f1da4b9d910, 0xc017974ba8, 0xc000b3f000)
net/http.HandlerFunc.ServeHTTP(0xc017db81c0, 0x7f1da4b9d910, 0xc017974ba8, 0xc000b3f000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f1da4b9d910, 0xc017974ba8, 0xc000b3ef00)
net/http.HandlerFunc.ServeHTTP(0xc017d878b0, 0x7f1da4b9d910, 0xc017974ba8, 0xc000b3ef00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00637a060, 0xc017da7e20, 0x73aeec0, 0xc017974ba8, 0xc000b3ef00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49582]
I0515 22:49:31.080408  109070 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:attachdetach-controller: (1.647602ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49582]
I0515 22:49:31.101303  109070 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.547866ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49582]
I0515 22:49:31.101610  109070 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:attachdetach-controller
I0515 22:49:31.123228  109070 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:clusterrole-aggregation-controller: (1.565859ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49582]
I0515 22:49:31.141073  109070 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.379372ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49582]
I0515 22:49:31.141319  109070 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:clusterrole-aggregation-controller
I0515 22:49:31.160059  109070 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:cronjob-controller: (1.335592ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49582]
I0515 22:49:31.160565  109070 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0515 22:49:31.160595  109070 healthz.go:184] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0515 22:49:31.160791  109070 wrap.go:47] GET /healthz: (1.031459ms) 500
goroutine 37001 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc005a4d180, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc005a4d180, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0102af9c0, 0x1f4)
net/http.Error(0x7f1da4b9d910, 0xc010522b48, 0xc013505900, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f1da4b9d910, 0xc010522b48, 0xc002e0a000)
net/http.HandlerFunc.ServeHTTP(0xc012f0cec0, 0x7f1da4b9d910, 0xc010522b48, 0xc002e0a000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0110abe40, 0x7f1da4b9d910, 0xc010522b48, 0xc002e0a000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc017d9f340, 0x7f1da4b9d910, 0xc010522b48, 0xc002e0a000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x438a649, 0xe, 0xc017c3ef30, 0xc017d9f340, 0x7f1da4b9d910, 0xc010522b48, 0xc002e0a000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f1da4b9d910, 0xc010522b48, 0xc002e0a000)
net/http.HandlerFunc.ServeHTTP(0xc017db8180, 0x7f1da4b9d910, 0xc010522b48, 0xc002e0a000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f1da4b9d910, 0xc010522b48, 0xc002e0a000)
net/http.HandlerFunc.ServeHTTP(0xc014951b30, 0x7f1da4b9d910, 0xc010522b48, 0xc002e0a000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f1da4b9d910, 0xc010522b48, 0xc002e0a000)
net/http.HandlerFunc.ServeHTTP(0xc017db81c0, 0x7f1da4b9d910, 0xc010522b48, 0xc002e0a000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f1da4b9d910, 0xc010522b48, 0xc0004e5900)
net/http.HandlerFunc.ServeHTTP(0xc017d878b0, 0x7f1da4b9d910, 0xc010522b48, 0xc0004e5900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc0068d7380, 0xc017da7e20, 0x73aeec0, 0xc010522b48, 0xc0004e5900)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:49584]
I0515 22:49:31.172668  109070 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0515 22:49:31.172713  109070 healthz.go:184] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0515 22:49:31.172954  109070 wrap.go:47] GET /healthz: (1.476173ms) 500
goroutine 36990 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc005a6aee0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc005a6aee0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc01015e660, 0x1f4)
net/http.Error(0x7f1da4b9d910, 0xc016300d80, 0xc00eb0d680, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f1da4b9d910, 0xc016300d80, 0xc002dbaa00)
net/http.HandlerFunc.ServeHTTP(0xc012f0cec0, 0x7f1da4b9d910, 0xc016300d80, 0xc002dbaa00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0110abe40, 0x7f1da4b9d910, 0xc016300d80, 0xc002dbaa00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc017d9f340, 0x7f1da4b9d910, 0xc016300d80, 0xc002dbaa00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x438a649, 0xe, 0xc017c3ef30, 0xc017d9f340, 0x7f1da4b9d910, 0xc016300d80, 0xc002dbaa00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f1da4b9d910, 0xc016300d80, 0xc002dbaa00)
net/http.HandlerFunc.ServeHTTP(0xc017db8180, 0x7f1da4b9d910, 0xc016300d80, 0xc002dbaa00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f1da4b9d910, 0xc016300d80, 0xc002dbaa00)
net/http.HandlerFunc.ServeHTTP(0xc014951b30, 0x7f1da4b9d910, 0xc016300d80, 0xc002dbaa00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f1da4b9d910, 0xc016300d80, 0xc002dbaa00)
net/http.HandlerFunc.ServeHTTP(0xc017db81c0, 0x7f1da4b9d910, 0xc016300d80, 0xc002dbaa00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f1da4b9d910, 0xc016300d80, 0xc002dba900)
net/http.HandlerFunc.ServeHTTP(0xc017d878b0, 0x7f1da4b9d910, 0xc016300d80, 0xc002dba900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc005301080, 0xc017da7e20, 0x73aeec0, 0xc016300d80, 0xc002dba900)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49584]
I0515 22:49:31.180792  109070 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.900231ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49584]
I0515 22:49:31.181050  109070 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:cronjob-controller
I0515 22:49:31.200193  109070 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:daemon-set-controller: (1.387468ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49584]
I0515 22:49:31.222206  109070 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.206894ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49584]
I0515 22:49:31.222578  109070 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:daemon-set-controller
I0515 22:49:31.240270  109070 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:deployment-controller: (1.424559ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49584]
I0515 22:49:31.260967  109070 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0515 22:49:31.261003  109070 healthz.go:184] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0515 22:49:31.261193  109070 wrap.go:47] GET /healthz: (1.509903ms) 500
goroutine 37069 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc005e96a10, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc005e96a10, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc01009c320, 0x1f4)
net/http.Error(0x7f1da4b9d910, 0xc017441420, 0xc01230b040, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f1da4b9d910, 0xc017441420, 0xc002ef6000)
net/http.HandlerFunc.ServeHTTP(0xc012f0cec0, 0x7f1da4b9d910, 0xc017441420, 0xc002ef6000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0110abe40, 0x7f1da4b9d910, 0xc017441420, 0xc002ef6000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc017d9f340, 0x7f1da4b9d910, 0xc017441420, 0xc002ef6000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x438a649, 0xe, 0xc017c3ef30, 0xc017d9f340, 0x7f1da4b9d910, 0xc017441420, 0xc002ef6000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f1da4b9d910, 0xc017441420, 0xc002ef6000)
net/http.HandlerFunc.ServeHTTP(0xc017db8180, 0x7f1da4b9d910, 0xc017441420, 0xc002ef6000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f1da4b9d910, 0xc017441420, 0xc002ef6000)
net/http.HandlerFunc.ServeHTTP(0xc014951b30, 0x7f1da4b9d910, 0xc017441420, 0xc002ef6000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f1da4b9d910, 0xc017441420, 0xc002ef6000)
net/http.HandlerFunc.ServeHTTP(0xc017db81c0, 0x7f1da4b9d910, 0xc017441420, 0xc002ef6000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f1da4b9d910, 0xc017441420, 0xc0014c5f00)
net/http.HandlerFunc.ServeHTTP(0xc017d878b0, 0x7f1da4b9d910, 0xc017441420, 0xc0014c5f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc005ac4000, 0xc017da7e20, 0x73aeec0, 0xc017441420, 0xc0014c5f00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:49582]
I0515 22:49:31.261862  109070 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.137323ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49584]
I0515 22:49:31.262289  109070 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:deployment-controller
I0515 22:49:31.272735  109070 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0515 22:49:31.272964  109070 healthz.go:184] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0515 22:49:31.273252  109070 wrap.go:47] GET /healthz: (1.87966ms) 500
goroutine 37050 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc005c9d960, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc005c9d960, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc010124d40, 0x1f4)
net/http.Error(0x7f1da4b9d910, 0xc016b0d370, 0xc016d24c80, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f1da4b9d910, 0xc016b0d370, 0xc0013cfa00)
net/http.HandlerFunc.ServeHTTP(0xc012f0cec0, 0x7f1da4b9d910, 0xc016b0d370, 0xc0013cfa00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0110abe40, 0x7f1da4b9d910, 0xc016b0d370, 0xc0013cfa00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc017d9f340, 0x7f1da4b9d910, 0xc016b0d370, 0xc0013cfa00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x438a649, 0xe, 0xc017c3ef30, 0xc017d9f340, 0x7f1da4b9d910, 0xc016b0d370, 0xc0013cfa00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f1da4b9d910, 0xc016b0d370, 0xc0013cfa00)
net/http.HandlerFunc.ServeHTTP(0xc017db8180, 0x7f1da4b9d910, 0xc016b0d370, 0xc0013cfa00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f1da4b9d910, 0xc016b0d370, 0xc0013cfa00)
net/http.HandlerFunc.ServeHTTP(0xc014951b30, 0x7f1da4b9d910, 0xc016b0d370, 0xc0013cfa00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f1da4b9d910, 0xc016b0d370, 0xc0013cfa00)
net/http.HandlerFunc.ServeHTTP(0xc017db81c0, 0x7f1da4b9d910, 0xc016b0d370, 0xc0013cfa00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f1da4b9d910, 0xc016b0d370, 0xc0013cf900)
net/http.HandlerFunc.ServeHTTP(0xc017d878b0, 0x7f1da4b9d910, 0xc016b0d370, 0xc0013cf900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc005cb1380, 0xc017da7e20, 0x73aeec0, 0xc016b0d370, 0xc0013cf900)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49584]
I0515 22:49:31.286688  109070 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:disruption-controller: (7.876805ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49584]
I0515 22:49:31.300977  109070 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.258472ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49584]
I0515 22:49:31.301205  109070 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:disruption-controller
I0515 22:49:31.322976  109070 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:endpoint-controller: (1.580099ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49584]
I0515 22:49:31.341562  109070 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.770429ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49584]
I0515 22:49:31.341959  109070 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:endpoint-controller
I0515 22:49:31.360052  109070 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:expand-controller: (1.364778ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49584]
I0515 22:49:31.360754  109070 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0515 22:49:31.360779  109070 healthz.go:184] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0515 22:49:31.360933  109070 wrap.go:47] GET /healthz: (1.119694ms) 500
goroutine 37098 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc005e97030, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc005e97030, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc01009cd00, 0x1f4)
net/http.Error(0x7f1da4b9d910, 0xc0174414f8, 0xc016d252c0, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f1da4b9d910, 0xc0174414f8, 0xc002ef6b00)
net/http.HandlerFunc.ServeHTTP(0xc012f0cec0, 0x7f1da4b9d910, 0xc0174414f8, 0xc002ef6b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0110abe40, 0x7f1da4b9d910, 0xc0174414f8, 0xc002ef6b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc017d9f340, 0x7f1da4b9d910, 0xc0174414f8, 0xc002ef6b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x438a649, 0xe, 0xc017c3ef30, 0xc017d9f340, 0x7f1da4b9d910, 0xc0174414f8, 0xc002ef6b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f1da4b9d910, 0xc0174414f8, 0xc002ef6b00)
net/http.HandlerFunc.ServeHTTP(0xc017db8180, 0x7f1da4b9d910, 0xc0174414f8, 0xc002ef6b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f1da4b9d910, 0xc0174414f8, 0xc002ef6b00)
net/http.HandlerFunc.ServeHTTP(0xc014951b30, 0x7f1da4b9d910, 0xc0174414f8, 0xc002ef6b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f1da4b9d910, 0xc0174414f8, 0xc002ef6b00)
net/http.HandlerFunc.ServeHTTP(0xc017db81c0, 0x7f1da4b9d910, 0xc0174414f8, 0xc002ef6b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f1da4b9d910, 0xc0174414f8, 0xc002ef6a00)
net/http.HandlerFunc.ServeHTTP(0xc017d878b0, 0x7f1da4b9d910, 0xc0174414f8, 0xc002ef6a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00564ac00, 0xc017da7e20, 0x73aeec0, 0xc0174414f8, 0xc002ef6a00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:49582]
I0515 22:49:31.372748  109070 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0515 22:49:31.372791  109070 healthz.go:184] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0515 22:49:31.372960  109070 wrap.go:47] GET /healthz: (1.518227ms) 500
goroutine 37139 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc005a4dab0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc005a4dab0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc01010b020, 0x1f4)
net/http.Error(0x7f1da4b9d910, 0xc010522cb8, 0xc00fc30000, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f1da4b9d910, 0xc010522cb8, 0xc002e0b200)
net/http.HandlerFunc.ServeHTTP(0xc012f0cec0, 0x7f1da4b9d910, 0xc010522cb8, 0xc002e0b200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0110abe40, 0x7f1da4b9d910, 0xc010522cb8, 0xc002e0b200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc017d9f340, 0x7f1da4b9d910, 0xc010522cb8, 0xc002e0b200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x438a649, 0xe, 0xc017c3ef30, 0xc017d9f340, 0x7f1da4b9d910, 0xc010522cb8, 0xc002e0b200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f1da4b9d910, 0xc010522cb8, 0xc002e0b200)
net/http.HandlerFunc.ServeHTTP(0xc017db8180, 0x7f1da4b9d910, 0xc010522cb8, 0xc002e0b200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f1da4b9d910, 0xc010522cb8, 0xc002e0b200)
net/http.HandlerFunc.ServeHTTP(0xc014951b30, 0x7f1da4b9d910, 0xc010522cb8, 0xc002e0b200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f1da4b9d910, 0xc010522cb8, 0xc002e0b200)
net/http.HandlerFunc.ServeHTTP(0xc017db81c0, 0x7f1da4b9d910, 0xc010522cb8, 0xc002e0b200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f1da4b9d910, 0xc010522cb8, 0xc002e0af00)
net/http.HandlerFunc.ServeHTTP(0xc017d878b0, 0x7f1da4b9d910, 0xc010522cb8, 0xc002e0af00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc0044ecc60, 0xc017da7e20, 0x73aeec0, 0xc010522cb8, 0xc002e0af00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49582]
I0515 22:49:31.380717  109070 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.086239ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49582]
I0515 22:49:31.380997  109070 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:expand-controller
I0515 22:49:31.400539  109070 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:generic-garbage-collector: (1.743128ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49582]
I0515 22:49:31.420814  109070 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.107562ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49582]
I0515 22:49:31.421269  109070 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:generic-garbage-collector
I0515 22:49:31.440752  109070 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:horizontal-pod-autoscaler: (1.295169ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49582]
I0515 22:49:31.462122  109070 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0515 22:49:31.462377  109070 healthz.go:184] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0515 22:49:31.462707  109070 wrap.go:47] GET /healthz: (2.364877ms) 500
goroutine 37100 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc005e973b0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc005e973b0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc01009d2e0, 0x1f4)
net/http.Error(0x7f1da4b9d910, 0xc017441548, 0xc016d257c0, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f1da4b9d910, 0xc017441548, 0xc002ef7000)
net/http.HandlerFunc.ServeHTTP(0xc012f0cec0, 0x7f1da4b9d910, 0xc017441548, 0xc002ef7000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0110abe40, 0x7f1da4b9d910, 0xc017441548, 0xc002ef7000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc017d9f340, 0x7f1da4b9d910, 0xc017441548, 0xc002ef7000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x438a649, 0xe, 0xc017c3ef30, 0xc017d9f340, 0x7f1da4b9d910, 0xc017441548, 0xc002ef7000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f1da4b9d910, 0xc017441548, 0xc002ef7000)
net/http.HandlerFunc.ServeHTTP(0xc017db8180, 0x7f1da4b9d910, 0xc017441548, 0xc002ef7000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f1da4b9d910, 0xc017441548, 0xc002ef7000)
net/http.HandlerFunc.ServeHTTP(0xc014951b30, 0x7f1da4b9d910, 0xc017441548, 0xc002ef7000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f1da4b9d910, 0xc017441548, 0xc002ef7000)
net/http.HandlerFunc.ServeHTTP(0xc017db81c0, 0x7f1da4b9d910, 0xc017441548, 0xc002ef7000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f1da4b9d910, 0xc017441548, 0xc002ef6f00)
net/http.HandlerFunc.ServeHTTP(0xc017d878b0, 0x7f1da4b9d910, 0xc017441548, 0xc002ef6f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00564bce0, 0xc017da7e20, 0x73aeec0, 0xc017441548, 0xc002ef6f00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:49584]
I0515 22:49:31.465032  109070 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (6.237164ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49582]
I0515 22:49:31.465570  109070 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:horizontal-pod-autoscaler
I0515 22:49:31.472829  109070 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0515 22:49:31.472935  109070 healthz.go:184] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0515 22:49:31.473195  109070 wrap.go:47] GET /healthz: (1.793811ms) 500
goroutine 37149 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc005f60310, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc005f60310, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc01001e500, 0x1f4)
net/http.Error(0x7f1da4b9d910, 0xc010522dc0, 0xc016d25cc0, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f1da4b9d910, 0xc010522dc0, 0xc0032dc300)
net/http.HandlerFunc.ServeHTTP(0xc012f0cec0, 0x7f1da4b9d910, 0xc010522dc0, 0xc0032dc300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0110abe40, 0x7f1da4b9d910, 0xc010522dc0, 0xc0032dc300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc017d9f340, 0x7f1da4b9d910, 0xc010522dc0, 0xc0032dc300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x438a649, 0xe, 0xc017c3ef30, 0xc017d9f340, 0x7f1da4b9d910, 0xc010522dc0, 0xc0032dc300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f1da4b9d910, 0xc010522dc0, 0xc0032dc300)
net/http.HandlerFunc.ServeHTTP(0xc017db8180, 0x7f1da4b9d910, 0xc010522dc0, 0xc0032dc300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f1da4b9d910, 0xc010522dc0, 0xc0032dc300)
net/http.HandlerFunc.ServeHTTP(0xc014951b30, 0x7f1da4b9d910, 0xc010522dc0, 0xc0032dc300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f1da4b9d910, 0xc010522dc0, 0xc0032dc300)
net/http.HandlerFunc.ServeHTTP(0xc017db81c0, 0x7f1da4b9d910, 0xc010522dc0, 0xc0032dc300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f1da4b9d910, 0xc010522dc0, 0xc0032dc200)
net/http.HandlerFunc.ServeHTTP(0xc017d878b0, 0x7f1da4b9d910, 0xc010522dc0, 0xc0032dc200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc003b0db60, 0xc017da7e20, 0x73aeec0, 0xc010522dc0, 0xc0032dc200)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49582]
I0515 22:49:31.482530  109070 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:job-controller: (1.735672ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49582]
I0515 22:49:31.501535  109070 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.758078ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49582]
I0515 22:49:31.501964  109070 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:job-controller
I0515 22:49:31.520458  109070 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:namespace-controller: (1.713506ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49582]
I0515 22:49:31.541327  109070 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.071059ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49582]
I0515 22:49:31.541833  109070 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:namespace-controller
I0515 22:49:31.560397  109070 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:node-controller: (1.673871ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49582]
I0515 22:49:31.560894  109070 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0515 22:49:31.560927  109070 healthz.go:184] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0515 22:49:31.561070  109070 wrap.go:47] GET /healthz: (1.281686ms) 500
goroutine 37187 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc005df29a0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc005df29a0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc010069e20, 0x1f4)
net/http.Error(0x7f1da4b9d910, 0xc016b0d570, 0xc003d32500, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f1da4b9d910, 0xc016b0d570, 0xc0033ff300)
net/http.HandlerFunc.ServeHTTP(0xc012f0cec0, 0x7f1da4b9d910, 0xc016b0d570, 0xc0033ff300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0110abe40, 0x7f1da4b9d910, 0xc016b0d570, 0xc0033ff300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc017d9f340, 0x7f1da4b9d910, 0xc016b0d570, 0xc0033ff300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x438a649, 0xe, 0xc017c3ef30, 0xc017d9f340, 0x7f1da4b9d910, 0xc016b0d570, 0xc0033ff300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f1da4b9d910, 0xc016b0d570, 0xc0033ff300)
net/http.HandlerFunc.ServeHTTP(0xc017db8180, 0x7f1da4b9d910, 0xc016b0d570, 0xc0033ff300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f1da4b9d910, 0xc016b0d570, 0xc0033ff300)
net/http.HandlerFunc.ServeHTTP(0xc014951b30, 0x7f1da4b9d910, 0xc016b0d570, 0xc0033ff300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f1da4b9d910, 0xc016b0d570, 0xc0033ff300)
net/http.HandlerFunc.ServeHTTP(0xc017db81c0, 0x7f1da4b9d910, 0xc016b0d570, 0xc0033ff300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f1da4b9d910, 0xc016b0d570, 0xc0033ff200)
net/http.HandlerFunc.ServeHTTP(0xc017d878b0, 0x7f1da4b9d910, 0xc016b0d570, 0xc0033ff200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc007acb3e0, 0xc017da7e20, 0x73aeec0, 0xc016b0d570, 0xc0033ff200)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:49584]
I0515 22:49:31.572641  109070 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0515 22:49:31.572694  109070 healthz.go:184] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0515 22:49:31.572941  109070 wrap.go:47] GET /healthz: (1.48401ms) 500
goroutine 37151 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc005f603f0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc005f603f0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc01001e920, 0x1f4)
net/http.Error(0x7f1da4b9d910, 0xc010522dd0, 0xc003c46000, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f1da4b9d910, 0xc010522dd0, 0xc0032dc700)
net/http.HandlerFunc.ServeHTTP(0xc012f0cec0, 0x7f1da4b9d910, 0xc010522dd0, 0xc0032dc700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0110abe40, 0x7f1da4b9d910, 0xc010522dd0, 0xc0032dc700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc017d9f340, 0x7f1da4b9d910, 0xc010522dd0, 0xc0032dc700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x438a649, 0xe, 0xc017c3ef30, 0xc017d9f340, 0x7f1da4b9d910, 0xc010522dd0, 0xc0032dc700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f1da4b9d910, 0xc010522dd0, 0xc0032dc700)
net/http.HandlerFunc.ServeHTTP(0xc017db8180, 0x7f1da4b9d910, 0xc010522dd0, 0xc0032dc700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f1da4b9d910, 0xc010522dd0, 0xc0032dc700)
net/http.HandlerFunc.ServeHTTP(0xc014951b30, 0x7f1da4b9d910, 0xc010522dd0, 0xc0032dc700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f1da4b9d910, 0xc010522dd0, 0xc0032dc700)
net/http.HandlerFunc.ServeHTTP(0xc017db81c0, 0x7f1da4b9d910, 0xc010522dd0, 0xc0032dc700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f1da4b9d910, 0xc010522dd0, 0xc0032dc600)
net/http.HandlerFunc.ServeHTTP(0xc017d878b0, 0x7f1da4b9d910, 0xc010522dd0, 0xc0032dc600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00574d5c0, 0xc017da7e20, 0x73aeec0, 0xc010522dd0, 0xc0032dc600)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49584]
I0515 22:49:31.581062  109070 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.440298ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49584]
I0515 22:49:31.581342  109070 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:node-controller
I0515 22:49:31.600780  109070 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:persistent-volume-binder: (1.979425ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49584]
I0515 22:49:31.621273  109070 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.562187ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49584]
I0515 22:49:31.621658  109070 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:persistent-volume-binder
I0515 22:49:31.640581  109070 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pod-garbage-collector: (1.78424ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49584]
I0515 22:49:31.661411  109070 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.661542ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49584]
I0515 22:49:31.661442  109070 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0515 22:49:31.661963  109070 healthz.go:184] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0515 22:49:31.662237  109070 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pod-garbage-collector
I0515 22:49:31.662265  109070 wrap.go:47] GET /healthz: (2.420762ms) 500
goroutine 37219 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00524e2a0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00524e2a0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00ffdcf00, 0x1f4)
net/http.Error(0x7f1da4b9d910, 0xc017441788, 0xc0030c6000, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f1da4b9d910, 0xc017441788, 0xc00339f500)
net/http.HandlerFunc.ServeHTTP(0xc012f0cec0, 0x7f1da4b9d910, 0xc017441788, 0xc00339f500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0110abe40, 0x7f1da4b9d910, 0xc017441788, 0xc00339f500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc017d9f340, 0x7f1da4b9d910, 0xc017441788, 0xc00339f500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x438a649, 0xe, 0xc017c3ef30, 0xc017d9f340, 0x7f1da4b9d910, 0xc017441788, 0xc00339f500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f1da4b9d910, 0xc017441788, 0xc00339f500)
net/http.HandlerFunc.ServeHTTP(0xc017db8180, 0x7f1da4b9d910, 0xc017441788, 0xc00339f500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f1da4b9d910, 0xc017441788, 0xc00339f500)
net/http.HandlerFunc.ServeHTTP(0xc014951b30, 0x7f1da4b9d910, 0xc017441788, 0xc00339f500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f1da4b9d910, 0xc017441788, 0xc00339f500)
net/http.HandlerFunc.ServeHTTP(0xc017db81c0, 0x7f1da4b9d910, 0xc017441788, 0xc00339f500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f1da4b9d910, 0xc017441788, 0xc00339f300)
net/http.HandlerFunc.ServeHTTP(0xc017d878b0, 0x7f1da4b9d910, 0xc017441788, 0xc00339f300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc005b41ce0, 0xc017da7e20, 0x73aeec0, 0xc017441788, 0xc00339f300)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:49582]
I0515 22:49:31.672705  109070 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0515 22:49:31.672740  109070 healthz.go:184] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0515 22:49:31.672914  109070 wrap.go:47] GET /healthz: (1.45409ms) 500
goroutine 37165 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc005ae8770, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc005ae8770, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00ff3abe0, 0x1f4)
net/http.Error(0x7f1da4b9d910, 0xc007179630, 0xc003d32b40, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f1da4b9d910, 0xc007179630, 0xc003fc4f00)
net/http.HandlerFunc.ServeHTTP(0xc012f0cec0, 0x7f1da4b9d910, 0xc007179630, 0xc003fc4f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0110abe40, 0x7f1da4b9d910, 0xc007179630, 0xc003fc4f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc017d9f340, 0x7f1da4b9d910, 0xc007179630, 0xc003fc4f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x438a649, 0xe, 0xc017c3ef30, 0xc017d9f340, 0x7f1da4b9d910, 0xc007179630, 0xc003fc4f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f1da4b9d910, 0xc007179630, 0xc003fc4f00)
net/http.HandlerFunc.ServeHTTP(0xc017db8180, 0x7f1da4b9d910, 0xc007179630, 0xc003fc4f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f1da4b9d910, 0xc007179630, 0xc003fc4f00)
net/http.HandlerFunc.ServeHTTP(0xc014951b30, 0x7f1da4b9d910, 0xc007179630, 0xc003fc4f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f1da4b9d910, 0xc007179630, 0xc003fc4f00)
net/http.HandlerFunc.ServeHTTP(0xc017db81c0, 0x7f1da4b9d910, 0xc007179630, 0xc003fc4f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f1da4b9d910, 0xc007179630, 0xc003fc4e00)
net/http.HandlerFunc.ServeHTTP(0xc017d878b0, 0x7f1da4b9d910, 0xc007179630, 0xc003fc4e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc008a95860, 0xc017da7e20, 0x73aeec0, 0xc007179630, 0xc003fc4e00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49582]
I0515 22:49:31.680380  109070 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:replicaset-controller: (1.660059ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49582]
I0515 22:49:31.701861  109070 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.069683ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49582]
I0515 22:49:31.702154  109070 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:replicaset-controller
I0515 22:49:31.720861  109070 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:replication-controller: (2.105212ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49582]
I0515 22:49:31.741685  109070 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.938413ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49582]
I0515 22:49:31.741951  109070 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:replication-controller
I0515 22:49:31.762581  109070 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0515 22:49:31.762623  109070 healthz.go:184] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0515 22:49:31.762793  109070 wrap.go:47] GET /healthz: (1.064376ms) 500
goroutine 37169 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc005ae8d20, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc005ae8d20, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00ff3b880, 0x1f4)
net/http.Error(0x7f1da4b9d910, 0xc007179730, 0xc0037063c0, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f1da4b9d910, 0xc007179730, 0xc003fd8800)
net/http.HandlerFunc.ServeHTTP(0xc012f0cec0, 0x7f1da4b9d910, 0xc007179730, 0xc003fd8800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0110abe40, 0x7f1da4b9d910, 0xc007179730, 0xc003fd8800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc017d9f340, 0x7f1da4b9d910, 0xc007179730, 0xc003fd8800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x438a649, 0xe, 0xc017c3ef30, 0xc017d9f340, 0x7f1da4b9d910, 0xc007179730, 0xc003fd8800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f1da4b9d910, 0xc007179730, 0xc003fd8800)
net/http.HandlerFunc.ServeHTTP(0xc017db8180, 0x7f1da4b9d910, 0xc007179730, 0xc003fd8800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f1da4b9d910, 0xc007179730, 0xc003fd8800)
net/http.HandlerFunc.ServeHTTP(0xc014951b30, 0x7f1da4b9d910, 0xc007179730, 0xc003fd8800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f1da4b9d910, 0xc007179730, 0xc003fd8800)
net/http.HandlerFunc.ServeHTTP(0xc017db81c0, 0x7f1da4b9d910, 0xc007179730, 0xc003fd8800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f1da4b9d910, 0xc007179730, 0xc003fd8700)
net/http.HandlerFunc.ServeHTTP(0xc017d878b0, 0x7f1da4b9d910, 0xc007179730, 0xc003fd8700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc008fccb40, 0xc017da7e20, 0x73aeec0, 0xc007179730, 0xc003fd8700)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:49584]
I0515 22:49:31.763731  109070 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:resourcequota-controller: (2.001648ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49582]
I0515 22:49:31.775157  109070 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0515 22:49:31.775201  109070 healthz.go:184] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0515 22:49:31.775379  109070 wrap.go:47] GET /healthz: (2.676963ms) 500
goroutine 37221 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00524e7e0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00524e7e0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00ffdd980, 0x1f4)
net/http.Error(0x7f1da4b9d910, 0xc017441808, 0xc003d33180, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f1da4b9d910, 0xc017441808, 0xc005536200)
net/http.HandlerFunc.ServeHTTP(0xc012f0cec0, 0x7f1da4b9d910, 0xc017441808, 0xc005536200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0110abe40, 0x7f1da4b9d910, 0xc017441808, 0xc005536200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc017d9f340, 0x7f1da4b9d910, 0xc017441808, 0xc005536200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x438a649, 0xe, 0xc017c3ef30, 0xc017d9f340, 0x7f1da4b9d910, 0xc017441808, 0xc005536200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f1da4b9d910, 0xc017441808, 0xc005536200)
net/http.HandlerFunc.ServeHTTP(0xc017db8180, 0x7f1da4b9d910, 0xc017441808, 0xc005536200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f1da4b9d910, 0xc017441808, 0xc005536200)
net/http.HandlerFunc.ServeHTTP(0xc014951b30, 0x7f1da4b9d910, 0xc017441808, 0xc005536200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f1da4b9d910, 0xc017441808, 0xc005536200)
net/http.HandlerFunc.ServeHTTP(0xc017db81c0, 0x7f1da4b9d910, 0xc017441808, 0xc005536200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f1da4b9d910, 0xc017441808, 0xc005536000)
net/http.HandlerFunc.ServeHTTP(0xc017d878b0, 0x7f1da4b9d910, 0xc017441808, 0xc005536000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc0097fe2a0, 0xc017da7e20, 0x73aeec0, 0xc017441808, 0xc005536000)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49582]
I0515 22:49:31.782700  109070 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.917518ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49582]
I0515 22:49:31.783011  109070 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:resourcequota-controller
I0515 22:49:31.800180  109070 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:route-controller: (1.408919ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49582]
I0515 22:49:31.820935  109070 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.169506ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49582]
I0515 22:49:31.821196  109070 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:route-controller
I0515 22:49:31.840259  109070 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:service-account-controller: (1.545647ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49582]
I0515 22:49:31.861470  109070 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.622973ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49582]
I0515 22:49:31.862098  109070 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0515 22:49:31.862461  109070 healthz.go:184] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0515 22:49:31.862684  109070 wrap.go:47] GET /healthz: (2.985043ms) 500
goroutine 37079 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc005a6b650, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc005a6b650, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc01015f920, 0x1f4)
net/http.Error(0x7f1da4b9d910, 0xc016300f38, 0xc003706b40, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f1da4b9d910, 0xc016300f38, 0xc005de9400)
net/http.HandlerFunc.ServeHTTP(0xc012f0cec0, 0x7f1da4b9d910, 0xc016300f38, 0xc005de9400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0110abe40, 0x7f1da4b9d910, 0xc016300f38, 0xc005de9400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc017d9f340, 0x7f1da4b9d910, 0xc016300f38, 0xc005de9400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x438a649, 0xe, 0xc017c3ef30, 0xc017d9f340, 0x7f1da4b9d910, 0xc016300f38, 0xc005de9400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f1da4b9d910, 0xc016300f38, 0xc005de9400)
net/http.HandlerFunc.ServeHTTP(0xc017db8180, 0x7f1da4b9d910, 0xc016300f38, 0xc005de9400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f1da4b9d910, 0xc016300f38, 0xc005de9400)
net/http.HandlerFunc.ServeHTTP(0xc014951b30, 0x7f1da4b9d910, 0xc016300f38, 0xc005de9400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f1da4b9d910, 0xc016300f38, 0xc005de9400)
net/http.HandlerFunc.ServeHTTP(0xc017db81c0, 0x7f1da4b9d910, 0xc016300f38, 0xc005de9400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f1da4b9d910, 0xc016300f38, 0xc005de8f00)
net/http.HandlerFunc.ServeHTTP(0xc017d878b0, 0x7f1da4b9d910, 0xc016300f38, 0xc005de8f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00460bc20, 0xc017da7e20, 0x73aeec0, 0xc016300f38, 0xc005de8f00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:49584]
I0515 22:49:31.863201  109070 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:service-account-controller
I0515 22:49:31.873129  109070 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0515 22:49:31.873169  109070 healthz.go:184] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0515 22:49:31.873344  109070 wrap.go:47] GET /healthz: (1.803503ms) 500
goroutine 37223 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00524ee00, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00524ee00, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00fe364e0, 0x1f4)
net/http.Error(0x7f1da4b9d910, 0xc0174418c8, 0xc00fc30a00, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f1da4b9d910, 0xc0174418c8, 0xc005537b00)
net/http.HandlerFunc.ServeHTTP(0xc012f0cec0, 0x7f1da4b9d910, 0xc0174418c8, 0xc005537b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0110abe40, 0x7f1da4b9d910, 0xc0174418c8, 0xc005537b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc017d9f340, 0x7f1da4b9d910, 0xc0174418c8, 0xc005537b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x438a649, 0xe, 0xc017c3ef30, 0xc017d9f340, 0x7f1da4b9d910, 0xc0174418c8, 0xc005537b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f1da4b9d910, 0xc0174418c8, 0xc005537b00)
net/http.HandlerFunc.ServeHTTP(0xc017db8180, 0x7f1da4b9d910, 0xc0174418c8, 0xc005537b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f1da4b9d910, 0xc0174418c8, 0xc005537b00)
net/http.HandlerFunc.ServeHTTP(0xc014951b30, 0x7f1da4b9d910, 0xc0174418c8, 0xc005537b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f1da4b9d910, 0xc0174418c8, 0xc005537b00)
net/http.HandlerFunc.ServeHTTP(0xc017db81c0, 0x7f1da4b9d910, 0xc0174418c8, 0xc005537b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f1da4b9d910, 0xc0174418c8, 0xc005537400)
net/http.HandlerFunc.ServeHTTP(0xc017d878b0, 0x7f1da4b9d910, 0xc0174418c8, 0xc005537400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc0097fef60, 0xc017da7e20, 0x73aeec0, 0xc0174418c8, 0xc005537400)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49582]
I0515 22:49:31.885221  109070 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:service-controller: (6.236955ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49582]
I0515 22:49:31.901359  109070 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.602836ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49582]
I0515 22:49:31.901690  109070 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:service-controller
I0515 22:49:31.920981  109070 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:statefulset-controller: (1.961323ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49582]
I0515 22:49:31.941185  109070 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.431872ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49582]
I0515 22:49:31.941483  109070 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:statefulset-controller
I0515 22:49:31.960195  109070 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:ttl-controller: (1.448563ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49582]
I0515 22:49:31.960809  109070 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0515 22:49:31.960838  109070 healthz.go:184] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0515 22:49:31.961007  109070 wrap.go:47] GET /healthz: (988.033µs) 500
goroutine 37211 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc005f61f10, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc005f61f10, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00fdc6b20, 0x1f4)
net/http.Error(0x7f1da4b9d910, 0xc010523118, 0xc003c46c80, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f1da4b9d910, 0xc010523118, 0xc005f63300)
net/http.HandlerFunc.ServeHTTP(0xc012f0cec0, 0x7f1da4b9d910, 0xc010523118, 0xc005f63300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0110abe40, 0x7f1da4b9d910, 0xc010523118, 0xc005f63300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc017d9f340, 0x7f1da4b9d910, 0xc010523118, 0xc005f63300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x438a649, 0xe, 0xc017c3ef30, 0xc017d9f340, 0x7f1da4b9d910, 0xc010523118, 0xc005f63300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f1da4b9d910, 0xc010523118, 0xc005f63300)
net/http.HandlerFunc.ServeHTTP(0xc017db8180, 0x7f1da4b9d910, 0xc010523118, 0xc005f63300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f1da4b9d910, 0xc010523118, 0xc005f63300)
net/http.HandlerFunc.ServeHTTP(0xc014951b30, 0x7f1da4b9d910, 0xc010523118, 0xc005f63300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f1da4b9d910, 0xc010523118, 0xc005f63300)
net/http.HandlerFunc.ServeHTTP(0xc017db81c0, 0x7f1da4b9d910, 0xc010523118, 0xc005f63300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f1da4b9d910, 0xc010523118, 0xc005f63200)
net/http.HandlerFunc.ServeHTTP(0xc017d878b0, 0x7f1da4b9d910, 0xc010523118, 0xc005f63200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00b2f4840, 0xc017da7e20, 0x73aeec0, 0xc010523118, 0xc005f63200)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:49584]
I0515 22:49:31.972523  109070 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0515 22:49:31.972562  109070 healthz.go:184] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0515 22:49:31.972728  109070 wrap.go:47] GET /healthz: (1.296152ms) 500
goroutine 37261 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc005c1cb60, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc005c1cb60, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00fe0f960, 0x1f4)
net/http.Error(0x7f1da4b9d910, 0xc0179754b0, 0xc005da5cc0, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f1da4b9d910, 0xc0179754b0, 0xc004996300)
net/http.HandlerFunc.ServeHTTP(0xc012f0cec0, 0x7f1da4b9d910, 0xc0179754b0, 0xc004996300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0110abe40, 0x7f1da4b9d910, 0xc0179754b0, 0xc004996300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc017d9f340, 0x7f1da4b9d910, 0xc0179754b0, 0xc004996300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x438a649, 0xe, 0xc017c3ef30, 0xc017d9f340, 0x7f1da4b9d910, 0xc0179754b0, 0xc004996300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f1da4b9d910, 0xc0179754b0, 0xc004996300)
net/http.HandlerFunc.ServeHTTP(0xc017db8180, 0x7f1da4b9d910, 0xc0179754b0, 0xc004996300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f1da4b9d910, 0xc0179754b0, 0xc004996300)
net/http.HandlerFunc.ServeHTTP(0xc014951b30, 0x7f1da4b9d910, 0xc0179754b0, 0xc004996300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f1da4b9d910, 0xc0179754b0, 0xc004996300)
net/http.HandlerFunc.ServeHTTP(0xc017db81c0, 0x7f1da4b9d910, 0xc0179754b0, 0xc004996300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f1da4b9d910, 0xc0179754b0, 0xc004996200)
net/http.HandlerFunc.ServeHTTP(0xc017d878b0, 0x7f1da4b9d910, 0xc0179754b0, 0xc004996200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00b5ce300, 0xc017da7e20, 0x73aeec0, 0xc0179754b0, 0xc004996200)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49584]
I0515 22:49:31.981089  109070 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.392641ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49584]
I0515 22:49:31.981391  109070 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:ttl-controller
I0515 22:49:32.000597  109070 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:certificate-controller: (1.802114ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49584]
I0515 22:49:32.022048  109070 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.208251ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49584]
I0515 22:49:32.022296  109070 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:certificate-controller
I0515 22:49:32.040380  109070 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pvc-protection-controller: (1.650835ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49584]
I0515 22:49:32.066099  109070 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (7.360262ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49584]
I0515 22:49:32.067380  109070 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0515 22:49:32.067406  109070 healthz.go:184] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0515 22:49:32.067602  109070 wrap.go:47] GET /healthz: (2.180113ms) 500
goroutine 37265 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc005c1d1f0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc005c1d1f0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00fd58960, 0x1f4)
net/http.Error(0x7f1da4b9d910, 0xc0179755b0, 0xc0030c6b40, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f1da4b9d910, 0xc0179755b0, 0xc004997500)
net/http.HandlerFunc.ServeHTTP(0xc012f0cec0, 0x7f1da4b9d910, 0xc0179755b0, 0xc004997500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0110abe40, 0x7f1da4b9d910, 0xc0179755b0, 0xc004997500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc017d9f340, 0x7f1da4b9d910, 0xc0179755b0, 0xc004997500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x438a649, 0xe, 0xc017c3ef30, 0xc017d9f340, 0x7f1da4b9d910, 0xc0179755b0, 0xc004997500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f1da4b9d910, 0xc0179755b0, 0xc004997500)
net/http.HandlerFunc.ServeHTTP(0xc017db8180, 0x7f1da4b9d910, 0xc0179755b0, 0xc004997500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f1da4b9d910, 0xc0179755b0, 0xc004997500)
net/http.HandlerFunc.ServeHTTP(0xc014951b30, 0x7f1da4b9d910, 0xc0179755b0, 0xc004997500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f1da4b9d910, 0xc0179755b0, 0xc004997500)
net/http.HandlerFunc.ServeHTTP(0xc017db81c0, 0x7f1da4b9d910, 0xc0179755b0, 0xc004997500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f1da4b9d910, 0xc0179755b0, 0xc004997400)
net/http.HandlerFunc.ServeHTTP(0xc017d878b0, 0x7f1da4b9d910, 0xc0179755b0, 0xc004997400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00b5ce960, 0xc017da7e20, 0x73aeec0, 0xc0179755b0, 0xc004997400)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:49582]
I0515 22:49:32.067706  109070 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pvc-protection-controller
I0515 22:49:32.072356  109070 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0515 22:49:32.072391  109070 healthz.go:184] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0515 22:49:32.072671  109070 wrap.go:47] GET /healthz: (1.334315ms) 500
goroutine 36896 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0057f7e30, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0057f7e30, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc010595fc0, 0x1f4)
net/http.Error(0x7f1da4b9d910, 0xc011d54c98, 0xc003206c80, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f1da4b9d910, 0xc011d54c98, 0xc005664c00)
net/http.HandlerFunc.ServeHTTP(0xc012f0cec0, 0x7f1da4b9d910, 0xc011d54c98, 0xc005664c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0110abe40, 0x7f1da4b9d910, 0xc011d54c98, 0xc005664c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc017d9f340, 0x7f1da4b9d910, 0xc011d54c98, 0xc005664c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x438a649, 0xe, 0xc017c3ef30, 0xc017d9f340, 0x7f1da4b9d910, 0xc011d54c98, 0xc005664c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f1da4b9d910, 0xc011d54c98, 0xc005664c00)
net/http.HandlerFunc.ServeHTTP(0xc017db8180, 0x7f1da4b9d910, 0xc011d54c98, 0xc005664c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f1da4b9d910, 0xc011d54c98, 0xc005664c00)
net/http.HandlerFunc.ServeHTTP(0xc014951b30, 0x7f1da4b9d910, 0xc011d54c98, 0xc005664c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f1da4b9d910, 0xc011d54c98, 0xc005664c00)
net/http.HandlerFunc.ServeHTTP(0xc017db81c0, 0x7f1da4b9d910, 0xc011d54c98, 0xc005664c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f1da4b9d910, 0xc011d54c98, 0xc005664b00)
net/http.HandlerFunc.ServeHTTP(0xc017d878b0, 0x7f1da4b9d910, 0xc011d54c98, 0xc005664b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00aa05ec0, 0xc017da7e20, 0x73aeec0, 0xc011d54c98, 0xc005664b00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49582]
I0515 22:49:32.080210  109070 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pv-protection-controller: (1.492827ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49582]
I0515 22:49:32.100919  109070 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.214478ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49582]
I0515 22:49:32.101135  109070 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pv-protection-controller
I0515 22:49:32.120242  109070 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/extension-apiserver-authentication-reader: (1.512028ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49582]
I0515 22:49:32.122385  109070 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.675987ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49582]
I0515 22:49:32.140956  109070 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.175394ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49582]
I0515 22:49:32.141231  109070 storage_rbac.go:254] created role.rbac.authorization.k8s.io/extension-apiserver-authentication-reader in kube-system
I0515 22:49:32.160280  109070 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:bootstrap-signer: (1.444499ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49582]
I0515 22:49:32.160669  109070 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0515 22:49:32.160697  109070 healthz.go:184] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0515 22:49:32.161685  109070 wrap.go:47] GET /healthz: (1.958108ms) 500
goroutine 37292 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0092d70a0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0092d70a0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00fd63c40, 0x1f4)
net/http.Error(0x7f1da4b9d910, 0xc0105233c0, 0xc00fc30f00, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f1da4b9d910, 0xc0105233c0, 0xc004c83200)
net/http.HandlerFunc.ServeHTTP(0xc012f0cec0, 0x7f1da4b9d910, 0xc0105233c0, 0xc004c83200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0110abe40, 0x7f1da4b9d910, 0xc0105233c0, 0xc004c83200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc017d9f340, 0x7f1da4b9d910, 0xc0105233c0, 0xc004c83200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x438a649, 0xe, 0xc017c3ef30, 0xc017d9f340, 0x7f1da4b9d910, 0xc0105233c0, 0xc004c83200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f1da4b9d910, 0xc0105233c0, 0xc004c83200)
net/http.HandlerFunc.ServeHTTP(0xc017db8180, 0x7f1da4b9d910, 0xc0105233c0, 0xc004c83200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f1da4b9d910, 0xc0105233c0, 0xc004c83200)
net/http.HandlerFunc.ServeHTTP(0xc014951b30, 0x7f1da4b9d910, 0xc0105233c0, 0xc004c83200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f1da4b9d910, 0xc0105233c0, 0xc004c83200)
net/http.HandlerFunc.ServeHTTP(0xc017db81c0, 0x7f1da4b9d910, 0xc0105233c0, 0xc004c83200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f1da4b9d910, 0xc0105233c0, 0xc004c83100)
net/http.HandlerFunc.ServeHTTP(0xc017d878b0, 0x7f1da4b9d910, 0xc0105233c0, 0xc004c83100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00bc8c8a0, 0xc017da7e20, 0x73aeec0, 0xc0105233c0, 0xc004c83100)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:49584]
I0515 22:49:32.162019  109070 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.232263ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49582]
I0515 22:49:32.172796  109070 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0515 22:49:32.172842  109070 healthz.go:184] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0515 22:49:32.173010  109070 wrap.go:47] GET /healthz: (1.508864ms) 500
goroutine 37301 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00935c000, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00935c000, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00c5f09c0, 0x1f4)
net/http.Error(0x7f1da4b9d910, 0xc017975700, 0xc003707400, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f1da4b9d910, 0xc017975700, 0xc004fc4900)
net/http.HandlerFunc.ServeHTTP(0xc012f0cec0, 0x7f1da4b9d910, 0xc017975700, 0xc004fc4900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0110abe40, 0x7f1da4b9d910, 0xc017975700, 0xc004fc4900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc017d9f340, 0x7f1da4b9d910, 0xc017975700, 0xc004fc4900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x438a649, 0xe, 0xc017c3ef30, 0xc017d9f340, 0x7f1da4b9d910, 0xc017975700, 0xc004fc4900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f1da4b9d910, 0xc017975700, 0xc004fc4900)
net/http.HandlerFunc.ServeHTTP(0xc017db8180, 0x7f1da4b9d910, 0xc017975700, 0xc004fc4900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f1da4b9d910, 0xc017975700, 0xc004fc4900)
net/http.HandlerFunc.ServeHTTP(0xc014951b30, 0x7f1da4b9d910, 0xc017975700, 0xc004fc4900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f1da4b9d910, 0xc017975700, 0xc004fc4900)
net/http.HandlerFunc.ServeHTTP(0xc017db81c0, 0x7f1da4b9d910, 0xc017975700, 0xc004fc4900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f1da4b9d910, 0xc017975700, 0xc004fc4600)
net/http.HandlerFunc.ServeHTTP(0xc017d878b0, 0x7f1da4b9d910, 0xc017975700, 0xc004fc4600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00b5cf1a0, 0xc017da7e20, 0x73aeec0, 0xc017975700, 0xc004fc4600)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49582]
I0515 22:49:32.181091  109070 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.414877ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49582]
I0515 22:49:32.181346  109070 storage_rbac.go:254] created role.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-system
I0515 22:49:32.200282  109070 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:cloud-provider: (1.484043ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49582]
I0515 22:49:32.202518  109070 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.512425ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49582]
I0515 22:49:32.221030  109070 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.207987ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49582]
I0515 22:49:32.221265  109070 storage_rbac.go:254] created role.rbac.authorization.k8s.io/system:controller:cloud-provider in kube-system
I0515 22:49:32.240633  109070 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:token-cleaner: (1.846125ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49582]
I0515 22:49:32.242919  109070 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.810665ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49582]
I0515 22:49:32.260843  109070 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0515 22:49:32.260880  109070 healthz.go:184] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0515 22:49:32.261042  109070 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.320591ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49582]
I0515 22:49:32.261048  109070 wrap.go:47] GET /healthz: (1.266255ms) 500
goroutine 37320 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00934aaf0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00934aaf0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00c932320, 0x1f4)
net/http.Error(0x7f1da4b9d910, 0xc011d54ee0, 0xc01230b900, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f1da4b9d910, 0xc011d54ee0, 0xc0011f9600)
net/http.HandlerFunc.ServeHTTP(0xc012f0cec0, 0x7f1da4b9d910, 0xc011d54ee0, 0xc0011f9600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0110abe40, 0x7f1da4b9d910, 0xc011d54ee0, 0xc0011f9600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc017d9f340, 0x7f1da4b9d910, 0xc011d54ee0, 0xc0011f9600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x438a649, 0xe, 0xc017c3ef30, 0xc017d9f340, 0x7f1da4b9d910, 0xc011d54ee0, 0xc0011f9600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f1da4b9d910, 0xc011d54ee0, 0xc0011f9600)
net/http.HandlerFunc.ServeHTTP(0xc017db8180, 0x7f1da4b9d910, 0xc011d54ee0, 0xc0011f9600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f1da4b9d910, 0xc011d54ee0, 0xc0011f9600)
net/http.HandlerFunc.ServeHTTP(0xc014951b30, 0x7f1da4b9d910, 0xc011d54ee0, 0xc0011f9600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f1da4b9d910, 0xc011d54ee0, 0xc0011f9600)
net/http.HandlerFunc.ServeHTTP(0xc017db81c0, 0x7f1da4b9d910, 0xc011d54ee0, 0xc0011f9600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f1da4b9d910, 0xc011d54ee0, 0xc0011f8f00)
net/http.HandlerFunc.ServeHTTP(0xc017d878b0, 0x7f1da4b9d910, 0xc011d54ee0, 0xc0011f8f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00bc8aea0, 0xc017da7e20, 0x73aeec0, 0xc011d54ee0, 0xc0011f8f00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:49584]
I0515 22:49:32.261261  109070 storage_rbac.go:254] created role.rbac.authorization.k8s.io/system:controller:token-cleaner in kube-system
I0515 22:49:32.272854  109070 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0515 22:49:32.273025  109070 healthz.go:184] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0515 22:49:32.273268  109070 wrap.go:47] GET /healthz: (1.763745ms) 500
goroutine 37268 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0054fa540, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0054fa540, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00fd1c0e0, 0x1f4)
net/http.Error(0x7f1da4b9d910, 0xc016b0da70, 0xc00fc31400, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f1da4b9d910, 0xc016b0da70, 0xc0055eb200)
net/http.HandlerFunc.ServeHTTP(0xc012f0cec0, 0x7f1da4b9d910, 0xc016b0da70, 0xc0055eb200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0110abe40, 0x7f1da4b9d910, 0xc016b0da70, 0xc0055eb200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc017d9f340, 0x7f1da4b9d910, 0xc016b0da70, 0xc0055eb200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x438a649, 0xe, 0xc017c3ef30, 0xc017d9f340, 0x7f1da4b9d910, 0xc016b0da70, 0xc0055eb200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f1da4b9d910, 0xc016b0da70, 0xc0055eb200)
net/http.HandlerFunc.ServeHTTP(0xc017db8180, 0x7f1da4b9d910, 0xc016b0da70, 0xc0055eb200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f1da4b9d910, 0xc016b0da70, 0xc0055eb200)
net/http.HandlerFunc.ServeHTTP(0xc014951b30, 0x7f1da4b9d910, 0xc016b0da70, 0xc0055eb200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f1da4b9d910, 0xc016b0da70, 0xc0055eb200)
net/http.HandlerFunc.ServeHTTP(0xc017db81c0, 0x7f1da4b9d910, 0xc016b0da70, 0xc0055eb200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f1da4b9d910, 0xc016b0da70, 0xc0055eb100)
net/http.HandlerFunc.ServeHTTP(0xc017d878b0, 0x7f1da4b9d910, 0xc016b0da70, 0xc0055eb100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00bb2a3c0, 0xc017da7e20, 0x73aeec0, 0xc016b0da70, 0xc0055eb100)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49584]
I0515 22:49:32.280311  109070 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system::leader-locking-kube-controller-manager: (1.60144ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49584]
I0515 22:49:32.286125  109070 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.879936ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49584]
I0515 22:49:32.301706  109070 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.92091ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49584]
I0515 22:49:32.301954  109070 storage_rbac.go:254] created role.rbac.authorization.k8s.io/system::leader-locking-kube-controller-manager in kube-system
I0515 22:49:32.321274  109070 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system::leader-locking-kube-scheduler: (2.472875ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49584]
I0515 22:49:32.323620  109070 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.424703ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49584]
I0515 22:49:32.341807  109070 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (3.050027ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49584]
I0515 22:49:32.342071  109070 storage_rbac.go:254] created role.rbac.authorization.k8s.io/system::leader-locking-kube-scheduler in kube-system
I0515 22:49:32.361165  109070 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0515 22:49:32.361203  109070 healthz.go:184] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0515 22:49:32.361289  109070 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/roles/system:controller:bootstrap-signer: (2.568962ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49584]
I0515 22:49:32.361393  109070 wrap.go:47] GET /healthz: (1.269933ms) 500
goroutine 37326 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00934b880, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00934b880, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00c933e00, 0x1f4)
net/http.Error(0x7f1da4b9d910, 0xc011d55010, 0xc00fc31900, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f1da4b9d910, 0xc011d55010, 0xc0094e3a00)
net/http.HandlerFunc.ServeHTTP(0xc012f0cec0, 0x7f1da4b9d910, 0xc011d55010, 0xc0094e3a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0110abe40, 0x7f1da4b9d910, 0xc011d55010, 0xc0094e3a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc017d9f340, 0x7f1da4b9d910, 0xc011d55010, 0xc0094e3a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x438a649, 0xe, 0xc017c3ef30, 0xc017d9f340, 0x7f1da4b9d910, 0xc011d55010, 0xc0094e3a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f1da4b9d910, 0xc011d55010, 0xc0094e3a00)
net/http.HandlerFunc.ServeHTTP(0xc017db8180, 0x7f1da4b9d910, 0xc011d55010, 0xc0094e3a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f1da4b9d910, 0xc011d55010, 0xc0094e3a00)
net/http.HandlerFunc.ServeHTTP(0xc014951b30, 0x7f1da4b9d910, 0xc011d55010, 0xc0094e3a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f1da4b9d910, 0xc011d55010, 0xc0094e3a00)
net/http.HandlerFunc.ServeHTTP(0xc017db81c0, 0x7f1da4b9d910, 0xc011d55010, 0xc0094e3a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f1da4b9d910, 0xc011d55010, 0xc0094e3900)
net/http.HandlerFunc.ServeHTTP(0xc017d878b0, 0x7f1da4b9d910, 0xc011d55010, 0xc0094e3900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00bc8bec0, 0xc017da7e20, 0x73aeec0, 0xc011d55010, 0xc0094e3900)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:49582]
I0515 22:49:32.363483  109070 wrap.go:47] GET /api/v1/namespaces/kube-public: (1.296523ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49582]
I0515 22:49:32.373128  109070 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0515 22:49:32.373163  109070 healthz.go:184] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0515 22:49:32.373340  109070 wrap.go:47] GET /healthz: (1.436315ms) 500
goroutine 37240 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc005ae9570, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc005ae9570, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00fea8dc0, 0x1f4)
net/http.Error(0x7f1da4b9d910, 0xc007179860, 0xc003207400, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f1da4b9d910, 0xc007179860, 0xc0097d4300)
net/http.HandlerFunc.ServeHTTP(0xc012f0cec0, 0x7f1da4b9d910, 0xc007179860, 0xc0097d4300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0110abe40, 0x7f1da4b9d910, 0xc007179860, 0xc0097d4300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc017d9f340, 0x7f1da4b9d910, 0xc007179860, 0xc0097d4300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x438a649, 0xe, 0xc017c3ef30, 0xc017d9f340, 0x7f1da4b9d910, 0xc007179860, 0xc0097d4300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f1da4b9d910, 0xc007179860, 0xc0097d4300)
net/http.HandlerFunc.ServeHTTP(0xc017db8180, 0x7f1da4b9d910, 0xc007179860, 0xc0097d4300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f1da4b9d910, 0xc007179860, 0xc0097d4300)
net/http.HandlerFunc.ServeHTTP(0xc014951b30, 0x7f1da4b9d910, 0xc007179860, 0xc0097d4300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f1da4b9d910, 0xc007179860, 0xc0097d4300)
net/http.HandlerFunc.ServeHTTP(0xc017db81c0, 0x7f1da4b9d910, 0xc007179860, 0xc0097d4300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f1da4b9d910, 0xc007179860, 0xc0097d4100)
net/http.HandlerFunc.ServeHTTP(0xc017d878b0, 0x7f1da4b9d910, 0xc007179860, 0xc0097d4100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00c09a060, 0xc017da7e20, 0x73aeec0, 0xc007179860, 0xc0097d4100)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49582]
I0515 22:49:32.381274  109070 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/roles: (2.578828ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49582]
I0515 22:49:32.381628  109070 storage_rbac.go:254] created role.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-public
I0515 22:49:32.400627  109070 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::extension-apiserver-authentication-reader: (1.719469ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49582]
I0515 22:49:32.402585  109070 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.317671ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49582]
I0515 22:49:32.421572  109070 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.71595ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49582]
I0515 22:49:32.422174  109070 storage_rbac.go:284] created rolebinding.rbac.authorization.k8s.io/system::extension-apiserver-authentication-reader in kube-system
I0515 22:49:32.440647  109070 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::leader-locking-kube-controller-manager: (1.73669ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49582]
I0515 22:49:32.443207  109070 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.719994ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49582]
I0515 22:49:32.461060  109070 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.348347ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49582]
I0515 22:49:32.461342  109070 storage_rbac.go:284] created rolebinding.rbac.authorization.k8s.io/system::leader-locking-kube-controller-manager in kube-system
I0515 22:49:32.461535  109070 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0515 22:49:32.461555  109070 healthz.go:184] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0515 22:49:32.461737  109070 wrap.go:47] GET /healthz: (1.614462ms) 500
goroutine 37362 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc009368700, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc009368700, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00fce79c0, 0x1f4)
net/http.Error(0x7f1da4b9d910, 0xc0163010f8, 0xc003d33680, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f1da4b9d910, 0xc0163010f8, 0xc0080a0500)
net/http.HandlerFunc.ServeHTTP(0xc012f0cec0, 0x7f1da4b9d910, 0xc0163010f8, 0xc0080a0500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0110abe40, 0x7f1da4b9d910, 0xc0163010f8, 0xc0080a0500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc017d9f340, 0x7f1da4b9d910, 0xc0163010f8, 0xc0080a0500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x438a649, 0xe, 0xc017c3ef30, 0xc017d9f340, 0x7f1da4b9d910, 0xc0163010f8, 0xc0080a0500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f1da4b9d910, 0xc0163010f8, 0xc0080a0500)
net/http.HandlerFunc.ServeHTTP(0xc017db8180, 0x7f1da4b9d910, 0xc0163010f8, 0xc0080a0500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f1da4b9d910, 0xc0163010f8, 0xc0080a0500)
net/http.HandlerFunc.ServeHTTP(0xc014951b30, 0x7f1da4b9d910, 0xc0163010f8, 0xc0080a0500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f1da4b9d910, 0xc0163010f8, 0xc0080a0500)
net/http.HandlerFunc.ServeHTTP(0xc017db81c0, 0x7f1da4b9d910, 0xc0163010f8, 0xc0080a0500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f1da4b9d910, 0xc0163010f8, 0xc0080a0400)
net/http.HandlerFunc.ServeHTTP(0xc017d878b0, 0x7f1da4b9d910, 0xc0163010f8, 0xc0080a0400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00a171020, 0xc017da7e20, 0x73aeec0, 0xc0163010f8, 0xc0080a0400)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:49584]
I0515 22:49:32.472712  109070 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0515 22:49:32.472751  109070 healthz.go:184] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0515 22:49:32.472906  109070 wrap.go:47] GET /healthz: (1.411946ms) 500
goroutine 37364 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0093687e0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0093687e0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00fce7ba0, 0x1f4)
net/http.Error(0x7f1da4b9d910, 0xc016301108, 0xc01230be00, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f1da4b9d910, 0xc016301108, 0xc0080a0900)
net/http.HandlerFunc.ServeHTTP(0xc012f0cec0, 0x7f1da4b9d910, 0xc016301108, 0xc0080a0900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0110abe40, 0x7f1da4b9d910, 0xc016301108, 0xc0080a0900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc017d9f340, 0x7f1da4b9d910, 0xc016301108, 0xc0080a0900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x438a649, 0xe, 0xc017c3ef30, 0xc017d9f340, 0x7f1da4b9d910, 0xc016301108, 0xc0080a0900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f1da4b9d910, 0xc016301108, 0xc0080a0900)
net/http.HandlerFunc.ServeHTTP(0xc017db8180, 0x7f1da4b9d910, 0xc016301108, 0xc0080a0900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f1da4b9d910, 0xc016301108, 0xc0080a0900)
net/http.HandlerFunc.ServeHTTP(0xc014951b30, 0x7f1da4b9d910, 0xc016301108, 0xc0080a0900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f1da4b9d910, 0xc016301108, 0xc0080a0900)
net/http.HandlerFunc.ServeHTTP(0xc017db81c0, 0x7f1da4b9d910, 0xc016301108, 0xc0080a0900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f1da4b9d910, 0xc016301108, 0xc0080a0800)
net/http.HandlerFunc.ServeHTTP(0xc017d878b0, 0x7f1da4b9d910, 0xc016301108, 0xc0080a0800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00a171320, 0xc017da7e20, 0x73aeec0, 0xc016301108, 0xc0080a0800)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49584]
I0515 22:49:32.480271  109070 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::leader-locking-kube-scheduler: (1.504012ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49584]
I0515 22:49:32.489145  109070 wrap.go:47] GET /api/v1/namespaces/kube-system: (8.322784ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49584]
I0515 22:49:32.501843  109070 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.729352ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49584]
I0515 22:49:32.502106  109070 storage_rbac.go:284] created rolebinding.rbac.authorization.k8s.io/system::leader-locking-kube-scheduler in kube-system
I0515 22:49:32.520474  109070 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:bootstrap-signer: (1.475111ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49584]
I0515 22:49:32.522698  109070 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.559089ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49584]
I0515 22:49:32.543410  109070 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (4.647797ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49584]
I0515 22:49:32.543925  109070 storage_rbac.go:284] created rolebinding.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-system
I0515 22:49:32.560208  109070 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:cloud-provider: (1.430613ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49584]
I0515 22:49:32.560989  109070 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0515 22:49:32.561025  109070 healthz.go:184] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0515 22:49:32.561193  109070 wrap.go:47] GET /healthz: (1.027624ms) 500
goroutine 37383 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0093790a0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0093790a0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00c5ef760, 0x1f4)
net/http.Error(0x7f1da4b9d910, 0xc017441db0, 0xc003185040, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f1da4b9d910, 0xc017441db0, 0xc00aa41700)
net/http.HandlerFunc.ServeHTTP(0xc012f0cec0, 0x7f1da4b9d910, 0xc017441db0, 0xc00aa41700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0110abe40, 0x7f1da4b9d910, 0xc017441db0, 0xc00aa41700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc017d9f340, 0x7f1da4b9d910, 0xc017441db0, 0xc00aa41700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x438a649, 0xe, 0xc017c3ef30, 0xc017d9f340, 0x7f1da4b9d910, 0xc017441db0, 0xc00aa41700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f1da4b9d910, 0xc017441db0, 0xc00aa41700)
net/http.HandlerFunc.ServeHTTP(0xc017db8180, 0x7f1da4b9d910, 0xc017441db0, 0xc00aa41700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f1da4b9d910, 0xc017441db0, 0xc00aa41700)
net/http.HandlerFunc.ServeHTTP(0xc014951b30, 0x7f1da4b9d910, 0xc017441db0, 0xc00aa41700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f1da4b9d910, 0xc017441db0, 0xc00aa41700)
net/http.HandlerFunc.ServeHTTP(0xc017db81c0, 0x7f1da4b9d910, 0xc017441db0, 0xc00aa41700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f1da4b9d910, 0xc017441db0, 0xc00aa41600)
net/http.HandlerFunc.ServeHTTP(0xc017d878b0, 0x7f1da4b9d910, 0xc017441db0, 0xc00aa41600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc0099b8540, 0xc017da7e20, 0x73aeec0, 0xc017441db0, 0xc00aa41600)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:49582]
I0515 22:49:32.562227  109070 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.353806ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49584]
I0515 22:49:32.572744  109070 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0515 22:49:32.572778  109070 healthz.go:184] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0515 22:49:32.572947  109070 wrap.go:47] GET /healthz: (1.417482ms) 500
goroutine 37279 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0054fbc70, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0054fbc70, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00c12af80, 0x1f4)
net/http.Error(0x7f1da4b9d910, 0xc00fbf0160, 0xc003185680, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f1da4b9d910, 0xc00fbf0160, 0xc002670400)
net/http.HandlerFunc.ServeHTTP(0xc012f0cec0, 0x7f1da4b9d910, 0xc00fbf0160, 0xc002670400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0110abe40, 0x7f1da4b9d910, 0xc00fbf0160, 0xc002670400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc017d9f340, 0x7f1da4b9d910, 0xc00fbf0160, 0xc002670400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x438a649, 0xe, 0xc017c3ef30, 0xc017d9f340, 0x7f1da4b9d910, 0xc00fbf0160, 0xc002670400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f1da4b9d910, 0xc00fbf0160, 0xc002670400)
net/http.HandlerFunc.ServeHTTP(0xc017db8180, 0x7f1da4b9d910, 0xc00fbf0160, 0xc002670400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f1da4b9d910, 0xc00fbf0160, 0xc002670400)
net/http.HandlerFunc.ServeHTTP(0xc014951b30, 0x7f1da4b9d910, 0xc00fbf0160, 0xc002670400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f1da4b9d910, 0xc00fbf0160, 0xc002670400)
net/http.HandlerFunc.ServeHTTP(0xc017db81c0, 0x7f1da4b9d910, 0xc00fbf0160, 0xc002670400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f1da4b9d910, 0xc00fbf0160, 0xc002670300)
net/http.HandlerFunc.ServeHTTP(0xc017d878b0, 0x7f1da4b9d910, 0xc00fbf0160, 0xc002670300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00bb2b680, 0xc017da7e20, 0x73aeec0, 0xc00fbf0160, 0xc002670300)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49584]
I0515 22:49:32.581255  109070 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.501589ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49584]
I0515 22:49:32.581613  109070 storage_rbac.go:284] created rolebinding.rbac.authorization.k8s.io/system:controller:cloud-provider in kube-system
I0515 22:49:32.600276  109070 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:token-cleaner: (1.543769ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49584]
I0515 22:49:32.602235  109070 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.425157ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49584]
I0515 22:49:32.621076  109070 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.294619ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49584]
I0515 22:49:32.621390  109070 storage_rbac.go:284] created rolebinding.rbac.authorization.k8s.io/system:controller:token-cleaner in kube-system
I0515 22:49:32.640302  109070 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/rolebindings/system:controller:bootstrap-signer: (1.49823ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49584]
I0515 22:49:32.642208  109070 wrap.go:47] GET /api/v1/namespaces/kube-public: (1.331624ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49584]
I0515 22:49:32.661615  109070 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0515 22:49:32.661653  109070 healthz.go:184] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0515 22:49:32.661689  109070 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/rolebindings: (2.941616ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49584]
I0515 22:49:32.661927  109070 wrap.go:47] GET /healthz: (2.181107ms) 500
goroutine 37392 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc009379b20, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc009379b20, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00c18cea0, 0x1f4)
net/http.Error(0x7f1da4b9d910, 0xc017441eb8, 0xc003185b80, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f1da4b9d910, 0xc017441eb8, 0xc00a95e800)
net/http.HandlerFunc.ServeHTTP(0xc012f0cec0, 0x7f1da4b9d910, 0xc017441eb8, 0xc00a95e800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0110abe40, 0x7f1da4b9d910, 0xc017441eb8, 0xc00a95e800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc017d9f340, 0x7f1da4b9d910, 0xc017441eb8, 0xc00a95e800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x438a649, 0xe, 0xc017c3ef30, 0xc017d9f340, 0x7f1da4b9d910, 0xc017441eb8, 0xc00a95e800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f1da4b9d910, 0xc017441eb8, 0xc00a95e800)
net/http.HandlerFunc.ServeHTTP(0xc017db8180, 0x7f1da4b9d910, 0xc017441eb8, 0xc00a95e800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f1da4b9d910, 0xc017441eb8, 0xc00a95e800)
net/http.HandlerFunc.ServeHTTP(0xc014951b30, 0x7f1da4b9d910, 0xc017441eb8, 0xc00a95e800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f1da4b9d910, 0xc017441eb8, 0xc00a95e800)
net/http.HandlerFunc.ServeHTTP(0xc017db81c0, 0x7f1da4b9d910, 0xc017441eb8, 0xc00a95e800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f1da4b9d910, 0xc017441eb8, 0xc00a95e500)
net/http.HandlerFunc.ServeHTTP(0xc017d878b0, 0x7f1da4b9d910, 0xc017441eb8, 0xc00a95e500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc0099b9320, 0xc017da7e20, 0x73aeec0, 0xc017441eb8, 0xc00a95e500)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:49582]
I0515 22:49:32.662283  109070 storage_rbac.go:284] created rolebinding.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-public
I0515 22:49:32.672770  109070 wrap.go:47] GET /healthz: (1.323263ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49584]
I0515 22:49:32.674411  109070 wrap.go:47] GET /api/v1/namespaces/default: (1.154939ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49584]
I0515 22:49:32.676952  109070 wrap.go:47] POST /api/v1/namespaces: (2.042756ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49584]
I0515 22:49:32.678597  109070 wrap.go:47] GET /api/v1/namespaces/default/services/kubernetes: (1.233375ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49584]
I0515 22:49:32.682808  109070 wrap.go:47] POST /api/v1/namespaces/default/services: (3.736047ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49584]
I0515 22:49:32.684476  109070 wrap.go:47] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.214166ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49584]
I0515 22:49:32.687033  109070 wrap.go:47] POST /api/v1/namespaces/default/endpoints: (2.010063ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49584]
I0515 22:49:32.761126  109070 wrap.go:47] GET /healthz: (1.245244ms) 200 [Go-http-client/1.1 127.0.0.1:49584]
W0515 22:49:32.762054  109070 mutation_detector.go:48] Mutation detector is enabled, this will result in memory leakage.
W0515 22:49:32.762108  109070 mutation_detector.go:48] Mutation detector is enabled, this will result in memory leakage.
W0515 22:49:32.762140  109070 mutation_detector.go:48] Mutation detector is enabled, this will result in memory leakage.
W0515 22:49:32.762160  109070 mutation_detector.go:48] Mutation detector is enabled, this will result in memory leakage.
W0515 22:49:32.762172  109070 mutation_detector.go:48] Mutation detector is enabled, this will result in memory leakage.
W0515 22:49:32.762184  109070 mutation_detector.go:48] Mutation detector is enabled, this will result in memory leakage.
W0515 22:49:32.762196  109070 mutation_detector.go:48] Mutation detector is enabled, this will result in memory leakage.
W0515 22:49:32.762213  109070 mutation_detector.go:48] Mutation detector is enabled, this will result in memory leakage.
W0515 22:49:32.762223  109070 mutation_detector.go:48] Mutation detector is enabled, this will result in memory leakage.
W0515 22:49:32.762237  109070 mutation_detector.go:48] Mutation detector is enabled, this will result in memory leakage.
I0515 22:49:32.762307  109070 factory.go:337] Creating scheduler from algorithm provider 'DefaultProvider'
I0515 22:49:32.762322  109070 factory.go:418] Creating scheduler with fit predicates 'map[CheckNodeCondition:{} CheckNodeDiskPressure:{} CheckNodeMemoryPressure:{} CheckNodePIDPressure:{} CheckVolumeBinding:{} GeneralPredicates:{} MatchInterPodAffinity:{} MaxAzureDiskVolumeCount:{} MaxCSIVolumeCountPred:{} MaxEBSVolumeCount:{} MaxGCEPDVolumeCount:{} NoDiskConflict:{} NoVolumeZoneConflict:{} PodToleratesNodeTaints:{}]' and priority functions 'map[BalancedResourceAllocation:{} ImageLocalityPriority:{} InterPodAffinityPriority:{} LeastRequestedPriority:{} NodeAffinityPriority:{} NodePreferAvoidPodsPriority:{} SelectorSpreadPriority:{} TaintTolerationPriority:{}]'
I0515 22:49:32.762580  109070 controller_utils.go:1029] Waiting for caches to sync for scheduler controller
I0515 22:49:32.762823  109070 reflector.go:122] Starting reflector *v1.Pod (12h0m0s) from k8s.io/kubernetes/test/integration/scheduler/util.go:209
I0515 22:49:32.762840  109070 reflector.go:160] Listing and watching *v1.Pod from k8s.io/kubernetes/test/integration/scheduler/util.go:209
I0515 22:49:32.764085  109070 wrap.go:47] GET /api/v1/pods?fieldSelector=status.phase%21%3DFailed%2Cstatus.phase%21%3DSucceeded&limit=500&resourceVersion=0: (679.934µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49584]
I0515 22:49:32.765058  109070 get.go:250] Starting watch for /api/v1/pods, rv=25244 labels= fields=status.phase!=Failed,status.phase!=Succeeded timeout=9m39s
I0515 22:49:32.862821  109070 shared_informer.go:176] caches populated
I0515 22:49:32.862875  109070 controller_utils.go:1036] Caches are synced for scheduler controller
I0515 22:49:32.863269  109070 reflector.go:122] Starting reflector *v1.Node (1s) from k8s.io/client-go/informers/factory.go:133
I0515 22:49:32.863294  109070 reflector.go:160] Listing and watching *v1.Node from k8s.io/client-go/informers/factory.go:133
I0515 22:49:32.863368  109070 reflector.go:122] Starting reflector *v1.ReplicationController (1s) from k8s.io/client-go/informers/factory.go:133
I0515 22:49:32.863439  109070 reflector.go:160] Listing and watching *v1.ReplicationController from k8s.io/client-go/informers/factory.go:133
I0515 22:49:32.863691  109070 reflector.go:122] Starting reflector *v1.StatefulSet (1s) from k8s.io/client-go/informers/factory.go:133
I0515 22:49:32.863714  109070 reflector.go:160] Listing and watching *v1.StatefulSet from k8s.io/client-go/informers/factory.go:133
I0515 22:49:32.863985  109070 reflector.go:122] Starting reflector *v1.PersistentVolumeClaim (1s) from k8s.io/client-go/informers/factory.go:133
I0515 22:49:32.864002  109070 reflector.go:160] Listing and watching *v1.PersistentVolumeClaim from k8s.io/client-go/informers/factory.go:133
I0515 22:49:32.864153  109070 reflector.go:122] Starting reflector *v1.StorageClass (1s) from k8s.io/client-go/informers/factory.go:133
I0515 22:49:32.864172  109070 reflector.go:160] Listing and watching *v1.StorageClass from k8s.io/client-go/informers/factory.go:133
I0515 22:49:32.864400  109070 wrap.go:47] GET /api/v1/nodes?limit=500&resourceVersion=0: (741.834µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49582]
I0515 22:49:32.864427  109070 reflector.go:122] Starting reflector *v1.Service (1s) from k8s.io/client-go/informers/factory.go:133
I0515 22:49:32.864462  109070 reflector.go:160] Listing and watching *v1.Service from k8s.io/client-go/informers/factory.go:133
I0515 22:49:32.864675  109070 reflector.go:122] Starting reflector *v1.PersistentVolume (1s) from k8s.io/client-go/informers/factory.go:133
I0515 22:49:32.864695  109070 reflector.go:160] Listing and watching *v1.PersistentVolume from k8s.io/client-go/informers/factory.go:133
I0515 22:49:32.865188  109070 reflector.go:122] Starting reflector *v1.ReplicaSet (1s) from k8s.io/client-go/informers/factory.go:133
I0515 22:49:32.865208  109070 reflector.go:160] Listing and watching *v1.ReplicaSet from k8s.io/client-go/informers/factory.go:133
I0515 22:49:32.865627  109070 wrap.go:47] GET /apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0: (377.445µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49602]
I0515 22:49:32.865690  109070 wrap.go:47] GET /api/v1/persistentvolumeclaims?limit=500&resourceVersion=0: (551.613µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49600]
I0515 22:49:32.866155  109070 wrap.go:47] GET /api/v1/services?limit=500&resourceVersion=0: (427.029µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49604]
I0515 22:49:32.866178  109070 wrap.go:47] GET /api/v1/replicationcontrollers?limit=500&resourceVersion=0: (377.298µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49596]
I0515 22:49:32.866190  109070 wrap.go:47] GET /api/v1/persistentvolumes?limit=500&resourceVersion=0: (1.12016ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49606]
I0515 22:49:32.866934  109070 reflector.go:122] Starting reflector *v1beta1.PodDisruptionBudget (1s) from k8s.io/client-go/informers/factory.go:133
I0515 22:49:32.866957  109070 reflector.go:160] Listing and watching *v1beta1.PodDisruptionBudget from k8s.io/client-go/informers/factory.go:133
I0515 22:49:32.867072  109070 wrap.go:47] GET /apis/apps/v1/statefulsets?limit=500&resourceVersion=0: (788.655µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49598]
I0515 22:49:32.867412  109070 get.go:250] Starting watch for /api/v1/persistentvolumeclaims, rv=25244 labels= fields= timeout=9m54s
I0515 22:49:32.867562  109070 get.go:250] Starting watch for /apis/storage.k8s.io/v1/storageclasses, rv=25251 labels= fields= timeout=5m45s
I0515 22:49:32.867802  109070 get.go:250] Starting watch for /api/v1/replicationcontrollers, rv=25245 labels= fields= timeout=6m38s
I0515 22:49:32.867991  109070 wrap.go:47] GET /apis/apps/v1/replicasets?limit=500&resourceVersion=0: (342.028µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49606]
I0515 22:49:32.868047  109070 get.go:250] Starting watch for /apis/apps/v1/statefulsets, rv=25251 labels= fields= timeout=7m59s
I0515 22:49:32.868546  109070 get.go:250] Starting watch for /api/v1/persistentvolumes, rv=25244 labels= fields= timeout=6m35s
I0515 22:49:32.868621  109070 wrap.go:47] GET /apis/policy/v1beta1/poddisruptionbudgets?limit=500&resourceVersion=0: (539.918µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49596]
I0515 22:49:32.868743  109070 get.go:250] Starting watch for /apis/apps/v1/replicasets, rv=25251 labels= fields= timeout=6m35s
I0515 22:49:32.869665  109070 get.go:250] Starting watch for /apis/policy/v1beta1/poddisruptionbudgets, rv=25250 labels= fields= timeout=8m56s
I0515 22:49:32.869920  109070 get.go:250] Starting watch for /api/v1/services, rv=25422 labels= fields= timeout=5m16s
I0515 22:49:32.870828  109070 get.go:250] Starting watch for /api/v1/nodes, rv=25244 labels= fields= timeout=8m37s
E0515 22:49:32.880156  109070 factory.go:695] Error getting pod permit-plugin0291bcb5-247b-44d4-89de-592e829e6d02/signalling-pod for retry: Get http://127.0.0.1:38489/api/v1/namespaces/permit-plugin0291bcb5-247b-44d4-89de-592e829e6d02/pods/signalling-pod: dial tcp 127.0.0.1:38489: connect: connection refused; retrying...
I0515 22:49:32.963236  109070 shared_informer.go:176] caches populated
I0515 22:49:33.063529  109070 shared_informer.go:176] caches populated
I0515 22:49:33.163791  109070 shared_informer.go:176] caches populated
I0515 22:49:33.264083  109070 shared_informer.go:176] caches populated
I0515 22:49:33.364595  109070 shared_informer.go:176] caches populated
I0515 22:49:33.464817  109070 shared_informer.go:176] caches populated
E0515 22:49:33.562019  109070 event.go:249] Unable to write event: 'Post http://127.0.0.1:38489/api/v1/namespaces/permit-plugin0291bcb5-247b-44d4-89de-592e829e6d02/events: dial tcp 127.0.0.1:38489: connect: connection refused' (may retry after sleeping)
I0515 22:49:33.565833  109070 shared_informer.go:176] caches populated
I0515 22:49:33.666126  109070 shared_informer.go:176] caches populated
I0515 22:49:33.766341  109070 shared_informer.go:176] caches populated
I0515 22:49:33.866571  109070 shared_informer.go:176] caches populated
I0515 22:49:33.867362  109070 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0515 22:49:33.867406  109070 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0515 22:49:33.868340  109070 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0515 22:49:33.869482  109070 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0515 22:49:33.869789  109070 wrap.go:47] POST /api/v1/nodes: (2.415047ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49764]
I0515 22:49:33.870534  109070 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0515 22:49:33.872522  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods: (2.151567ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49764]
I0515 22:49:33.873115  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/rpod-0
I0515 22:49:33.873137  109070 scheduler.go:452] Attempting to schedule pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/rpod-0
I0515 22:49:33.873308  109070 scheduler_binder.go:256] AssumePodVolumes for pod "preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/rpod-0", node "node1"
I0515 22:49:33.873327  109070 scheduler_binder.go:266] AssumePodVolumes for pod "preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/rpod-0", node "node1": all PVCs bound and nothing to do
I0515 22:49:33.873369  109070 factory.go:711] Attempting to bind rpod-0 to node1
I0515 22:49:33.875030  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods: (1.859096ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49764]
I0515 22:49:33.875642  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/rpod-1
I0515 22:49:33.875667  109070 scheduler.go:452] Attempting to schedule pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/rpod-1
I0515 22:49:33.875773  109070 scheduler_binder.go:256] AssumePodVolumes for pod "preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/rpod-1", node "node1"
I0515 22:49:33.875792  109070 scheduler_binder.go:266] AssumePodVolumes for pod "preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/rpod-1", node "node1": all PVCs bound and nothing to do
I0515 22:49:33.875838  109070 factory.go:711] Attempting to bind rpod-1 to node1
I0515 22:49:33.878999  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/rpod-0/binding: (5.083803ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49766]
I0515 22:49:33.878999  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/rpod-1/binding: (2.906409ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49764]
I0515 22:49:33.879267  109070 scheduler.go:589] pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/rpod-0 is bound successfully on node node1, 1 nodes evaluated, 1 nodes were found feasible
I0515 22:49:33.879328  109070 scheduler.go:589] pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/rpod-1 is bound successfully on node node1, 1 nodes evaluated, 1 nodes were found feasible
I0515 22:49:33.881726  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events: (2.090255ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49764]
I0515 22:49:33.884847  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events: (2.662106ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49764]
I0515 22:49:33.978045  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/rpod-0: (1.911435ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49764]
I0515 22:49:34.081831  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/rpod-1: (2.367374ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49764]
I0515 22:49:34.082756  109070 preemption_test.go:561] Creating the preemptor pod...
I0515 22:49:34.087112  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods: (3.113074ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49764]
I0515 22:49:34.087672  109070 preemption_test.go:567] Creating additional pods...
I0515 22:49:34.088174  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/preemptor-pod
I0515 22:49:34.088204  109070 scheduler.go:452] Attempting to schedule pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/preemptor-pod
I0515 22:49:34.088340  109070 factory.go:649] Unable to schedule preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0515 22:49:34.088399  109070 factory.go:720] Updating pod condition for preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0515 22:49:34.092887  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/preemptor-pod: (3.203802ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49768]
I0515 22:49:34.093608  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods: (2.804621ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49764]
I0515 22:49:34.093945  109070 wrap.go:47] PUT /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/preemptor-pod/status: (3.810403ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49766]
I0515 22:49:34.095184  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events: (3.556712ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49770]
I0515 22:49:34.096779  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/preemptor-pod: (1.813293ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49766]
I0515 22:49:34.096839  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods: (1.757028ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49768]
I0515 22:49:34.097380  109070 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
E0515 22:49:34.097540  109070 utils.go:79] pod.Status.StartTime is nil for pod rpod-1. Should not reach here.
E0515 22:49:34.097558  109070 utils.go:79] pod.Status.StartTime is nil for pod rpod-0. Should not reach here.
I0515 22:49:34.100716  109070 wrap.go:47] PUT /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/preemptor-pod/status: (2.725104ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49766]
I0515 22:49:34.101687  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods: (3.65291ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49768]
I0515 22:49:34.103965  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods: (1.65282ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49768]
I0515 22:49:34.105768  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods: (1.411356ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49768]
I0515 22:49:34.107917  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods: (1.765697ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49768]
I0515 22:49:34.109928  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods: (1.280687ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49768]
I0515 22:49:34.110925  109070 wrap.go:47] DELETE /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/rpod-0: (9.80489ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49766]
I0515 22:49:34.111391  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-0
I0515 22:49:34.111410  109070 scheduler.go:452] Attempting to schedule pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-0
I0515 22:49:34.111595  109070 factory.go:649] Unable to schedule preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-0: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0515 22:49:34.111639  109070 factory.go:720] Updating pod condition for preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-0 to (PodScheduled==False, Reason=Unschedulable)
I0515 22:49:34.113207  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events: (1.877106ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49766]
I0515 22:49:34.114368  109070 wrap.go:47] PUT /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-0/status: (2.410355ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49770]
I0515 22:49:34.114626  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods: (3.612217ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49768]
I0515 22:49:34.114741  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-0: (2.491583ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49772]
I0515 22:49:34.117000  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-0: (1.100525ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49770]
I0515 22:49:34.117385  109070 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0515 22:49:34.117408  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events: (3.568344ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49766]
I0515 22:49:34.117854  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-1
I0515 22:49:34.117879  109070 scheduler.go:452] Attempting to schedule pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-1
I0515 22:49:34.117984  109070 factory.go:649] Unable to schedule preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-1: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0515 22:49:34.118031  109070 factory.go:720] Updating pod condition for preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-1 to (PodScheduled==False, Reason=Unschedulable)
I0515 22:49:34.118735  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods: (2.478893ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49768]
I0515 22:49:34.122006  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events: (2.75502ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49774]
I0515 22:49:34.123320  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-1: (4.070671ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49772]
I0515 22:49:34.124044  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods: (4.793261ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49768]
I0515 22:49:34.125081  109070 wrap.go:47] PUT /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-1/status: (6.734586ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49770]
I0515 22:49:34.127002  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods: (2.153805ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49772]
I0515 22:49:34.128645  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-1: (3.014459ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49770]
I0515 22:49:34.129380  109070 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0515 22:49:34.129589  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-2
I0515 22:49:34.129607  109070 scheduler.go:452] Attempting to schedule pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-2
I0515 22:49:34.129646  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods: (2.100881ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49772]
I0515 22:49:34.129707  109070 factory.go:649] Unable to schedule preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-2: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0515 22:49:34.129753  109070 factory.go:720] Updating pod condition for preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-2 to (PodScheduled==False, Reason=Unschedulable)
I0515 22:49:34.133203  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events: (2.024326ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49778]
I0515 22:49:34.136241  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-2: (4.817096ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49776]
I0515 22:49:34.137291  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods: (7.217764ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49774]
I0515 22:49:34.139313  109070 wrap.go:47] PUT /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-2/status: (8.608414ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49770]
I0515 22:49:34.181393  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods: (43.488338ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49776]
I0515 22:49:34.181836  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-2: (42.006323ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49770]
I0515 22:49:34.182180  109070 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0515 22:49:34.182480  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-3
I0515 22:49:34.182521  109070 scheduler.go:452] Attempting to schedule pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-3
I0515 22:49:34.182639  109070 factory.go:649] Unable to schedule preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-3: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0515 22:49:34.182689  109070 factory.go:720] Updating pod condition for preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-3 to (PodScheduled==False, Reason=Unschedulable)
I0515 22:49:34.185581  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods: (2.293326ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49770]
I0515 22:49:34.193784  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-3: (9.116993ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49810]
I0515 22:49:34.194151  109070 wrap.go:47] PUT /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-3/status: (11.175098ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49778]
I0515 22:49:34.194225  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events: (9.206826ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49808]
I0515 22:49:34.196441  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-3: (1.600913ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49778]
I0515 22:49:34.196752  109070 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0515 22:49:34.196960  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-4
I0515 22:49:34.197005  109070 scheduler.go:452] Attempting to schedule pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-4
I0515 22:49:34.197144  109070 factory.go:649] Unable to schedule preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-4: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0515 22:49:34.197202  109070 factory.go:720] Updating pod condition for preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-4 to (PodScheduled==False, Reason=Unschedulable)
I0515 22:49:34.198954  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods: (2.408239ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49808]
I0515 22:49:34.201199  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods: (1.772938ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49808]
I0515 22:49:34.202470  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events: (4.329153ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49812]
I0515 22:49:34.202687  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-4: (4.464956ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49810]
I0515 22:49:34.203082  109070 wrap.go:47] PUT /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-4/status: (5.639149ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49778]
I0515 22:49:34.204797  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods: (1.652644ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49812]
I0515 22:49:34.205066  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-4: (1.481529ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49778]
I0515 22:49:34.206704  109070 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0515 22:49:34.207439  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-5
I0515 22:49:34.207481  109070 scheduler.go:452] Attempting to schedule pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-5
I0515 22:49:34.207485  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods: (2.274123ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49812]
I0515 22:49:34.208747  109070 factory.go:649] Unable to schedule preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-5: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0515 22:49:34.208805  109070 factory.go:720] Updating pod condition for preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-5 to (PodScheduled==False, Reason=Unschedulable)
I0515 22:49:34.210063  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods: (2.026187ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49778]
I0515 22:49:34.212882  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-5: (3.435389ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49814]
I0515 22:49:34.213331  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods: (2.580758ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49778]
I0515 22:49:34.214204  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events: (4.01184ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49816]
I0515 22:49:34.215579  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods: (1.634718ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49778]
I0515 22:49:34.216551  109070 wrap.go:47] PUT /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-5/status: (7.105099ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49808]
I0515 22:49:34.217912  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods: (1.290621ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49816]
I0515 22:49:34.218344  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-5: (1.070236ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49808]
I0515 22:49:34.218755  109070 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0515 22:49:34.219123  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-6
I0515 22:49:34.219200  109070 scheduler.go:452] Attempting to schedule pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-6
I0515 22:49:34.219394  109070 factory.go:649] Unable to schedule preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-6: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0515 22:49:34.219569  109070 factory.go:720] Updating pod condition for preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-6 to (PodScheduled==False, Reason=Unschedulable)
I0515 22:49:34.219736  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods: (1.432967ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49816]
I0515 22:49:34.221324  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-6: (1.10982ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49816]
I0515 22:49:34.222190  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events: (1.833127ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49814]
I0515 22:49:34.222740  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods: (1.746173ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49818]
I0515 22:49:34.223165  109070 cacher.go:739] cacher (*core.Pod): 1 objects queued in incoming channel.
I0515 22:49:34.225350  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods: (1.961702ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49814]
I0515 22:49:34.229826  109070 wrap.go:47] PUT /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-6/status: (9.901992ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49808]
I0515 22:49:34.230422  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods: (4.295524ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49814]
I0515 22:49:34.232649  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-6: (1.919963ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49808]
I0515 22:49:34.233028  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods: (2.002669ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49814]
I0515 22:49:34.233365  109070 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0515 22:49:34.233576  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-7
I0515 22:49:34.233602  109070 scheduler.go:452] Attempting to schedule pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-7
I0515 22:49:34.233699  109070 factory.go:649] Unable to schedule preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-7: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0515 22:49:34.233752  109070 factory.go:720] Updating pod condition for preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-7 to (PodScheduled==False, Reason=Unschedulable)
I0515 22:49:34.235417  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-7: (874.176µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49820]
I0515 22:49:34.235479  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods: (1.972069ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49814]
I0515 22:49:34.236593  109070 wrap.go:47] PUT /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-7/status: (2.145245ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49816]
I0515 22:49:34.237334  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events: (1.763856ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49822]
I0515 22:49:34.238587  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods: (1.873692ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49814]
I0515 22:49:34.238708  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-7: (1.221989ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49816]
I0515 22:49:34.238995  109070 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0515 22:49:34.239201  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-8
I0515 22:49:34.239278  109070 scheduler.go:452] Attempting to schedule pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-8
I0515 22:49:34.239757  109070 factory.go:649] Unable to schedule preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-8: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0515 22:49:34.239812  109070 factory.go:720] Updating pod condition for preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-8 to (PodScheduled==False, Reason=Unschedulable)
I0515 22:49:34.241317  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods: (1.938589ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49822]
I0515 22:49:34.243980  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events: (2.99755ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49826]
I0515 22:49:34.244255  109070 wrap.go:47] PUT /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-8/status: (2.81887ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49820]
I0515 22:49:34.246066  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-8: (2.372605ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49822]
I0515 22:49:34.246409  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods: (2.578964ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49828]
E0515 22:49:34.247768  109070 factory.go:686] pod is already present in the activeQ
I0515 22:49:34.248970  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-8: (1.697197ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49820]
I0515 22:49:34.249306  109070 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0515 22:49:34.249537  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-9
I0515 22:49:34.249565  109070 scheduler.go:452] Attempting to schedule pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-9
I0515 22:49:34.249667  109070 factory.go:649] Unable to schedule preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-9: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0515 22:49:34.249718  109070 factory.go:720] Updating pod condition for preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-9 to (PodScheduled==False, Reason=Unschedulable)
I0515 22:49:34.252405  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods: (2.871979ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49822]
I0515 22:49:34.253606  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-9: (2.783128ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49830]
I0515 22:49:34.254034  109070 wrap.go:47] PUT /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-9/status: (2.892133ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49826]
I0515 22:49:34.255645  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events: (2.469248ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49822]
I0515 22:49:34.256661  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods: (2.230315ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49834]
I0515 22:49:34.261154  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-9: (5.14218ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49826]
I0515 22:49:34.263970  109070 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0515 22:49:34.264210  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-10
I0515 22:49:34.264231  109070 scheduler.go:452] Attempting to schedule pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-10
I0515 22:49:34.264389  109070 factory.go:649] Unable to schedule preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-10: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0515 22:49:34.264544  109070 factory.go:720] Updating pod condition for preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-10 to (PodScheduled==False, Reason=Unschedulable)
I0515 22:49:34.267803  109070 wrap.go:47] PUT /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-10/status: (2.539361ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49826]
I0515 22:49:34.268314  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events: (3.353952ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49830]
I0515 22:49:34.270629  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-10: (2.477017ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49836]
I0515 22:49:34.268665  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods: (4.950441ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49822]
I0515 22:49:34.274318  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-10: (2.833364ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49830]
I0515 22:49:34.274916  109070 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0515 22:49:34.275114  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods: (3.661721ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49826]
I0515 22:49:34.275198  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-11
I0515 22:49:34.275315  109070 scheduler.go:452] Attempting to schedule pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-11
I0515 22:49:34.275466  109070 factory.go:649] Unable to schedule preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-11: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0515 22:49:34.275570  109070 factory.go:720] Updating pod condition for preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-11 to (PodScheduled==False, Reason=Unschedulable)
I0515 22:49:34.278956  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods: (2.161907ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49826]
I0515 22:49:34.279241  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-11: (3.058972ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49838]
I0515 22:49:34.279398  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events: (3.131607ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49840]
I0515 22:49:34.282669  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods: (2.391789ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49838]
I0515 22:49:34.285518  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods: (1.862869ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49838]
I0515 22:49:34.289836  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods: (3.617534ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49838]
I0515 22:49:34.292673  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods: (2.171968ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49838]
I0515 22:49:34.294990  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods: (1.71333ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49838]
E0515 22:49:34.296693  109070 factory.go:686] pod is already present in the activeQ
I0515 22:49:34.297617  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods: (1.986919ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49838]
I0515 22:49:34.298269  109070 wrap.go:47] PUT /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-11/status: (16.067705ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49830]
I0515 22:49:34.300347  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-11: (1.55157ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49838]
I0515 22:49:34.300637  109070 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0515 22:49:34.300856  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-12
I0515 22:49:34.300895  109070 scheduler.go:452] Attempting to schedule pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-12
I0515 22:49:34.301113  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods: (1.713215ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49840]
I0515 22:49:34.301177  109070 factory.go:649] Unable to schedule preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-12: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0515 22:49:34.301649  109070 factory.go:720] Updating pod condition for preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-12 to (PodScheduled==False, Reason=Unschedulable)
I0515 22:49:34.302974  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-12: (1.08943ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49838]
I0515 22:49:34.304672  109070 wrap.go:47] PUT /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-12/status: (2.396368ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49842]
I0515 22:49:34.305428  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events: (3.149318ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49844]
I0515 22:49:34.306247  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-12: (1.0541ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49842]
I0515 22:49:34.306628  109070 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0515 22:49:34.306947  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-13
I0515 22:49:34.307017  109070 scheduler.go:452] Attempting to schedule pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-13
I0515 22:49:34.307135  109070 factory.go:649] Unable to schedule preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-13: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0515 22:49:34.307192  109070 factory.go:720] Updating pod condition for preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-13 to (PodScheduled==False, Reason=Unschedulable)
I0515 22:49:34.308091  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods: (6.410233ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49840]
I0515 22:49:34.309574  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-13: (1.928361ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49838]
I0515 22:49:34.311661  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events: (3.093558ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49846]
I0515 22:49:34.312626  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods: (4.011988ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49840]
I0515 22:49:34.313888  109070 wrap.go:47] PUT /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-13/status: (6.032631ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49844]
I0515 22:49:34.315778  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-13: (1.459276ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49844]
I0515 22:49:34.317326  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods: (4.035376ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49846]
I0515 22:49:34.318015  109070 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0515 22:49:34.318329  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-14
I0515 22:49:34.318357  109070 scheduler.go:452] Attempting to schedule pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-14
I0515 22:49:34.318617  109070 factory.go:649] Unable to schedule preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-14: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0515 22:49:34.318676  109070 factory.go:720] Updating pod condition for preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-14 to (PodScheduled==False, Reason=Unschedulable)
I0515 22:49:34.321097  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events: (1.698899ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49850]
I0515 22:49:34.321181  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods: (3.301492ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49844]
I0515 22:49:34.324090  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-14: (3.901776ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49848]
I0515 22:49:34.324477  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods: (2.855292ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49844]
I0515 22:49:34.324785  109070 wrap.go:47] PUT /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-14/status: (4.737742ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49838]
E0515 22:49:34.325956  109070 factory.go:686] pod is already present in the activeQ
I0515 22:49:34.327194  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-14: (1.087592ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49838]
I0515 22:49:34.327425  109070 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0515 22:49:34.327921  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods: (1.702498ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49844]
I0515 22:49:34.328297  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-15
I0515 22:49:34.328321  109070 scheduler.go:452] Attempting to schedule pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-15
I0515 22:49:34.328458  109070 factory.go:649] Unable to schedule preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-15: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0515 22:49:34.328556  109070 factory.go:720] Updating pod condition for preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-15 to (PodScheduled==False, Reason=Unschedulable)
I0515 22:49:34.331250  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events: (1.445784ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49852]
I0515 22:49:34.331933  109070 wrap.go:47] PUT /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-15/status: (2.992716ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49838]
I0515 22:49:34.333244  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-15: (1.472127ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49850]
E0515 22:49:34.333553  109070 factory.go:686] pod is already present in the activeQ
I0515 22:49:34.333758  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-15: (1.368995ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49838]
I0515 22:49:34.334064  109070 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0515 22:49:34.334260  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-16
I0515 22:49:34.334279  109070 scheduler.go:452] Attempting to schedule pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-16
I0515 22:49:34.334458  109070 factory.go:649] Unable to schedule preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-16: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0515 22:49:34.334538  109070 factory.go:720] Updating pod condition for preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-16 to (PodScheduled==False, Reason=Unschedulable)
I0515 22:49:34.337338  109070 wrap.go:47] PUT /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-16/status: (2.567348ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49852]
I0515 22:49:34.337768  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events: (2.600715ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49854]
I0515 22:49:34.338079  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-16: (2.64926ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49850]
E0515 22:49:34.338356  109070 factory.go:686] pod is already present in the activeQ
I0515 22:49:34.339485  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-16: (1.681499ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49852]
I0515 22:49:34.339956  109070 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0515 22:49:34.340475  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-17
I0515 22:49:34.340513  109070 scheduler.go:452] Attempting to schedule pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-17
I0515 22:49:34.340670  109070 factory.go:649] Unable to schedule preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-17: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0515 22:49:34.340726  109070 factory.go:720] Updating pod condition for preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-17 to (PodScheduled==False, Reason=Unschedulable)
I0515 22:49:34.342131  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-17: (1.122453ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49854]
I0515 22:49:34.342968  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events: (1.65641ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49856]
I0515 22:49:34.342997  109070 wrap.go:47] PUT /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-17/status: (2.010289ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49850]
I0515 22:49:34.344791  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-17: (1.199113ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49856]
I0515 22:49:34.345147  109070 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0515 22:49:34.345323  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-18
I0515 22:49:34.345345  109070 scheduler.go:452] Attempting to schedule pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-18
I0515 22:49:34.345481  109070 factory.go:649] Unable to schedule preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-18: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0515 22:49:34.345558  109070 factory.go:720] Updating pod condition for preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-18 to (PodScheduled==False, Reason=Unschedulable)
I0515 22:49:34.347742  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events: (1.593266ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49858]
I0515 22:49:34.348022  109070 wrap.go:47] PUT /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-18/status: (2.218525ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49856]
I0515 22:49:34.348424  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-18: (2.619754ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49854]
E0515 22:49:34.348732  109070 factory.go:686] pod is already present in the activeQ
I0515 22:49:34.349516  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-18: (1.0762ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49856]
I0515 22:49:34.349854  109070 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0515 22:49:34.350099  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-19
I0515 22:49:34.350118  109070 scheduler.go:452] Attempting to schedule pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-19
I0515 22:49:34.350236  109070 factory.go:649] Unable to schedule preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-19: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0515 22:49:34.350304  109070 factory.go:720] Updating pod condition for preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-19 to (PodScheduled==False, Reason=Unschedulable)
I0515 22:49:34.351719  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-19: (1.153475ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49854]
I0515 22:49:34.352426  109070 wrap.go:47] PUT /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-19/status: (1.836016ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49858]
I0515 22:49:34.352850  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events: (1.923753ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49860]
I0515 22:49:34.353961  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-19: (1.129168ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49858]
I0515 22:49:34.354261  109070 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0515 22:49:34.354473  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-20
I0515 22:49:34.354511  109070 scheduler.go:452] Attempting to schedule pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-20
I0515 22:49:34.354625  109070 factory.go:649] Unable to schedule preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-20: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0515 22:49:34.354675  109070 factory.go:720] Updating pod condition for preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-20 to (PodScheduled==False, Reason=Unschedulable)
I0515 22:49:34.356746  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-20: (1.858874ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49860]
I0515 22:49:34.356996  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events: (1.650072ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49862]
I0515 22:49:34.357567  109070 wrap.go:47] PUT /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-20/status: (2.633837ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49854]
I0515 22:49:34.359133  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-20: (1.090339ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49862]
I0515 22:49:34.359385  109070 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0515 22:49:34.359671  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-21
I0515 22:49:34.359696  109070 scheduler.go:452] Attempting to schedule pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-21
I0515 22:49:34.359827  109070 factory.go:649] Unable to schedule preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-21: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0515 22:49:34.359885  109070 factory.go:720] Updating pod condition for preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-21 to (PodScheduled==False, Reason=Unschedulable)
I0515 22:49:34.362187  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events: (1.646891ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49864]
I0515 22:49:34.362790  109070 wrap.go:47] PUT /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-21/status: (2.656051ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49862]
I0515 22:49:34.363718  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-21: (3.582087ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49860]
E0515 22:49:34.364259  109070 factory.go:686] pod is already present in the activeQ
I0515 22:49:34.364388  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-21: (1.066623ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49862]
I0515 22:49:34.364711  109070 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0515 22:49:34.364889  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-22
I0515 22:49:34.364911  109070 scheduler.go:452] Attempting to schedule pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-22
I0515 22:49:34.365022  109070 factory.go:649] Unable to schedule preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-22: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0515 22:49:34.365081  109070 factory.go:720] Updating pod condition for preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-22 to (PodScheduled==False, Reason=Unschedulable)
I0515 22:49:34.366985  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-22: (1.382729ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49864]
I0515 22:49:34.367591  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events: (1.7959ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49866]
I0515 22:49:34.368906  109070 wrap.go:47] PUT /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-22/status: (3.499198ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49860]
I0515 22:49:34.370365  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-22: (1.040363ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49866]
I0515 22:49:34.370716  109070 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0515 22:49:34.370957  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-23
I0515 22:49:34.371091  109070 scheduler.go:452] Attempting to schedule pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-23
I0515 22:49:34.371207  109070 factory.go:649] Unable to schedule preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-23: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0515 22:49:34.371261  109070 factory.go:720] Updating pod condition for preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-23 to (PodScheduled==False, Reason=Unschedulable)
I0515 22:49:34.373017  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-23: (1.136049ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49864]
I0515 22:49:34.373658  109070 wrap.go:47] PUT /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-23/status: (2.107621ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49866]
I0515 22:49:34.373905  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events: (1.748321ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49868]
E0515 22:49:34.374590  109070 factory.go:686] pod is already present in the activeQ
I0515 22:49:34.376223  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-23: (1.333231ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49866]
I0515 22:49:34.376568  109070 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0515 22:49:34.376772  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-24
I0515 22:49:34.376796  109070 scheduler.go:452] Attempting to schedule pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-24
I0515 22:49:34.376895  109070 factory.go:649] Unable to schedule preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-24: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0515 22:49:34.377014  109070 factory.go:720] Updating pod condition for preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-24 to (PodScheduled==False, Reason=Unschedulable)
I0515 22:49:34.379527  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-24: (1.759572ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49864]
I0515 22:49:34.380253  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events: (2.391066ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49870]
I0515 22:49:34.383580  109070 wrap.go:47] PUT /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-24/status: (6.283359ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49866]
I0515 22:49:34.385587  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-24: (1.332715ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49870]
I0515 22:49:34.386318  109070 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0515 22:49:34.386579  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-25
I0515 22:49:34.386668  109070 scheduler.go:452] Attempting to schedule pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-25
I0515 22:49:34.386801  109070 factory.go:649] Unable to schedule preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-25: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0515 22:49:34.386861  109070 factory.go:720] Updating pod condition for preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-25 to (PodScheduled==False, Reason=Unschedulable)
I0515 22:49:34.389242  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events: (1.92245ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49864]
I0515 22:49:34.389258  109070 wrap.go:47] PUT /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-25/status: (2.082694ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49870]
I0515 22:49:34.390905  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-25: (1.2344ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49870]
E0515 22:49:34.391120  109070 factory.go:686] pod is already present in the activeQ
I0515 22:49:34.391359  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-25: (1.366229ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49864]
I0515 22:49:34.391695  109070 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0515 22:49:34.391882  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-26
I0515 22:49:34.391900  109070 scheduler.go:452] Attempting to schedule pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-26
I0515 22:49:34.392049  109070 factory.go:649] Unable to schedule preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-26: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0515 22:49:34.392133  109070 factory.go:720] Updating pod condition for preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-26 to (PodScheduled==False, Reason=Unschedulable)
I0515 22:49:34.393521  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-26: (1.04175ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49870]
I0515 22:49:34.394189  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events: (1.286244ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49874]
I0515 22:49:34.396122  109070 wrap.go:47] PUT /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-26/status: (3.644367ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49872]
I0515 22:49:34.397830  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-26: (1.137059ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49870]
I0515 22:49:34.398135  109070 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0515 22:49:34.398353  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-27
I0515 22:49:34.398375  109070 scheduler.go:452] Attempting to schedule pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-27
I0515 22:49:34.398528  109070 factory.go:649] Unable to schedule preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-27: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0515 22:49:34.398590  109070 factory.go:720] Updating pod condition for preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-27 to (PodScheduled==False, Reason=Unschedulable)
I0515 22:49:34.401378  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events: (2.312912ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49874]
I0515 22:49:34.401414  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-27: (1.207271ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49876]
I0515 22:49:34.401414  109070 wrap.go:47] PUT /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-27/status: (2.588356ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49870]
I0515 22:49:34.403235  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-27: (1.092701ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49876]
I0515 22:49:34.403539  109070 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0515 22:49:34.403806  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-28
I0515 22:49:34.403831  109070 scheduler.go:452] Attempting to schedule pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-28
I0515 22:49:34.403989  109070 factory.go:649] Unable to schedule preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-28: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0515 22:49:34.404047  109070 factory.go:720] Updating pod condition for preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-28 to (PodScheduled==False, Reason=Unschedulable)
I0515 22:49:34.405683  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-28: (1.373385ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49870]
I0515 22:49:34.406249  109070 wrap.go:47] PUT /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-28/status: (1.932103ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49876]
I0515 22:49:34.407359  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events: (2.604845ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49878]
E0515 22:49:34.407926  109070 factory.go:686] pod is already present in the activeQ
I0515 22:49:34.407990  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-28: (1.280983ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49876]
I0515 22:49:34.408272  109070 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0515 22:49:34.408535  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-29
I0515 22:49:34.408562  109070 scheduler.go:452] Attempting to schedule pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-29
I0515 22:49:34.408689  109070 factory.go:649] Unable to schedule preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-29: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0515 22:49:34.408751  109070 factory.go:720] Updating pod condition for preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-29 to (PodScheduled==False, Reason=Unschedulable)
I0515 22:49:34.411343  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events: (1.855992ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49880]
I0515 22:49:34.411744  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-29: (1.686081ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49870]
I0515 22:49:34.412210  109070 wrap.go:47] PUT /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-29/status: (3.21675ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49878]
I0515 22:49:34.413900  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-29: (1.145277ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49870]
I0515 22:49:34.414161  109070 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0515 22:49:34.414540  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-30
I0515 22:49:34.414564  109070 scheduler.go:452] Attempting to schedule pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-30
I0515 22:49:34.414702  109070 factory.go:649] Unable to schedule preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-30: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0515 22:49:34.414758  109070 factory.go:720] Updating pod condition for preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-30 to (PodScheduled==False, Reason=Unschedulable)
I0515 22:49:34.416096  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-30: (1.035472ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49880]
I0515 22:49:34.416738  109070 wrap.go:47] PUT /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-30/status: (1.729915ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49870]
I0515 22:49:34.417359  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events: (2.030069ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49882]
I0515 22:49:34.418389  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-30: (1.166057ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49870]
I0515 22:49:34.418765  109070 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0515 22:49:34.418925  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-8
I0515 22:49:34.418946  109070 scheduler.go:452] Attempting to schedule pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-8
I0515 22:49:34.419052  109070 factory.go:649] Unable to schedule preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-8: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0515 22:49:34.419102  109070 factory.go:720] Updating pod condition for preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-8 to (PodScheduled==False, Reason=Unschedulable)
I0515 22:49:34.420589  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-8: (1.279478ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49882]
I0515 22:49:34.420958  109070 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0515 22:49:34.421111  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-31
I0515 22:49:34.421130  109070 scheduler.go:452] Attempting to schedule pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-31
I0515 22:49:34.421257  109070 factory.go:649] Unable to schedule preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-31: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0515 22:49:34.421302  109070 factory.go:720] Updating pod condition for preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-31 to (PodScheduled==False, Reason=Unschedulable)
I0515 22:49:34.422245  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-8: (2.183404ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49880]
I0515 22:49:34.424733  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-31: (2.872785ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49886]
I0515 22:49:34.424967  109070 wrap.go:47] PUT /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-31/status: (2.216508ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49880]
I0515 22:49:34.426167  109070 wrap.go:47] PATCH /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events/ppod-8.159efce349a06990: (5.094471ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49882]
I0515 22:49:34.426914  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-31: (1.561474ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49886]
I0515 22:49:34.427172  109070 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0515 22:49:34.427396  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-32
I0515 22:49:34.427763  109070 scheduler.go:452] Attempting to schedule pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-32
I0515 22:49:34.428285  109070 factory.go:649] Unable to schedule preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-32: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0515 22:49:34.428347  109070 factory.go:720] Updating pod condition for preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-32 to (PodScheduled==False, Reason=Unschedulable)
I0515 22:49:34.429419  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events: (2.473448ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49884]
I0515 22:49:34.430205  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/preemptor-pod: (1.681702ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49886]
I0515 22:49:34.430206  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-32: (1.309122ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49890]
I0515 22:49:34.432784  109070 wrap.go:47] PUT /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-32/status: (3.950435ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49888]
I0515 22:49:34.432924  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events: (2.80068ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49884]
I0515 22:49:34.434470  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-32: (1.223987ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49890]
I0515 22:49:34.434824  109070 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0515 22:49:34.435004  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-33
I0515 22:49:34.435076  109070 scheduler.go:452] Attempting to schedule pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-33
I0515 22:49:34.435240  109070 factory.go:649] Unable to schedule preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-33: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0515 22:49:34.435297  109070 factory.go:720] Updating pod condition for preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-33 to (PodScheduled==False, Reason=Unschedulable)
I0515 22:49:34.436928  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-33: (1.260142ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49886]
I0515 22:49:34.437815  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events: (1.746403ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49892]
I0515 22:49:34.440122  109070 wrap.go:47] PUT /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-33/status: (4.557189ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49890]
I0515 22:49:34.442337  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-33: (1.308046ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49892]
I0515 22:49:34.442780  109070 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0515 22:49:34.442986  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-34
I0515 22:49:34.443014  109070 scheduler.go:452] Attempting to schedule pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-34
I0515 22:49:34.443131  109070 factory.go:649] Unable to schedule preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-34: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0515 22:49:34.443193  109070 factory.go:720] Updating pod condition for preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-34 to (PodScheduled==False, Reason=Unschedulable)
I0515 22:49:34.445463  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events: (1.715276ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49886]
I0515 22:49:34.445912  109070 wrap.go:47] PUT /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-34/status: (2.416377ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49892]
I0515 22:49:34.447528  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-34: (1.12102ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49892]
I0515 22:49:34.447800  109070 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0515 22:49:34.447877  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-34: (1.145685ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49886]
E0515 22:49:34.448243  109070 factory.go:686] pod is already present in the activeQ
I0515 22:49:34.448350  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-10
I0515 22:49:34.448368  109070 scheduler.go:452] Attempting to schedule pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-10
I0515 22:49:34.448598  109070 factory.go:649] Unable to schedule preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-10: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0515 22:49:34.448659  109070 factory.go:720] Updating pod condition for preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-10 to (PodScheduled==False, Reason=Unschedulable)
I0515 22:49:34.450306  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-10: (1.408933ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49892]
I0515 22:49:34.450401  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-10: (1.492067ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49886]
I0515 22:49:34.450631  109070 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0515 22:49:34.450771  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-35
I0515 22:49:34.450839  109070 scheduler.go:452] Attempting to schedule pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-35
I0515 22:49:34.451008  109070 factory.go:649] Unable to schedule preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-35: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0515 22:49:34.451058  109070 factory.go:720] Updating pod condition for preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-35 to (PodScheduled==False, Reason=Unschedulable)
I0515 22:49:34.453141  109070 wrap.go:47] PUT /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-35/status: (1.820074ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49886]
I0515 22:49:34.453411  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-35: (2.025077ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49892]
I0515 22:49:34.455391  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-35: (1.309768ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49892]
I0515 22:49:34.455689  109070 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0515 22:49:34.455711  109070 wrap.go:47] PATCH /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events/ppod-10.159efce34b184449: (5.543818ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49894]
I0515 22:49:34.456143  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-36
I0515 22:49:34.456170  109070 scheduler.go:452] Attempting to schedule pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-36
I0515 22:49:34.456327  109070 factory.go:649] Unable to schedule preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-36: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0515 22:49:34.456438  109070 factory.go:720] Updating pod condition for preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-36 to (PodScheduled==False, Reason=Unschedulable)
I0515 22:49:34.457930  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events: (1.774659ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49892]
I0515 22:49:34.458001  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-36: (1.267008ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49886]
I0515 22:49:34.460392  109070 wrap.go:47] PUT /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-36/status: (1.955241ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49886]
I0515 22:49:34.460549  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events: (2.107363ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49892]
I0515 22:49:34.463068  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-36: (2.067014ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49886]
I0515 22:49:34.463583  109070 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0515 22:49:34.464058  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-37
I0515 22:49:34.464133  109070 scheduler.go:452] Attempting to schedule pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-37
I0515 22:49:34.464329  109070 factory.go:649] Unable to schedule preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-37: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0515 22:49:34.464400  109070 factory.go:720] Updating pod condition for preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-37 to (PodScheduled==False, Reason=Unschedulable)
I0515 22:49:34.467012  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-37: (2.177242ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49892]
I0515 22:49:34.467404  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events: (2.2853ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49898]
I0515 22:49:34.469195  109070 wrap.go:47] PUT /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-37/status: (3.102438ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49896]
I0515 22:49:34.471785  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-37: (1.384934ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49898]
I0515 22:49:34.472130  109070 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0515 22:49:34.472429  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-38
I0515 22:49:34.472462  109070 scheduler.go:452] Attempting to schedule pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-38
I0515 22:49:34.472685  109070 factory.go:649] Unable to schedule preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-38: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0515 22:49:34.472749  109070 factory.go:720] Updating pod condition for preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-38 to (PodScheduled==False, Reason=Unschedulable)
I0515 22:49:34.474329  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-38: (1.28642ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49892]
I0515 22:49:34.475418  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events: (1.983053ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49900]
I0515 22:49:34.478054  109070 wrap.go:47] PUT /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-38/status: (5.022193ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49898]
I0515 22:49:34.480875  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-38: (2.279458ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49900]
I0515 22:49:34.481508  109070 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0515 22:49:34.481848  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-39
I0515 22:49:34.481888  109070 scheduler.go:452] Attempting to schedule pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-39
I0515 22:49:34.482074  109070 factory.go:649] Unable to schedule preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-39: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0515 22:49:34.482134  109070 factory.go:720] Updating pod condition for preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-39 to (PodScheduled==False, Reason=Unschedulable)
I0515 22:49:34.484656  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events: (1.84863ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49996]
I0515 22:49:34.486514  109070 wrap.go:47] PUT /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-39/status: (4.077132ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49900]
I0515 22:49:34.486907  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-39: (4.495465ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49892]
E0515 22:49:34.487923  109070 factory.go:686] pod is already present in the activeQ
I0515 22:49:34.489352  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-39: (1.09729ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49892]
I0515 22:49:34.489753  109070 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0515 22:49:34.489944  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-40
I0515 22:49:34.489964  109070 scheduler.go:452] Attempting to schedule pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-40
I0515 22:49:34.490089  109070 factory.go:649] Unable to schedule preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-40: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0515 22:49:34.490144  109070 factory.go:720] Updating pod condition for preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-40 to (PodScheduled==False, Reason=Unschedulable)
I0515 22:49:34.492356  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-40: (1.478123ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49996]
I0515 22:49:34.492714  109070 wrap.go:47] PUT /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-40/status: (2.30849ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49892]
I0515 22:49:34.493675  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events: (2.385469ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49998]
I0515 22:49:34.494757  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-40: (1.219571ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49996]
I0515 22:49:34.495073  109070 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0515 22:49:34.495264  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-41
I0515 22:49:34.495330  109070 scheduler.go:452] Attempting to schedule pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-41
I0515 22:49:34.495476  109070 factory.go:649] Unable to schedule preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-41: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0515 22:49:34.495549  109070 factory.go:720] Updating pod condition for preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-41 to (PodScheduled==False, Reason=Unschedulable)
I0515 22:49:34.497590  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-41: (1.484303ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49998]
I0515 22:49:34.498068  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events: (1.978519ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49892]
I0515 22:49:34.498913  109070 wrap.go:47] PUT /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-41/status: (2.301699ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50000]
I0515 22:49:34.501003  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-41: (1.403929ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50000]
I0515 22:49:34.501295  109070 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0515 22:49:34.501510  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-42
I0515 22:49:34.501537  109070 scheduler.go:452] Attempting to schedule pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-42
I0515 22:49:34.501742  109070 factory.go:649] Unable to schedule preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-42: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0515 22:49:34.501806  109070 factory.go:720] Updating pod condition for preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-42 to (PodScheduled==False, Reason=Unschedulable)
I0515 22:49:34.503087  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-42: (1.008029ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49892]
I0515 22:49:34.503787  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events: (1.342324ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50002]
I0515 22:49:34.504612  109070 wrap.go:47] PUT /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-42/status: (2.533271ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49998]
I0515 22:49:34.506673  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-42: (1.366966ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50002]
I0515 22:49:34.507146  109070 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0515 22:49:34.507477  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-43
I0515 22:49:34.507523  109070 scheduler.go:452] Attempting to schedule pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-43
I0515 22:49:34.507639  109070 factory.go:649] Unable to schedule preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-43: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0515 22:49:34.507706  109070 factory.go:720] Updating pod condition for preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-43 to (PodScheduled==False, Reason=Unschedulable)
I0515 22:49:34.509352  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-43: (1.350074ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50002]
I0515 22:49:34.510679  109070 wrap.go:47] PUT /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-43/status: (2.178159ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49892]
I0515 22:49:34.513070  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events: (1.912753ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50004]
I0515 22:49:34.513429  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-43: (2.306525ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:49892]
I0515 22:49:34.513868  109070 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0515 22:49:34.514098  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-44
I0515 22:49:34.514253  109070 scheduler.go:452] Attempting to schedule pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-44
I0515 22:49:34.514390  109070 factory.go:649] Unable to schedule preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-44: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0515 22:49:34.514479  109070 factory.go:720] Updating pod condition for preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-44 to (PodScheduled==False, Reason=Unschedulable)
I0515 22:49:34.516303  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-44: (1.393839ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50004]
I0515 22:49:34.517105  109070 wrap.go:47] PUT /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-44/status: (1.967417ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50002]
I0515 22:49:34.517592  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events: (2.17643ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50006]
I0515 22:49:34.519121  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-44: (1.064522ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50002]
I0515 22:49:34.519474  109070 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0515 22:49:34.519722  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-45
I0515 22:49:34.519745  109070 scheduler.go:452] Attempting to schedule pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-45
I0515 22:49:34.519839  109070 factory.go:649] Unable to schedule preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-45: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0515 22:49:34.519893  109070 factory.go:720] Updating pod condition for preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-45 to (PodScheduled==False, Reason=Unschedulable)
I0515 22:49:34.521887  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-45: (1.58131ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50004]
I0515 22:49:34.522562  109070 wrap.go:47] PUT /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-45/status: (2.406983ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50002]
I0515 22:49:34.523518  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events: (2.833849ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50008]
I0515 22:49:34.524016  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-45: (1.077544ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50002]
I0515 22:49:34.524316  109070 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0515 22:49:34.524534  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-46
I0515 22:49:34.524557  109070 scheduler.go:452] Attempting to schedule pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-46
I0515 22:49:34.524735  109070 factory.go:649] Unable to schedule preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-46: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0515 22:49:34.524800  109070 factory.go:720] Updating pod condition for preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-46 to (PodScheduled==False, Reason=Unschedulable)
I0515 22:49:34.526044  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-46: (1.008334ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50004]
I0515 22:49:34.527535  109070 wrap.go:47] PUT /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-46/status: (2.496941ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50008]
I0515 22:49:34.528108  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events: (2.663638ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50010]
I0515 22:49:34.529049  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-46: (1.035993ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50008]
I0515 22:49:34.529356  109070 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0515 22:49:34.529633  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-47
I0515 22:49:34.529658  109070 scheduler.go:452] Attempting to schedule pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-47
I0515 22:49:34.529756  109070 factory.go:649] Unable to schedule preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-47: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0515 22:49:34.529813  109070 factory.go:720] Updating pod condition for preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-47 to (PodScheduled==False, Reason=Unschedulable)
I0515 22:49:34.532601  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-47: (2.029602ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50004]
I0515 22:49:34.532625  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events: (1.937865ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50012]
I0515 22:49:34.533291  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/preemptor-pod: (2.250737ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50014]
I0515 22:49:34.533302  109070 wrap.go:47] PUT /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-47/status: (3.219488ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50010]
I0515 22:49:34.535136  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-47: (1.19671ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50012]
I0515 22:49:34.535420  109070 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0515 22:49:34.535635  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-48
I0515 22:49:34.535661  109070 scheduler.go:452] Attempting to schedule pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-48
I0515 22:49:34.535770  109070 factory.go:649] Unable to schedule preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-48: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0515 22:49:34.535826  109070 factory.go:720] Updating pod condition for preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-48 to (PodScheduled==False, Reason=Unschedulable)
I0515 22:49:34.538033  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-48: (1.955916ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50004]
I0515 22:49:34.538195  109070 wrap.go:47] PUT /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-48/status: (2.121199ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50012]
I0515 22:49:34.538779  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events: (2.361932ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50016]
I0515 22:49:34.540223  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-48: (1.293654ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50012]
I0515 22:49:34.540577  109070 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0515 22:49:34.540768  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-14
I0515 22:49:34.540787  109070 scheduler.go:452] Attempting to schedule pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-14
I0515 22:49:34.540908  109070 factory.go:649] Unable to schedule preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-14: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0515 22:49:34.540963  109070 factory.go:720] Updating pod condition for preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-14 to (PodScheduled==False, Reason=Unschedulable)
I0515 22:49:34.543951  109070 wrap.go:47] PATCH /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events/ppod-14.159efce34e53c98d: (2.208637ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50018]
I0515 22:49:34.544239  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-14: (2.496429ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50004]
I0515 22:49:34.544594  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-14: (2.792012ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50016]
I0515 22:49:34.544965  109070 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0515 22:49:34.545144  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-49
I0515 22:49:34.545163  109070 scheduler.go:452] Attempting to schedule pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-49
I0515 22:49:34.545265  109070 factory.go:649] Unable to schedule preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-49: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0515 22:49:34.545370  109070 factory.go:720] Updating pod condition for preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-49 to (PodScheduled==False, Reason=Unschedulable)
I0515 22:49:34.547115  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-49: (1.347695ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50018]
I0515 22:49:34.548411  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events: (2.192249ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50020]
I0515 22:49:34.548774  109070 wrap.go:47] PUT /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-49/status: (3.065832ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50004]
I0515 22:49:34.551035  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-49: (1.219342ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50020]
I0515 22:49:34.551263  109070 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0515 22:49:34.551434  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-15
I0515 22:49:34.551472  109070 scheduler.go:452] Attempting to schedule pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-15
I0515 22:49:34.551626  109070 factory.go:649] Unable to schedule preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-15: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0515 22:49:34.551686  109070 factory.go:720] Updating pod condition for preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-15 to (PodScheduled==False, Reason=Unschedulable)
I0515 22:49:34.553193  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-15: (1.181306ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50018]
I0515 22:49:34.553866  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-15: (1.983651ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50020]
I0515 22:49:34.554694  109070 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0515 22:49:34.554807  109070 wrap.go:47] PATCH /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events/ppod-15.159efce34eea7f1b: (1.975398ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50022]
I0515 22:49:34.554940  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-16
I0515 22:49:34.554955  109070 scheduler.go:452] Attempting to schedule pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-16
I0515 22:49:34.555050  109070 factory.go:649] Unable to schedule preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-16: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0515 22:49:34.555093  109070 factory.go:720] Updating pod condition for preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-16 to (PodScheduled==False, Reason=Unschedulable)
I0515 22:49:34.556910  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-16: (1.511545ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50020]
I0515 22:49:34.557814  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-16: (1.910443ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50018]
I0515 22:49:34.558129  109070 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0515 22:49:34.558294  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-18
I0515 22:49:34.558318  109070 scheduler.go:452] Attempting to schedule pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-18
I0515 22:49:34.558342  109070 wrap.go:47] PATCH /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events/ppod-16.159efce34f45ce19: (2.384011ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50024]
I0515 22:49:34.558556  109070 factory.go:649] Unable to schedule preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-18: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0515 22:49:34.558636  109070 factory.go:720] Updating pod condition for preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-18 to (PodScheduled==False, Reason=Unschedulable)
I0515 22:49:34.559967  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-18: (1.130188ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50018]
I0515 22:49:34.561359  109070 wrap.go:47] PATCH /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events/ppod-18.159efce34fedf687: (1.646537ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50026]
I0515 22:49:34.562063  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-18: (3.201694ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50020]
I0515 22:49:34.562486  109070 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0515 22:49:34.562727  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-21
I0515 22:49:34.562748  109070 scheduler.go:452] Attempting to schedule pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-21
I0515 22:49:34.562851  109070 factory.go:649] Unable to schedule preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-21: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0515 22:49:34.562902  109070 factory.go:720] Updating pod condition for preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-21 to (PodScheduled==False, Reason=Unschedulable)
I0515 22:49:34.564236  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-21: (1.163064ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50026]
I0515 22:49:34.564876  109070 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0515 22:49:34.565007  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-21: (1.775593ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50018]
I0515 22:49:34.565609  109070 wrap.go:47] PATCH /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events/ppod-21.159efce350c89557: (1.971697ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50028]
I0515 22:49:34.565719  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-23
I0515 22:49:34.565740  109070 scheduler.go:452] Attempting to schedule pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-23
I0515 22:49:34.565849  109070 factory.go:649] Unable to schedule preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-23: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0515 22:49:34.565903  109070 factory.go:720] Updating pod condition for preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-23 to (PodScheduled==False, Reason=Unschedulable)
I0515 22:49:34.567376  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-23: (1.123317ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50026]
I0515 22:49:34.568060  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-23: (1.9804ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50018]
I0515 22:49:34.568316  109070 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0515 22:49:34.568556  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-25
I0515 22:49:34.569051  109070 scheduler.go:452] Attempting to schedule pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-25
I0515 22:49:34.568841  109070 wrap.go:47] PATCH /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events/ppod-23.159efce351762d4f: (1.838337ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50030]
I0515 22:49:34.569258  109070 factory.go:649] Unable to schedule preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-25: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0515 22:49:34.569322  109070 factory.go:720] Updating pod condition for preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-25 to (PodScheduled==False, Reason=Unschedulable)
I0515 22:49:34.570598  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-25: (1.109153ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50018]
I0515 22:49:34.570744  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-25: (1.17503ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50026]
I0515 22:49:34.571324  109070 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0515 22:49:34.571673  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-28
I0515 22:49:34.571952  109070 scheduler.go:452] Attempting to schedule pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-28
I0515 22:49:34.571988  109070 wrap.go:47] PATCH /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events/ppod-25.159efce352643e01: (1.942894ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50030]
I0515 22:49:34.572082  109070 factory.go:649] Unable to schedule preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-28: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0515 22:49:34.572125  109070 factory.go:720] Updating pod condition for preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-28 to (PodScheduled==False, Reason=Unschedulable)
I0515 22:49:34.573739  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-28: (1.355634ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50026]
I0515 22:49:34.573987  109070 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0515 22:49:34.574171  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-34
I0515 22:49:34.574190  109070 scheduler.go:452] Attempting to schedule pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-34
I0515 22:49:34.574307  109070 factory.go:649] Unable to schedule preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-34: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0515 22:49:34.574357  109070 factory.go:720] Updating pod condition for preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-34 to (PodScheduled==False, Reason=Unschedulable)
I0515 22:49:34.575583  109070 wrap.go:47] PATCH /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events/ppod-28.159efce3536a777d: (2.692969ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50018]
I0515 22:49:34.577238  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-34: (2.067518ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50026]
I0515 22:49:34.577433  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-28: (3.773471ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50032]
I0515 22:49:34.578093  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-34: (2.594451ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50034]
I0515 22:49:34.578619  109070 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0515 22:49:34.578789  109070 wrap.go:47] PATCH /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events/ppod-34.159efce355bfc570: (2.353325ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50018]
I0515 22:49:34.578946  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-39
I0515 22:49:34.578973  109070 scheduler.go:452] Attempting to schedule pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-39
I0515 22:49:34.579214  109070 factory.go:649] Unable to schedule preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-39: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0515 22:49:34.579413  109070 factory.go:720] Updating pod condition for preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-39 to (PodScheduled==False, Reason=Unschedulable)
I0515 22:49:34.580930  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-39: (1.273922ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50034]
I0515 22:49:34.581116  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-39: (1.424017ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50026]
I0515 22:49:34.581322  109070 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0515 22:49:34.582536  109070 wrap.go:47] PATCH /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events/ppod-39.159efce35811fb27: (2.51488ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50036]
I0515 22:49:34.632951  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/preemptor-pod: (1.919898ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50026]
I0515 22:49:34.735474  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/preemptor-pod: (2.469569ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50026]
I0515 22:49:34.834018  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/preemptor-pod: (1.779779ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50026]
I0515 22:49:34.867561  109070 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0515 22:49:34.867632  109070 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0515 22:49:34.869091  109070 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0515 22:49:34.870287  109070 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0515 22:49:34.870944  109070 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0515 22:49:34.932895  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/preemptor-pod: (1.848891ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50026]
I0515 22:49:35.032706  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/preemptor-pod: (1.647035ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50026]
I0515 22:49:35.132848  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/preemptor-pod: (1.811183ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50026]
I0515 22:49:35.233118  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/preemptor-pod: (1.968852ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50026]
I0515 22:49:35.332957  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/preemptor-pod: (1.889597ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50026]
I0515 22:49:35.432785  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/preemptor-pod: (1.76108ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50026]
I0515 22:49:35.533049  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/preemptor-pod: (1.99638ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50026]
I0515 22:49:35.633389  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/preemptor-pod: (2.349766ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50026]
I0515 22:49:35.733118  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/preemptor-pod: (2.090865ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50026]
I0515 22:49:35.764271  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/preemptor-pod
I0515 22:49:35.764307  109070 scheduler.go:452] Attempting to schedule pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/preemptor-pod
I0515 22:49:35.764484  109070 scheduler_binder.go:256] AssumePodVolumes for pod "preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/preemptor-pod", node "node1"
I0515 22:49:35.764528  109070 scheduler_binder.go:266] AssumePodVolumes for pod "preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/preemptor-pod", node "node1": all PVCs bound and nothing to do
I0515 22:49:35.764582  109070 factory.go:711] Attempting to bind preemptor-pod to node1
I0515 22:49:35.764625  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-0
I0515 22:49:35.764644  109070 scheduler.go:452] Attempting to schedule pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-0
I0515 22:49:35.764789  109070 factory.go:649] Unable to schedule preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-0: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0515 22:49:35.764850  109070 factory.go:720] Updating pod condition for preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-0 to (PodScheduled==False, Reason=Unschedulable)
I0515 22:49:35.768302  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-0: (2.757152ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50046]
I0515 22:49:35.770045  109070 wrap.go:47] PATCH /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events/ppod-0.159efce341fcaacb: (2.145953ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50048]
I0515 22:49:35.770948  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/preemptor-pod/binding: (6.034812ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50026]
I0515 22:49:35.771185  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-0: (5.155528ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50034]
I0515 22:49:35.771196  109070 scheduler.go:589] pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/preemptor-pod is bound successfully on node node1, 1 nodes evaluated, 1 nodes were found feasible
I0515 22:49:35.771530  109070 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0515 22:49:35.771769  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-1
I0515 22:49:35.771794  109070 scheduler.go:452] Attempting to schedule pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-1
I0515 22:49:35.771948  109070 factory.go:649] Unable to schedule preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-1: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0515 22:49:35.772010  109070 factory.go:720] Updating pod condition for preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-1 to (PodScheduled==False, Reason=Unschedulable)
I0515 22:49:35.773931  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events: (2.491877ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50048]
I0515 22:49:35.773997  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-1: (1.760971ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50046]
I0515 22:49:35.774212  109070 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0515 22:49:35.774804  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-2
I0515 22:49:35.774821  109070 scheduler.go:452] Attempting to schedule pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-2
I0515 22:49:35.774914  109070 factory.go:649] Unable to schedule preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-2: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0515 22:49:35.774958  109070 factory.go:720] Updating pod condition for preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-2 to (PodScheduled==False, Reason=Unschedulable)
I0515 22:49:35.777015  109070 wrap.go:47] PATCH /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events/ppod-1.159efce3425e3059: (2.417324ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50048]
I0515 22:49:35.777360  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-1: (2.951324ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50046]
I0515 22:49:35.787755  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-2: (11.558632ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50050]
I0515 22:49:35.788332  109070 wrap.go:47] PATCH /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events/ppod-2.159efce3431114fc: (10.034083ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50048]
I0515 22:49:35.788892  109070 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0515 22:49:35.792759  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-3
I0515 22:49:35.792791  109070 scheduler.go:452] Attempting to schedule pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-3
I0515 22:49:35.792977  109070 factory.go:649] Unable to schedule preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-3: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0515 22:49:35.793034  109070 factory.go:720] Updating pod condition for preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-3 to (PodScheduled==False, Reason=Unschedulable)
I0515 22:49:35.815201  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-3: (20.910373ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50048]
I0515 22:49:35.815594  109070 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0515 22:49:35.816240  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-3: (21.992451ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50054]
I0515 22:49:35.816306  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-4
I0515 22:49:35.816326  109070 scheduler.go:452] Attempting to schedule pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-4
I0515 22:49:35.816517  109070 factory.go:649] Unable to schedule preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-4: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0515 22:49:35.816596  109070 factory.go:720] Updating pod condition for preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-4 to (PodScheduled==False, Reason=Unschedulable)
I0515 22:49:35.819065  109070 wrap.go:47] PATCH /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events/ppod-3.159efce34638cd44: (24.675375ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50056]
I0515 22:49:35.819303  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-4: (1.818167ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50054]
I0515 22:49:35.819582  109070 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0515 22:49:35.819958  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-4: (2.960929ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50048]
I0515 22:49:35.820381  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-5
I0515 22:49:35.820399  109070 scheduler.go:452] Attempting to schedule pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-5
I0515 22:49:35.820556  109070 factory.go:649] Unable to schedule preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-5: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0515 22:49:35.820618  109070 factory.go:720] Updating pod condition for preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-5 to (PodScheduled==False, Reason=Unschedulable)
I0515 22:49:35.827324  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-2: (51.440561ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50052]
I0515 22:49:35.827958  109070 wrap.go:47] PATCH /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events/ppod-4.159efce347163494: (8.224809ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50054]
I0515 22:49:35.830289  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-5: (9.424973ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50048]
I0515 22:49:35.830687  109070 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0515 22:49:35.831440  109070 wrap.go:47] PATCH /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events/ppod-5.159efce347c7475f: (2.630158ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50054]
I0515 22:49:35.833913  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-6
I0515 22:49:35.833937  109070 scheduler.go:452] Attempting to schedule pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-6
I0515 22:49:35.834081  109070 factory.go:649] Unable to schedule preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-6: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0515 22:49:35.834130  109070 factory.go:720] Updating pod condition for preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-6 to (PodScheduled==False, Reason=Unschedulable)
I0515 22:49:35.836437  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/preemptor-pod: (5.3817ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50048]
I0515 22:49:35.837835  109070 preemption_test.go:583] Check unschedulable pods still exists and were never scheduled...
I0515 22:49:35.839229  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-6: (3.843853ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50052]
I0515 22:49:35.840324  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-6: (4.45667ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50054]
I0515 22:49:35.840616  109070 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0515 22:49:35.841243  109070 wrap.go:47] PATCH /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events/ppod-6.159efce3486b6cb6: (3.211374ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50060]
I0515 22:49:35.842352  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-7
I0515 22:49:35.842377  109070 scheduler.go:452] Attempting to schedule pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-7
I0515 22:49:35.842523  109070 factory.go:649] Unable to schedule preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-7: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0515 22:49:35.842573  109070 factory.go:720] Updating pod condition for preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-7 to (PodScheduled==False, Reason=Unschedulable)
I0515 22:49:35.854107  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-0: (14.380126ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50048]
I0515 22:49:35.855097  109070 wrap.go:47] PATCH /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events/ppod-7.159efce34943fc1e: (3.294978ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50062]
I0515 22:49:35.855706  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-7: (2.450223ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50054]
I0515 22:49:35.856581  109070 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0515 22:49:35.857009  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-9
I0515 22:49:35.857033  109070 scheduler.go:452] Attempting to schedule pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-9
I0515 22:49:35.857143  109070 factory.go:649] Unable to schedule preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-9: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0515 22:49:35.857191  109070 factory.go:720] Updating pod condition for preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-9 to (PodScheduled==False, Reason=Unschedulable)
I0515 22:49:35.860365  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-1: (4.368145ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50048]
I0515 22:49:35.861260  109070 wrap.go:47] PATCH /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events/ppod-9.159efce34a3793c7: (2.724594ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50064]
I0515 22:49:35.866955  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-9: (7.11864ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50054]
I0515 22:49:35.868138  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-2: (6.751898ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50048]
I0515 22:49:35.868252  109070 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0515 22:49:35.867371  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-7: (14.400856ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50052]
I0515 22:49:35.867640  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-9: (8.153665ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50062]
I0515 22:49:35.867793  109070 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0515 22:49:35.867810  109070 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0515 22:49:35.867898  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-5: (46.846732ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50056]
I0515 22:49:35.869162  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-11
I0515 22:49:35.869198  109070 scheduler.go:452] Attempting to schedule pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-11
I0515 22:49:35.869328  109070 factory.go:649] Unable to schedule preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-11: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0515 22:49:35.869376  109070 factory.go:720] Updating pod condition for preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-11 to (PodScheduled==False, Reason=Unschedulable)
I0515 22:49:35.869261  109070 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0515 22:49:35.872202  109070 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0515 22:49:35.873642  109070 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0515 22:49:35.878065  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-3: (4.155129ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50054]
I0515 22:49:35.878648  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-11: (6.948576ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50064]
I0515 22:49:35.881341  109070 wrap.go:47] PATCH /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events/ppod-11.159efce34bc20153: (8.739108ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50066]
I0515 22:49:35.881360  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-11: (9.599849ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50056]
I0515 22:49:35.881945  109070 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0515 22:49:35.882257  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-12
I0515 22:49:35.882317  109070 scheduler.go:452] Attempting to schedule pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-12
I0515 22:49:35.882484  109070 factory.go:649] Unable to schedule preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-12: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0515 22:49:35.893612  109070 factory.go:720] Updating pod condition for preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-12 to (PodScheduled==False, Reason=Unschedulable)
I0515 22:49:35.902107  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-12: (6.606016ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50064]
I0515 22:49:35.902532  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-12: (6.756345ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50068]
I0515 22:49:35.902836  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-4: (11.40015ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50066]
I0515 22:49:35.903245  109070 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0515 22:49:35.903477  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-13
I0515 22:49:35.903527  109070 scheduler.go:452] Attempting to schedule pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-13
I0515 22:49:35.903669  109070 factory.go:649] Unable to schedule preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-13: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0515 22:49:35.903722  109070 factory.go:720] Updating pod condition for preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-13 to (PodScheduled==False, Reason=Unschedulable)
I0515 22:49:35.906851  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-13: (2.77349ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50066]
I0515 22:49:35.907836  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-13: (3.900144ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50068]
I0515 22:49:35.908281  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-5: (4.889804ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50064]
I0515 22:49:35.908868  109070 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0515 22:49:35.909164  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-17
I0515 22:49:35.909198  109070 scheduler.go:452] Attempting to schedule pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-17
I0515 22:49:35.909302  109070 factory.go:649] Unable to schedule preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-17: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0515 22:49:35.909345  109070 factory.go:720] Updating pod condition for preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-17 to (PodScheduled==False, Reason=Unschedulable)
I0515 22:49:35.917832  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-17: (6.961369ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50068]
I0515 22:49:35.918132  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-17: (7.666298ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50066]
I0515 22:49:35.918588  109070 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0515 22:49:35.919062  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-19
I0515 22:49:35.919081  109070 scheduler.go:452] Attempting to schedule pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-19
I0515 22:49:35.919213  109070 factory.go:649] Unable to schedule preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-19: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0515 22:49:35.919257  109070 factory.go:720] Updating pod condition for preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-19 to (PodScheduled==False, Reason=Unschedulable)
I0515 22:49:35.931385  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-6: (18.018693ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50064]
I0515 22:49:35.935823  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-7: (3.797951ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50064]
I0515 22:49:35.936587  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-19: (16.778345ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50068]
I0515 22:49:35.936956  109070 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0515 22:49:35.937199  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-20
I0515 22:49:35.937220  109070 scheduler.go:452] Attempting to schedule pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-20
I0515 22:49:35.937364  109070 factory.go:649] Unable to schedule preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-20: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0515 22:49:35.937560  109070 factory.go:720] Updating pod condition for preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-20 to (PodScheduled==False, Reason=Unschedulable)
I0515 22:49:35.939400  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-19: (19.66117ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50066]
I0515 22:49:35.940113  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-8: (3.677661ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50064]
I0515 22:49:35.941635  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-20: (3.354229ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50072]
I0515 22:49:35.942099  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-20: (3.384798ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50068]
I0515 22:49:35.943710  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-9: (3.267949ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50064]
I0515 22:49:35.943959  109070 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0515 22:49:35.944724  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-22
I0515 22:49:35.944757  109070 scheduler.go:452] Attempting to schedule pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-22
I0515 22:49:35.944862  109070 factory.go:649] Unable to schedule preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-22: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0515 22:49:35.944904  109070 factory.go:720] Updating pod condition for preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-22 to (PodScheduled==False, Reason=Unschedulable)
I0515 22:49:35.952008  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-10: (5.680115ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50072]
I0515 22:49:35.952308  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-22: (6.8187ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50074]
I0515 22:49:35.963807  109070 wrap.go:47] PATCH /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events/ppod-12.159efce34d4fd9cd: (68.509245ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50070]
I0515 22:49:35.972718  109070 wrap.go:47] PATCH /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events/ppod-13.159efce34da48032: (8.108788ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50070]
I0515 22:49:35.974755  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-11: (21.806425ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50072]
I0515 22:49:35.975056  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-22: (28.985492ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50066]
I0515 22:49:35.975300  109070 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0515 22:49:35.975836  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-24
I0515 22:49:35.975855  109070 scheduler.go:452] Attempting to schedule pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-24
I0515 22:49:35.975974  109070 factory.go:649] Unable to schedule preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-24: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0515 22:49:35.976033  109070 factory.go:720] Updating pod condition for preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-24 to (PodScheduled==False, Reason=Unschedulable)
I0515 22:49:35.977034  109070 wrap.go:47] PATCH /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events/ppod-17.159efce34fa4481b: (3.316843ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50070]
I0515 22:49:35.979179  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-24: (2.984727ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50066]
I0515 22:49:35.979832  109070 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0515 22:49:35.980073  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-26
I0515 22:49:35.980088  109070 scheduler.go:452] Attempting to schedule pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-26
I0515 22:49:35.980174  109070 factory.go:649] Unable to schedule preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-26: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0515 22:49:35.980216  109070 factory.go:720] Updating pod condition for preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-26 to (PodScheduled==False, Reason=Unschedulable)
I0515 22:49:35.981715  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-24: (5.389454ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50074]
I0515 22:49:35.982866  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-26: (1.659498ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50066]
I0515 22:49:35.983172  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-26: (2.32039ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50076]
I0515 22:49:35.983881  109070 wrap.go:47] PATCH /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events/ppod-19.159efce35036157a: (5.836619ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50070]
I0515 22:49:35.986345  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-12: (11.136758ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50072]
I0515 22:49:35.987638  109070 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0515 22:49:35.987878  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-27
I0515 22:49:35.987899  109070 scheduler.go:452] Attempting to schedule pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-27
I0515 22:49:35.988013  109070 factory.go:649] Unable to schedule preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-27: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0515 22:49:35.988058  109070 factory.go:720] Updating pod condition for preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-27 to (PodScheduled==False, Reason=Unschedulable)
I0515 22:49:35.992843  109070 wrap.go:47] PATCH /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events/ppod-20.159efce350791e77: (7.761146ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50076]
I0515 22:49:35.999946  109070 wrap.go:47] PATCH /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events/ppod-22.159efce35117e1e1: (6.384423ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50076]
I0515 22:49:36.024359  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-27: (35.964961ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50074]
I0515 22:49:36.024837  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-27: (36.599901ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50066]
I0515 22:49:36.025133  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-13: (38.197365ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50072]
I0515 22:49:36.025275  109070 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0515 22:49:36.025601  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-29
I0515 22:49:36.025619  109070 scheduler.go:452] Attempting to schedule pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-29
I0515 22:49:36.025726  109070 factory.go:649] Unable to schedule preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-29: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0515 22:49:36.025766  109070 factory.go:720] Updating pod condition for preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-29 to (PodScheduled==False, Reason=Unschedulable)
I0515 22:49:36.027690  109070 wrap.go:47] PATCH /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events/ppod-24.159efce351cde3ec: (27.033157ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50076]
I0515 22:49:36.030721  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-14: (4.352526ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50066]
I0515 22:49:36.032345  109070 wrap.go:47] PATCH /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events/ppod-26.159efce352b4a114: (4.058585ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50076]
I0515 22:49:36.035802  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-15: (4.622935ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50066]
I0515 22:49:36.036133  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-29: (9.200429ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50078]
I0515 22:49:36.036479  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-29: (10.140149ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50074]
I0515 22:49:36.036814  109070 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0515 22:49:36.037295  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-30
I0515 22:49:36.037313  109070 scheduler.go:452] Attempting to schedule pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-30
I0515 22:49:36.037419  109070 factory.go:649] Unable to schedule preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-30: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0515 22:49:36.037480  109070 factory.go:720] Updating pod condition for preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-30 to (PodScheduled==False, Reason=Unschedulable)
I0515 22:49:36.047732  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-30: (9.056454ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50080]
I0515 22:49:36.048080  109070 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0515 22:49:36.048145  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-30: (10.380328ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50074]
I0515 22:49:36.048548  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-8
I0515 22:49:36.048567  109070 scheduler.go:452] Attempting to schedule pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-8
I0515 22:49:36.048719  109070 factory.go:649] Unable to schedule preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-8: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0515 22:49:36.048786  109070 factory.go:720] Updating pod condition for preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-8 to (PodScheduled==False, Reason=Unschedulable)
I0515 22:49:36.049459  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-16: (12.035758ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50078]
I0515 22:49:36.050871  109070 wrap.go:47] PATCH /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events/ppod-27.159efce3531714e9: (17.80274ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50076]
I0515 22:49:36.051669  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-8: (2.083467ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50074]
I0515 22:49:36.051799  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-8: (2.644486ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50080]
I0515 22:49:36.052205  109070 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0515 22:49:36.052392  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-31
I0515 22:49:36.052408  109070 scheduler.go:452] Attempting to schedule pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-31
I0515 22:49:36.052618  109070 factory.go:649] Unable to schedule preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-31: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0515 22:49:36.052664  109070 factory.go:720] Updating pod condition for preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-31 to (PodScheduled==False, Reason=Unschedulable)
I0515 22:49:36.059071  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-31: (6.151771ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50080]
I0515 22:49:36.059424  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-31: (6.543116ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50074]
I0515 22:49:36.059763  109070 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0515 22:49:36.059956  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-32
I0515 22:49:36.060004  109070 scheduler.go:452] Attempting to schedule pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-32
I0515 22:49:36.060125  109070 factory.go:649] Unable to schedule preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-32: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0515 22:49:36.060242  109070 factory.go:720] Updating pod condition for preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-32 to (PodScheduled==False, Reason=Unschedulable)
I0515 22:49:36.062746  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-32: (1.542799ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50074]
I0515 22:49:36.063035  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-32: (2.456593ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50080]
I0515 22:49:36.063290  109070 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0515 22:49:36.063634  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-33
I0515 22:49:36.063672  109070 scheduler.go:452] Attempting to schedule pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-33
I0515 22:49:36.063785  109070 factory.go:649] Unable to schedule preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-33: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0515 22:49:36.063830  109070 factory.go:720] Updating pod condition for preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-33 to (PodScheduled==False, Reason=Unschedulable)
I0515 22:49:36.065980  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-33: (1.50679ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50074]
I0515 22:49:36.066305  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-33: (2.185279ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50080]
I0515 22:49:36.066753  109070 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0515 22:49:36.066954  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-10
I0515 22:49:36.066975  109070 scheduler.go:452] Attempting to schedule pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-10
I0515 22:49:36.067084  109070 factory.go:649] Unable to schedule preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-10: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0515 22:49:36.067127  109070 factory.go:720] Updating pod condition for preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-10 to (PodScheduled==False, Reason=Unschedulable)
I0515 22:49:36.076309  109070 wrap.go:47] PATCH /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events/ppod-29.159efce353b23782: (24.504826ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50076]
I0515 22:49:36.091953  109070 wrap.go:47] PATCH /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events/ppod-30.159efce3540ddf14: (14.539712ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50076]
I0515 22:49:36.094134  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-10: (26.312116ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50074]
I0515 22:49:36.094381  109070 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0515 22:49:36.094481  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-10: (26.997202ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50080]
I0515 22:49:36.094788  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-17: (44.644949ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50078]
I0515 22:49:36.094927  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-35
I0515 22:49:36.094943  109070 scheduler.go:452] Attempting to schedule pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-35
I0515 22:49:36.095058  109070 factory.go:649] Unable to schedule preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-35: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0515 22:49:36.095107  109070 factory.go:720] Updating pod condition for preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-35 to (PodScheduled==False, Reason=Unschedulable)
I0515 22:49:36.099897  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-35: (2.639146ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50074]
I0515 22:49:36.100199  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-35: (2.47467ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50080]
I0515 22:49:36.100433  109070 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0515 22:49:36.100992  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-18: (2.071212ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50078]
I0515 22:49:36.102543  109070 wrap.go:47] PATCH /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events/ppod-8.159efce349a06990: (9.781154ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50076]
I0515 22:49:36.102548  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-36
I0515 22:49:36.102749  109070 scheduler.go:452] Attempting to schedule pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-36
I0515 22:49:36.102878  109070 factory.go:649] Unable to schedule preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-36: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0515 22:49:36.102940  109070 factory.go:720] Updating pod condition for preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-36 to (PodScheduled==False, Reason=Unschedulable)
I0515 22:49:36.104699  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-36: (1.529158ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50074]
I0515 22:49:36.104936  109070 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0515 22:49:36.105351  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-36: (1.764251ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50080]
I0515 22:49:36.107028  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-37
I0515 22:49:36.107051  109070 scheduler.go:452] Attempting to schedule pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-37
I0515 22:49:36.107173  109070 factory.go:649] Unable to schedule preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-37: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0515 22:49:36.107221  109070 factory.go:720] Updating pod condition for preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-37 to (PodScheduled==False, Reason=Unschedulable)
I0515 22:49:36.111418  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-37: (3.307896ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50080]
I0515 22:49:36.111806  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-37: (4.101955ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50074]
I0515 22:49:36.112308  109070 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0515 22:49:36.112576  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-38
I0515 22:49:36.112599  109070 scheduler.go:452] Attempting to schedule pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-38
I0515 22:49:36.112721  109070 factory.go:649] Unable to schedule preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-38: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0515 22:49:36.112773  109070 factory.go:720] Updating pod condition for preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-38 to (PodScheduled==False, Reason=Unschedulable)
I0515 22:49:36.115994  109070 wrap.go:47] PATCH /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events/ppod-31.159efce35471c0f5: (11.995452ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50076]
I0515 22:49:36.116381  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-38: (2.79302ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50080]
I0515 22:49:36.116680  109070 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0515 22:49:36.117040  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-19: (1.924681ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50082]
I0515 22:49:36.117667  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-40
I0515 22:49:36.117688  109070 scheduler.go:452] Attempting to schedule pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-40
I0515 22:49:36.117796  109070 factory.go:649] Unable to schedule preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-40: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0515 22:49:36.117843  109070 factory.go:720] Updating pod condition for preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-40 to (PodScheduled==False, Reason=Unschedulable)
I0515 22:49:36.119172  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-38: (5.90291ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50074]
I0515 22:49:36.120385  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-40: (2.259075ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50080]
I0515 22:49:36.120963  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-40: (2.541495ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50076]
I0515 22:49:36.121252  109070 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0515 22:49:36.121479  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-41
I0515 22:49:36.121525  109070 scheduler.go:452] Attempting to schedule pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-41
I0515 22:49:36.121630  109070 factory.go:649] Unable to schedule preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-41: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0515 22:49:36.121673  109070 factory.go:720] Updating pod condition for preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-41 to (PodScheduled==False, Reason=Unschedulable)
I0515 22:49:36.128077  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-41: (5.716722ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50084]
I0515 22:49:36.128684  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-20: (9.180922ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50082]
I0515 22:49:36.129340  109070 wrap.go:47] PATCH /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events/ppod-32.159efce354dd405f: (7.640516ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50080]
I0515 22:49:36.131625  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-41: (9.058083ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50074]
I0515 22:49:36.132234  109070 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0515 22:49:36.133016  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-42
I0515 22:49:36.133038  109070 scheduler.go:452] Attempting to schedule pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-42
I0515 22:49:36.133156  109070 factory.go:649] Unable to schedule preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-42: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0515 22:49:36.133197  109070 factory.go:720] Updating pod condition for preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-42 to (PodScheduled==False, Reason=Unschedulable)
I0515 22:49:36.134573  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-21: (2.377936ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50082]
I0515 22:49:36.135480  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-42: (1.788672ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50084]
I0515 22:49:36.135623  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-42: (1.879869ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50086]
I0515 22:49:36.136098  109070 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0515 22:49:36.136260  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-43
I0515 22:49:36.136278  109070 scheduler.go:452] Attempting to schedule pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-43
I0515 22:49:36.136387  109070 factory.go:649] Unable to schedule preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-43: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0515 22:49:36.136441  109070 factory.go:720] Updating pod condition for preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-43 to (PodScheduled==False, Reason=Unschedulable)
I0515 22:49:36.136459  109070 wrap.go:47] PATCH /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events/ppod-33.159efce355474632: (3.416769ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50074]
I0515 22:49:36.137845  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-22: (2.855445ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50082]
I0515 22:49:36.140279  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-43: (3.621867ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50086]
I0515 22:49:36.141192  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-23: (2.779418ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50082]
I0515 22:49:36.141570  109070 wrap.go:47] PATCH /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events/ppod-10.159efce34b184449: (4.534304ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50074]
I0515 22:49:36.143646  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-24: (1.572363ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50074]
I0515 22:49:36.145661  109070 wrap.go:47] PATCH /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events/ppod-35.159efce35637d526: (3.252593ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50086]
I0515 22:49:36.145923  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-43: (9.259502ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50084]
I0515 22:49:36.146464  109070 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0515 22:49:36.146761  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-44
I0515 22:49:36.146820  109070 scheduler.go:452] Attempting to schedule pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-44
I0515 22:49:36.146971  109070 factory.go:649] Unable to schedule preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-44: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0515 22:49:36.147069  109070 factory.go:720] Updating pod condition for preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-44 to (PodScheduled==False, Reason=Unschedulable)
I0515 22:49:36.150057  109070 wrap.go:47] PATCH /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events/ppod-36.159efce35689ce93: (3.838693ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50086]
I0515 22:49:36.153617  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-44: (5.798443ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50084]
I0515 22:49:36.153773  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-25: (9.142688ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50074]
I0515 22:49:36.153891  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-44: (6.003726ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50088]
I0515 22:49:36.154158  109070 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0515 22:49:36.154340  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-45
I0515 22:49:36.154362  109070 scheduler.go:452] Attempting to schedule pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-45
I0515 22:49:36.154659  109070 factory.go:649] Unable to schedule preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-45: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0515 22:49:36.154845  109070 factory.go:720] Updating pod condition for preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-45 to (PodScheduled==False, Reason=Unschedulable)
I0515 22:49:36.156184  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-45: (1.036889ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50074]
I0515 22:49:36.156264  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-26: (2.077972ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50084]
I0515 22:49:36.156426  109070 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0515 22:49:36.156640  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-46
I0515 22:49:36.156663  109070 scheduler.go:452] Attempting to schedule pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-46
I0515 22:49:36.156782  109070 factory.go:649] Unable to schedule preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-46: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0515 22:49:36.156846  109070 factory.go:720] Updating pod condition for preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-46 to (PodScheduled==False, Reason=Unschedulable)
I0515 22:49:36.157792  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-27: (1.093783ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50084]
I0515 22:49:36.158064  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-45: (3.046411ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50088]
I0515 22:49:36.159572  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-46: (1.920611ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50090]
I0515 22:49:36.159673  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-28: (1.350293ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50088]
I0515 22:49:36.160096  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-46: (3.060901ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50074]
I0515 22:49:36.160454  109070 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0515 22:49:36.160627  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-47
I0515 22:49:36.160642  109070 scheduler.go:452] Attempting to schedule pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-47
I0515 22:49:36.160774  109070 factory.go:649] Unable to schedule preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-47: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0515 22:49:36.160819  109070 factory.go:720] Updating pod condition for preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-47 to (PodScheduled==False, Reason=Unschedulable)
I0515 22:49:36.162464  109070 wrap.go:47] PATCH /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events/ppod-37.159efce3570343f9: (11.652776ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50086]
I0515 22:49:36.165855  109070 wrap.go:47] PATCH /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events/ppod-38.159efce35782bff5: (2.499737ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50086]
I0515 22:49:36.168898  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-47: (7.859832ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50084]
I0515 22:49:36.169293  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-29: (8.964463ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50088]
I0515 22:49:36.169636  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-47: (8.620643ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50090]
I0515 22:49:36.169926  109070 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0515 22:49:36.170670  109070 wrap.go:47] PATCH /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events/ppod-40.159efce3588c33e7: (3.343882ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50086]
I0515 22:49:36.170836  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-48
I0515 22:49:36.170859  109070 scheduler.go:452] Attempting to schedule pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-48
I0515 22:49:36.170977  109070 factory.go:649] Unable to schedule preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-48: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0515 22:49:36.171022  109070 factory.go:720] Updating pod condition for preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-48 to (PodScheduled==False, Reason=Unschedulable)
I0515 22:49:36.174201  109070 wrap.go:47] PATCH /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events/ppod-41.159efce358dea7b7: (2.234411ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50092]
I0515 22:49:36.203210  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-48: (31.724699ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50090]
I0515 22:49:36.203518  109070 wrap.go:47] PATCH /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events/ppod-42.159efce3593e202e: (28.390185ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50092]
I0515 22:49:36.203661  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-48: (32.20427ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50084]
I0515 22:49:36.203789  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-30: (33.492311ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50088]
I0515 22:49:36.204768  109070 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0515 22:49:36.204952  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-14
I0515 22:49:36.204973  109070 scheduler.go:452] Attempting to schedule pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-14
I0515 22:49:36.205089  109070 factory.go:649] Unable to schedule preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-14: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0515 22:49:36.205147  109070 factory.go:720] Updating pod condition for preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-14 to (PodScheduled==False, Reason=Unschedulable)
I0515 22:49:36.208539  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-31: (1.443412ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50084]
I0515 22:49:36.208831  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-14: (1.962154ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50090]
I0515 22:49:36.209082  109070 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0515 22:49:36.209204  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-14: (2.793563ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50094]
I0515 22:49:36.209287  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-49
I0515 22:49:36.209306  109070 scheduler.go:452] Attempting to schedule pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-49
I0515 22:49:36.209402  109070 factory.go:649] Unable to schedule preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-49: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0515 22:49:36.209442  109070 factory.go:720] Updating pod condition for preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-49 to (PodScheduled==False, Reason=Unschedulable)
I0515 22:49:36.213265  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-49: (3.53921ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50090]
I0515 22:49:36.213594  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-49: (3.950054ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50084]
I0515 22:49:36.213853  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-32: (3.203155ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50096]
I0515 22:49:36.214366  109070 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0515 22:49:36.215044  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-15
I0515 22:49:36.215061  109070 scheduler.go:452] Attempting to schedule pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-15
I0515 22:49:36.215162  109070 factory.go:649] Unable to schedule preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-15: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0515 22:49:36.215205  109070 factory.go:720] Updating pod condition for preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-15 to (PodScheduled==False, Reason=Unschedulable)
I0515 22:49:36.217482  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-15: (1.584857ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50090]
I0515 22:49:36.217754  109070 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0515 22:49:36.218078  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-33: (3.364921ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50096]
I0515 22:49:36.219704  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-34: (1.314691ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50096]
I0515 22:49:36.219952  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-15: (4.322779ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50084]
I0515 22:49:36.221105  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-16
I0515 22:49:36.221125  109070 scheduler.go:452] Attempting to schedule pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-16
I0515 22:49:36.221222  109070 factory.go:649] Unable to schedule preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-16: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0515 22:49:36.221262  109070 factory.go:720] Updating pod condition for preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-16 to (PodScheduled==False, Reason=Unschedulable)
I0515 22:49:36.223587  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-16: (1.684899ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50090]
I0515 22:49:36.224013  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-35: (3.290588ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50084]
I0515 22:49:36.224434  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-16: (2.507509ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50098]
I0515 22:49:36.226623  109070 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0515 22:49:36.227248  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-18
I0515 22:49:36.227274  109070 scheduler.go:452] Attempting to schedule pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-18
I0515 22:49:36.227406  109070 factory.go:649] Unable to schedule preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-18: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0515 22:49:36.227462  109070 factory.go:720] Updating pod condition for preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-18 to (PodScheduled==False, Reason=Unschedulable)
I0515 22:49:36.227569  109070 wrap.go:47] PATCH /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events/ppod-43.159efce35997ec8c: (22.801845ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50092]
I0515 22:49:36.230368  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-36: (3.480795ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50084]
I0515 22:49:36.232630  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-37: (1.772627ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50084]
I0515 22:49:36.232886  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-18: (2.143377ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50098]
I0515 22:49:36.232741  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-18: (4.013878ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50090]
I0515 22:49:36.233155  109070 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0515 22:49:36.233382  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-21
I0515 22:49:36.233400  109070 scheduler.go:452] Attempting to schedule pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-21
I0515 22:49:36.233538  109070 factory.go:649] Unable to schedule preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-21: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0515 22:49:36.233579  109070 factory.go:720] Updating pod condition for preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-21 to (PodScheduled==False, Reason=Unschedulable)
I0515 22:49:36.234400  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-38: (1.225319ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50084]
I0515 22:49:36.235241  109070 wrap.go:47] PATCH /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events/ppod-44.159efce359ff013b: (6.137637ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50092]
I0515 22:49:36.235544  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-21: (1.160597ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50090]
I0515 22:49:36.235780  109070 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0515 22:49:36.235904  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-23
I0515 22:49:36.235918  109070 scheduler.go:452] Attempting to schedule pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-23
I0515 22:49:36.236006  109070 factory.go:649] Unable to schedule preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-23: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0515 22:49:36.236062  109070 factory.go:720] Updating pod condition for preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-23 to (PodScheduled==False, Reason=Unschedulable)
I0515 22:49:36.237182  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-39: (1.628814ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50084]
I0515 22:49:36.238226  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-23: (1.878432ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50090]
I0515 22:49:36.238541  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-23: (1.877931ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50092]
I0515 22:49:36.238799  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-21: (1.994173ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50098]
I0515 22:49:36.240564  109070 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0515 22:49:36.240372  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-40: (1.19454ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50084]
I0515 22:49:36.242574  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-25
I0515 22:49:36.242600  109070 scheduler.go:452] Attempting to schedule pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-25
I0515 22:49:36.242710  109070 factory.go:649] Unable to schedule preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-25: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0515 22:49:36.242751  109070 factory.go:720] Updating pod condition for preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-25 to (PodScheduled==False, Reason=Unschedulable)
I0515 22:49:36.244727  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-41: (1.850669ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50084]
I0515 22:49:36.245151  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-25: (1.901986ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50098]
I0515 22:49:36.245415  109070 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0515 22:49:36.245622  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-34
I0515 22:49:36.245644  109070 scheduler.go:452] Attempting to schedule pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-34
I0515 22:49:36.245757  109070 factory.go:649] Unable to schedule preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-34: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0515 22:49:36.245805  109070 factory.go:720] Updating pod condition for preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-34 to (PodScheduled==False, Reason=Unschedulable)
I0515 22:49:36.249332  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-34: (2.374333ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50098]
I0515 22:49:36.249700  109070 wrap.go:47] PATCH /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events/ppod-45.159efce35a521a1e: (3.133185ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50092]
I0515 22:49:36.249752  109070 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0515 22:49:36.250085  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-25: (2.866445ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50084]
I0515 22:49:36.250384  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-39
I0515 22:49:36.250400  109070 scheduler.go:452] Attempting to schedule pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-39
I0515 22:49:36.250543  109070 factory.go:649] Unable to schedule preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-39: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0515 22:49:36.250587  109070 factory.go:720] Updating pod condition for preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-39 to (PodScheduled==False, Reason=Unschedulable)
I0515 22:49:36.251414  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-34: (4.887156ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50102]
I0515 22:49:36.252432  109070 wrap.go:47] PATCH /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events/ppod-46.159efce35a9ced3b: (2.196167ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50092]
I0515 22:49:36.253949  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-42: (4.975237ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50100]
I0515 22:49:36.254096  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-39: (3.181279ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50098]
I0515 22:49:36.254252  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-39: (3.207506ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50084]
I0515 22:49:36.254592  109070 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0515 22:49:36.254769  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-28
I0515 22:49:36.254787  109070 scheduler.go:452] Attempting to schedule pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-28
I0515 22:49:36.254886  109070 factory.go:649] Unable to schedule preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-28: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0515 22:49:36.254965  109070 factory.go:720] Updating pod condition for preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-28 to (PodScheduled==False, Reason=Unschedulable)
I0515 22:49:36.256365  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-43: (1.640345ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50098]
I0515 22:49:36.257317  109070 wrap.go:47] PATCH /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events/ppod-47.159efce35ae965b8: (4.207105ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50092]
I0515 22:49:36.259854  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-44: (2.076147ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50098]
I0515 22:49:36.260270  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-28: (3.489931ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50104]
I0515 22:49:36.260556  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-28: (3.68368ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50102]
I0515 22:49:36.260966  109070 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0515 22:49:36.262202  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-45: (1.540156ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50098]
I0515 22:49:36.263002  109070 wrap.go:47] PATCH /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events/ppod-48.159efce35b453991: (4.232939ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50092]
I0515 22:49:36.264631  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-46: (1.495859ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50102]
I0515 22:49:36.265594  109070 wrap.go:47] PATCH /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events/ppod-14.159efce34e53c98d: (1.998497ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50092]
I0515 22:49:36.268546  109070 wrap.go:47] PATCH /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events/ppod-49.159efce35bd6d273: (2.128258ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50092]
I0515 22:49:36.270910  109070 wrap.go:47] PATCH /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events/ppod-15.159efce34eea7f1b: (1.800951ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50092]
I0515 22:49:36.271038  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-47: (5.599634ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50102]
I0515 22:49:36.273712  109070 wrap.go:47] PATCH /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events/ppod-16.159efce34f45ce19: (2.083967ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50104]
I0515 22:49:36.274067  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-48: (2.636318ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50092]
I0515 22:49:36.275872  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-49: (1.446357ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50104]
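Every pending ppod above is rejected with the same reason string, "0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.", before the test moves on to cleanup. As a rough illustration only (not the scheduler's actual implementation), the shape of that per-resource fit check can be sketched as below; the `resources` type and `fitReasons` function are hypothetical stand-ins for the scheduler's request-vs-allocatable comparison that produces one reason per resource dimension that does not fit.

```go
// Minimal sketch, assuming the "Insufficient cpu/memory" reasons come from
// comparing a pod's requests against the node's remaining allocatable
// resources, one reason per dimension that does not fit.
package main

import "fmt"

// resources is a hypothetical, simplified stand-in: CPU in millicores,
// memory in bytes.
type resources struct {
	milliCPU int64
	memory   int64
}

// fitReasons returns why a pod with the given requests cannot fit on a node
// with the given free capacity; an empty slice means it fits.
func fitReasons(request, free resources) []string {
	var reasons []string
	if request.milliCPU > free.milliCPU {
		reasons = append(reasons, "Insufficient cpu")
	}
	if request.memory > free.memory {
		reasons = append(reasons, "Insufficient memory")
	}
	return reasons
}

func main() {
	// With a single node that has no spare CPU or memory, every pending pod
	// is unschedulable for both reasons, matching the log lines above.
	free := resources{milliCPU: 0, memory: 0}
	request := resources{milliCPU: 100, memory: 64 << 20}
	fmt.Printf("0/1 nodes are available: %v\n", fitReasons(request, free))
}
```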
I0515 22:49:36.276089  109070 preemption_test.go:598] Cleaning up all pods...
I0515 22:49:36.278616  109070 wrap.go:47] PATCH /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events/ppod-18.159efce34fedf687: (4.234547ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50092]
I0515 22:49:36.283012  109070 wrap.go:47] PATCH /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events/ppod-21.159efce350c89557: (2.917967ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50092]
I0515 22:49:36.286887  109070 cacher.go:739] cacher (*core.Event): 1 objects queued in incoming channel.
I0515 22:49:36.287269  109070 wrap.go:47] PATCH /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events/ppod-23.159efce351762d4f: (3.051421ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50092]
I0515 22:49:36.288070  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-0
I0515 22:49:36.288104  109070 scheduler.go:448] Skip schedule deleting pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-0
I0515 22:49:36.291087  109070 wrap.go:47] DELETE /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-0: (12.326727ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50104]
I0515 22:49:36.292657  109070 wrap.go:47] PATCH /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events/ppod-25.159efce352643e01: (3.384148ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50092]
I0515 22:49:36.296669  109070 wrap.go:47] PATCH /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events/ppod-34.159efce355bfc570: (3.019919ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50092]
I0515 22:49:36.296928  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-1
I0515 22:49:36.296977  109070 scheduler.go:448] Skip schedule deleting pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-1
I0515 22:49:36.300552  109070 wrap.go:47] DELETE /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-1: (9.07304ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50104]
I0515 22:49:36.302019  109070 wrap.go:47] PATCH /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events/ppod-39.159efce35811fb27: (4.424149ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50092]
I0515 22:49:36.304243  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-2
I0515 22:49:36.304298  109070 scheduler.go:448] Skip schedule deleting pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-2
I0515 22:49:36.307811  109070 wrap.go:47] PATCH /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events/ppod-28.159efce3536a777d: (5.130964ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50092]
I0515 22:49:36.310649  109070 wrap.go:47] DELETE /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-2: (9.42837ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50104]
I0515 22:49:36.312558  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events: (2.81982ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50092]
I0515 22:49:36.316557  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events: (2.285927ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50092]
I0515 22:49:36.317192  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-3
I0515 22:49:36.317229  109070 scheduler.go:448] Skip schedule deleting pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-3
I0515 22:49:36.318949  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events: (1.854437ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50092]
I0515 22:49:36.320683  109070 wrap.go:47] DELETE /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-3: (9.445678ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50104]
I0515 22:49:36.321595  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events: (2.237463ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50092]
I0515 22:49:36.324711  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-4
I0515 22:49:36.324804  109070 scheduler.go:448] Skip schedule deleting pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-4
I0515 22:49:36.327143  109070 wrap.go:47] DELETE /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-4: (5.806425ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50104]
I0515 22:49:36.329808  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events: (4.517604ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50092]
I0515 22:49:36.332461  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-5
I0515 22:49:36.332548  109070 scheduler.go:448] Skip schedule deleting pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-5
I0515 22:49:36.335063  109070 wrap.go:47] DELETE /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-5: (7.367777ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50104]
I0515 22:49:36.335894  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events: (3.060834ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50092]
I0515 22:49:36.339966  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-6
I0515 22:49:36.340016  109070 scheduler.go:448] Skip schedule deleting pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-6
I0515 22:49:36.341737  109070 wrap.go:47] DELETE /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-6: (5.240128ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50092]
I0515 22:49:36.342980  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events: (2.227066ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50104]
I0515 22:49:36.345397  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-7
I0515 22:49:36.345439  109070 scheduler.go:448] Skip schedule deleting pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-7
I0515 22:49:36.347670  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events: (1.799347ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50104]
I0515 22:49:36.348576  109070 wrap.go:47] DELETE /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-7: (6.463139ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50092]
I0515 22:49:36.352077  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-8
I0515 22:49:36.352116  109070 scheduler.go:448] Skip schedule deleting pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-8
I0515 22:49:36.354537  109070 wrap.go:47] DELETE /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-8: (5.334493ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50092]
I0515 22:49:36.354613  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events: (2.241419ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50104]
I0515 22:49:36.358647  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-9
I0515 22:49:36.358695  109070 scheduler.go:448] Skip schedule deleting pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-9
I0515 22:49:36.360695  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events: (1.670347ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50104]
I0515 22:49:36.362317  109070 wrap.go:47] DELETE /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-9: (7.283861ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50092]
I0515 22:49:36.366951  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-10
I0515 22:49:36.366991  109070 scheduler.go:448] Skip schedule deleting pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-10
I0515 22:49:36.369710  109070 wrap.go:47] DELETE /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-10: (6.450549ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50092]
I0515 22:49:36.370117  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events: (2.711703ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50104]
I0515 22:49:36.374883  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-11
I0515 22:49:36.374936  109070 scheduler.go:448] Skip schedule deleting pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-11
I0515 22:49:36.382137  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events: (2.651898ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50104]
I0515 22:49:36.382609  109070 wrap.go:47] DELETE /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-11: (11.052699ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50092]
I0515 22:49:36.389109  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-12
I0515 22:49:36.389161  109070 scheduler.go:448] Skip schedule deleting pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-12
I0515 22:49:36.392244  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events: (2.668279ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50104]
I0515 22:49:36.393925  109070 wrap.go:47] DELETE /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-12: (10.817443ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50092]
I0515 22:49:36.401468  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-13
I0515 22:49:36.401704  109070 scheduler.go:448] Skip schedule deleting pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-13
I0515 22:49:36.404012  109070 wrap.go:47] DELETE /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-13: (9.387885ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50092]
I0515 22:49:36.405574  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events: (3.416215ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50104]
I0515 22:49:36.409712  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-14
I0515 22:49:36.409755  109070 scheduler.go:448] Skip schedule deleting pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-14
I0515 22:49:36.412398  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events: (2.386132ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50104]
I0515 22:49:36.414394  109070 wrap.go:47] DELETE /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-14: (9.147287ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50092]
I0515 22:49:36.417726  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-15
I0515 22:49:36.417767  109070 scheduler.go:448] Skip schedule deleting pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-15
I0515 22:49:36.420557  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events: (1.964201ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50104]
I0515 22:49:36.420719  109070 wrap.go:47] DELETE /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-15: (5.943128ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50092]
I0515 22:49:36.424539  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-16
I0515 22:49:36.424587  109070 scheduler.go:448] Skip schedule deleting pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-16
I0515 22:49:36.428817  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events: (3.961606ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50104]
I0515 22:49:36.433028  109070 wrap.go:47] DELETE /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-16: (11.881081ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50092]
I0515 22:49:36.437781  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-17
I0515 22:49:36.437887  109070 scheduler.go:448] Skip schedule deleting pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-17
I0515 22:49:36.443224  109070 wrap.go:47] DELETE /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-17: (9.742655ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50092]
I0515 22:49:36.444580  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events: (6.089614ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50104]
I0515 22:49:36.449421  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-18
I0515 22:49:36.449582  109070 scheduler.go:448] Skip schedule deleting pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-18
I0515 22:49:36.451714  109070 wrap.go:47] DELETE /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-18: (7.333363ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50092]
I0515 22:49:36.452976  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events: (2.663348ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50104]
I0515 22:49:36.457622  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-19
I0515 22:49:36.457676  109070 scheduler.go:448] Skip schedule deleting pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-19
I0515 22:49:36.459197  109070 wrap.go:47] DELETE /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-19: (6.659705ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50092]
I0515 22:49:36.462045  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events: (3.845875ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50104]
I0515 22:49:36.464578  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-20
I0515 22:49:36.464636  109070 scheduler.go:448] Skip schedule deleting pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-20
I0515 22:49:36.467311  109070 wrap.go:47] DELETE /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-20: (7.509142ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50092]
I0515 22:49:36.472922  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-21
I0515 22:49:36.472976  109070 scheduler.go:448] Skip schedule deleting pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-21
I0515 22:49:36.474489  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events: (9.391092ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50104]
I0515 22:49:36.475714  109070 wrap.go:47] DELETE /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-21: (7.444461ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50092]
I0515 22:49:36.479796  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events: (3.400512ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50092]
I0515 22:49:36.482420  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-22
I0515 22:49:36.482516  109070 scheduler.go:448] Skip schedule deleting pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-22
I0515 22:49:36.484322  109070 wrap.go:47] DELETE /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-22: (6.965153ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50104]
I0515 22:49:36.484778  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events: (1.885812ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50092]
I0515 22:49:36.490363  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-23
I0515 22:49:36.490440  109070 scheduler.go:448] Skip schedule deleting pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-23
I0515 22:49:36.493571  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events: (1.880471ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50092]
I0515 22:49:36.493640  109070 wrap.go:47] DELETE /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-23: (8.653195ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50104]
I0515 22:49:36.497163  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-24
I0515 22:49:36.497209  109070 scheduler.go:448] Skip schedule deleting pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-24
I0515 22:49:36.500585  109070 wrap.go:47] DELETE /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-24: (6.441881ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50104]
I0515 22:49:36.500591  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events: (3.041397ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50092]
I0515 22:49:36.505134  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-25
I0515 22:49:36.505194  109070 scheduler.go:448] Skip schedule deleting pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-25
I0515 22:49:36.508758  109070 wrap.go:47] DELETE /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-25: (7.673386ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50104]
I0515 22:49:36.511088  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events: (5.174933ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50092]
I0515 22:49:36.512034  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-26
I0515 22:49:36.512086  109070 scheduler.go:448] Skip schedule deleting pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-26
I0515 22:49:36.513789  109070 wrap.go:47] DELETE /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-26: (4.638293ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50104]
I0515 22:49:36.514155  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events: (1.746472ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50092]
I0515 22:49:36.517460  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-27
I0515 22:49:36.517524  109070 scheduler.go:448] Skip schedule deleting pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-27
I0515 22:49:36.519723  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events: (1.948784ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50092]
I0515 22:49:36.519824  109070 wrap.go:47] DELETE /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-27: (5.617267ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50104]
I0515 22:49:36.523017  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-28
I0515 22:49:36.523258  109070 scheduler.go:448] Skip schedule deleting pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-28
I0515 22:49:36.524766  109070 wrap.go:47] DELETE /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-28: (4.588083ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50104]
I0515 22:49:36.526247  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events: (2.418842ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50092]
I0515 22:49:36.528394  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-29
I0515 22:49:36.528537  109070 scheduler.go:448] Skip schedule deleting pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-29
I0515 22:49:36.531250  109070 wrap.go:47] DELETE /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-29: (6.044122ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50104]
I0515 22:49:36.531438  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events: (2.557404ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50092]
I0515 22:49:36.534643  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-30
I0515 22:49:36.534692  109070 scheduler.go:448] Skip schedule deleting pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-30
I0515 22:49:36.536970  109070 wrap.go:47] DELETE /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-30: (5.219848ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50104]
I0515 22:49:36.537080  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events: (1.987445ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50092]
I0515 22:49:36.540199  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-31
I0515 22:49:36.540259  109070 scheduler.go:448] Skip schedule deleting pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-31
I0515 22:49:36.541903  109070 wrap.go:47] DELETE /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-31: (4.446805ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50092]
I0515 22:49:36.542882  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events: (1.736971ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50104]
I0515 22:49:36.546621  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-32
I0515 22:49:36.546665  109070 scheduler.go:448] Skip schedule deleting pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-32
I0515 22:49:36.550073  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events: (3.140754ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50104]
I0515 22:49:36.550151  109070 wrap.go:47] DELETE /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-32: (7.864432ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50092]
I0515 22:49:36.554864  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-33
I0515 22:49:36.554918  109070 scheduler.go:448] Skip schedule deleting pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-33
I0515 22:49:36.557705  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events: (2.134668ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50104]
I0515 22:49:36.557893  109070 wrap.go:47] DELETE /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-33: (7.352396ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50092]
I0515 22:49:36.562592  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-34
I0515 22:49:36.562647  109070 scheduler.go:448] Skip schedule deleting pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-34
I0515 22:49:36.566124  109070 wrap.go:47] DELETE /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-34: (6.901528ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50092]
I0515 22:49:36.569332  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events: (6.303326ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50104]
I0515 22:49:36.571300  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-35
I0515 22:49:36.571517  109070 scheduler.go:448] Skip schedule deleting pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-35
I0515 22:49:36.574641  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events: (2.494632ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50104]
I0515 22:49:36.575393  109070 wrap.go:47] DELETE /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-35: (8.560372ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50092]
I0515 22:49:36.579105  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-36
I0515 22:49:36.579153  109070 scheduler.go:448] Skip schedule deleting pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-36
I0515 22:49:36.581247  109070 wrap.go:47] DELETE /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-36: (4.9389ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50092]
I0515 22:49:36.582264  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events: (2.8159ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50104]
I0515 22:49:36.585874  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-37
I0515 22:49:36.586022  109070 scheduler.go:448] Skip schedule deleting pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-37
I0515 22:49:36.591955  109070 wrap.go:47] DELETE /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-37: (10.22132ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50092]
I0515 22:49:36.592068  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events: (5.043318ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50104]
I0515 22:49:36.596621  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-38
I0515 22:49:36.596680  109070 scheduler.go:448] Skip schedule deleting pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-38
I0515 22:49:36.597761  109070 wrap.go:47] DELETE /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-38: (4.915211ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50092]
I0515 22:49:36.600064  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events: (2.691839ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50104]
I0515 22:49:36.603790  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-39
I0515 22:49:36.604867  109070 wrap.go:47] DELETE /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-39: (5.633315ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50092]
I0515 22:49:36.605936  109070 scheduler.go:448] Skip schedule deleting pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-39
I0515 22:49:36.610297  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events: (3.985986ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50104]
I0515 22:49:36.611603  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-40
I0515 22:49:36.611661  109070 scheduler.go:448] Skip schedule deleting pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-40
I0515 22:49:36.614127  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events: (2.157095ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50104]
I0515 22:49:36.615242  109070 wrap.go:47] DELETE /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-40: (10.003718ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50092]
I0515 22:49:36.619168  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-41
I0515 22:49:36.619207  109070 scheduler.go:448] Skip schedule deleting pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-41
I0515 22:49:36.621594  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events: (1.954223ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50104]
I0515 22:49:36.623713  109070 wrap.go:47] DELETE /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-41: (7.905663ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50092]
I0515 22:49:36.630307  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-42
I0515 22:49:36.630512  109070 scheduler.go:448] Skip schedule deleting pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-42
I0515 22:49:36.633072  109070 wrap.go:47] DELETE /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-42: (8.812744ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50092]
I0515 22:49:36.633743  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events: (2.657514ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50104]
I0515 22:49:36.645146  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-43
I0515 22:49:36.645300  109070 scheduler.go:448] Skip schedule deleting pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-43
I0515 22:49:36.649134  109070 wrap.go:47] DELETE /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-43: (8.318661ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50092]
I0515 22:49:36.651115  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events: (5.218834ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50104]
I0515 22:49:36.652882  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-44
I0515 22:49:36.652987  109070 scheduler.go:448] Skip schedule deleting pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-44
I0515 22:49:36.655106  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events: (1.658419ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50104]
I0515 22:49:36.655753  109070 wrap.go:47] DELETE /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-44: (5.963177ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50092]
I0515 22:49:36.659410  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-45
I0515 22:49:36.659613  109070 scheduler.go:448] Skip schedule deleting pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-45
I0515 22:49:36.661157  109070 wrap.go:47] DELETE /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-45: (4.934066ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50092]
I0515 22:49:36.662534  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events: (2.50859ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50104]
I0515 22:49:36.665278  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-46
I0515 22:49:36.665354  109070 scheduler.go:448] Skip schedule deleting pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-46
I0515 22:49:36.669109  109070 wrap.go:47] DELETE /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-46: (7.492633ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50092]
I0515 22:49:36.673984  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-47
I0515 22:49:36.674040  109070 scheduler.go:448] Skip schedule deleting pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-47
I0515 22:49:36.676028  109070 wrap.go:47] DELETE /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-47: (6.461674ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50092]
I0515 22:49:36.679406  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events: (13.461846ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50104]
I0515 22:49:36.682710  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events: (2.608402ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50104]
I0515 22:49:36.683071  109070 wrap.go:47] DELETE /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-48: (6.376779ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50092]
I0515 22:49:36.684509  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-48
I0515 22:49:36.684616  109070 scheduler.go:448] Skip schedule deleting pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-48
I0515 22:49:36.687904  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-49
I0515 22:49:36.687945  109070 scheduler.go:448] Skip schedule deleting pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-49
I0515 22:49:36.688976  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events: (3.98173ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50104]
I0515 22:49:36.689593  109070 cacher.go:739] cacher (*core.Event): 1 objects queued in incoming channel.
I0515 22:49:36.690274  109070 wrap.go:47] DELETE /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-49: (6.690547ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50092]
I0515 22:49:36.692856  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events: (3.25862ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50104]
I0515 22:49:36.693317  109070 wrap.go:47] DELETE /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/rpod-0: (2.05311ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50092]
I0515 22:49:36.700603  109070 wrap.go:47] DELETE /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/rpod-1: (6.727446ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50092]
I0515 22:49:36.707878  109070 wrap.go:47] DELETE /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/preemptor-pod: (6.771148ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50092]
I0515 22:49:36.710912  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-0: (1.361287ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50092]
I0515 22:49:36.713850  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-1: (1.282266ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50092]
I0515 22:49:36.717021  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-2: (1.35966ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50092]
I0515 22:49:36.720299  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-3: (1.546796ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50092]
I0515 22:49:36.723292  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-4: (1.301591ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50092]
I0515 22:49:36.726938  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-5: (1.835916ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50092]
I0515 22:49:36.730624  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-6: (1.865547ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50092]
I0515 22:49:36.733673  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-7: (1.42024ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50092]
I0515 22:49:36.737091  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-8: (1.696597ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50092]
I0515 22:49:36.740244  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-9: (1.498404ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50092]
I0515 22:49:36.744359  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-10: (2.391939ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50092]
I0515 22:49:36.748817  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-11: (2.576037ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50092]
I0515 22:49:36.755244  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-12: (4.328137ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50092]
I0515 22:49:36.758336  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-13: (1.351856ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50092]
I0515 22:49:36.761557  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-14: (1.45654ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50092]
I0515 22:49:36.767524  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-15: (3.338107ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50092]
I0515 22:49:36.770764  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-16: (1.038319ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50092]
I0515 22:49:36.773606  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-17: (1.252373ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50092]
I0515 22:49:36.776562  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-18: (1.230911ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50092]
I0515 22:49:36.779791  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-19: (1.602816ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50092]
I0515 22:49:36.782802  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-20: (1.391295ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50092]
I0515 22:49:36.787573  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-21: (1.900259ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50092]
I0515 22:49:36.793349  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-22: (3.689592ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50092]
I0515 22:49:36.796675  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-23: (1.487999ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50092]
I0515 22:49:36.800036  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-24: (1.494223ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50092]
I0515 22:49:36.802849  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-25: (1.138792ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50092]
I0515 22:49:36.809650  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-26: (5.085712ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50092]
I0515 22:49:36.812705  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-27: (1.313131ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50092]
I0515 22:49:36.815741  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-28: (1.325743ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50092]
I0515 22:49:36.819455  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-29: (1.988996ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50092]
I0515 22:49:36.822893  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-30: (1.249879ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50092]
I0515 22:49:36.826484  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-31: (1.684239ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50092]
I0515 22:49:36.831381  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-32: (2.833695ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50092]
I0515 22:49:36.836132  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-33: (2.790537ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50092]
I0515 22:49:36.839945  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-34: (1.927199ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50092]
I0515 22:49:36.843733  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-35: (1.997599ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50092]
I0515 22:49:36.849565  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-36: (3.526451ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50092]
I0515 22:49:36.852892  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-37: (1.29599ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50092]
I0515 22:49:36.856830  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-38: (2.054199ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50092]
I0515 22:49:36.864848  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-39: (4.548561ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50092]
I0515 22:49:36.868790  109070 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0515 22:49:36.868931  109070 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0515 22:49:36.869628  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-40: (2.450283ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50092]
I0515 22:49:36.869671  109070 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0515 22:49:36.872390  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-41: (1.189459ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50092]
I0515 22:49:36.872401  109070 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0515 22:49:36.873889  109070 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0515 22:49:36.875698  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-42: (1.539238ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50092]
I0515 22:49:36.879610  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-43: (1.681921ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50092]
I0515 22:49:36.884340  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-44: (3.040721ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50092]
I0515 22:49:36.889262  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-45: (3.09645ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50092]
I0515 22:49:36.892901  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-46: (1.541611ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50092]
I0515 22:49:36.895804  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-47: (1.199057ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50092]
I0515 22:49:36.898732  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-48: (980.004µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50092]
I0515 22:49:36.902364  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-49: (1.276476ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50092]
I0515 22:49:36.909637  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/rpod-0: (5.437979ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50092]
I0515 22:49:36.912620  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/rpod-1: (1.238822ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50092]
I0515 22:49:36.915478  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/preemptor-pod: (1.337321ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50092]
I0515 22:49:36.918436  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods: (2.394226ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50092]
I0515 22:49:36.919031  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/rpod-0
I0515 22:49:36.919052  109070 scheduler.go:452] Attempting to schedule pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/rpod-0
I0515 22:49:36.919180  109070 scheduler_binder.go:256] AssumePodVolumes for pod "preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/rpod-0", node "node1"
I0515 22:49:36.919197  109070 scheduler_binder.go:266] AssumePodVolumes for pod "preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/rpod-0", node "node1": all PVCs bound and nothing to do
I0515 22:49:36.919248  109070 factory.go:711] Attempting to bind rpod-0 to node1
I0515 22:49:36.921110  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/rpod-0/binding: (1.478855ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50104]
I0515 22:49:36.921897  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods: (2.941753ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50092]
I0515 22:49:36.922215  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/rpod-1
I0515 22:49:36.922243  109070 scheduler.go:452] Attempting to schedule pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/rpod-1
I0515 22:49:36.922414  109070 scheduler_binder.go:256] AssumePodVolumes for pod "preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/rpod-1", node "node1"
I0515 22:49:36.922440  109070 scheduler_binder.go:266] AssumePodVolumes for pod "preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/rpod-1", node "node1": all PVCs bound and nothing to do
I0515 22:49:36.922539  109070 factory.go:711] Attempting to bind rpod-1 to node1
I0515 22:49:36.922946  109070 scheduler.go:589] pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/rpod-0 is bound successfully on node node1, 1 nodes evaluated, 1 nodes were found feasible
I0515 22:49:36.924378  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/rpod-1/binding: (1.585192ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50092]
I0515 22:49:36.924752  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events: (1.439704ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50104]
I0515 22:49:36.924926  109070 scheduler.go:589] pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/rpod-1 is bound successfully on node node1, 1 nodes evaluated, 1 nodes were found feasible
I0515 22:49:36.926781  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events: (1.386576ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50104]
I0515 22:49:37.027076  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/rpod-0: (1.974239ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50104]
I0515 22:49:37.130263  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/rpod-1: (2.261723ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50104]
I0515 22:49:37.130779  109070 preemption_test.go:561] Creating the preemptor pod...
I0515 22:49:37.133792  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/preemptor-pod
I0515 22:49:37.133811  109070 scheduler.go:452] Attempting to schedule pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/preemptor-pod
I0515 22:49:37.133950  109070 factory.go:649] Unable to schedule preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0515 22:49:37.134009  109070 factory.go:720] Updating pod condition for preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0515 22:49:37.137986  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events: (3.307631ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50162]
I0515 22:49:37.140811  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/preemptor-pod: (6.227239ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50160]
I0515 22:49:37.141139  109070 wrap.go:47] PUT /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/preemptor-pod/status: (6.859913ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50092]
E0515 22:49:37.142034  109070 factory.go:686] pod is already present in the activeQ
I0515 22:49:37.143052  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods: (11.949889ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50104]
I0515 22:49:37.143221  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/preemptor-pod: (1.222911ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50092]
I0515 22:49:37.143325  109070 preemption_test.go:567] Creating additional pods...
I0515 22:49:37.143586  109070 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
E0515 22:49:37.143706  109070 utils.go:79] pod.Status.StartTime is nil for pod rpod-1. Should not reach here.
E0515 22:49:37.143718  109070 utils.go:79] pod.Status.StartTime is nil for pod rpod-0. Should not reach here.
I0515 22:49:37.151361  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods: (7.719623ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50160]
I0515 22:49:37.151706  109070 wrap.go:47] PUT /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/preemptor-pod/status: (7.398071ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50162]
I0515 22:49:37.154829  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods: (2.423565ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50160]
I0515 22:49:37.159123  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods: (3.093203ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50160]
I0515 22:49:37.164095  109070 wrap.go:47] DELETE /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/rpod-0: (11.933935ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50162]
I0515 22:49:37.164407  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods: (4.500087ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50160]
I0515 22:49:37.165390  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/preemptor-pod
I0515 22:49:37.165407  109070 scheduler.go:452] Attempting to schedule pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/preemptor-pod
I0515 22:49:37.165589  109070 scheduler_binder.go:256] AssumePodVolumes for pod "preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/preemptor-pod", node "node1"
I0515 22:49:37.165609  109070 scheduler_binder.go:266] AssumePodVolumes for pod "preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/preemptor-pod", node "node1": all PVCs bound and nothing to do
I0515 22:49:37.165659  109070 factory.go:711] Attempting to bind preemptor-pod to node1
I0515 22:49:37.165728  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-0
I0515 22:49:37.165746  109070 scheduler.go:452] Attempting to schedule pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-0
I0515 22:49:37.165858  109070 factory.go:649] Unable to schedule preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-0: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0515 22:49:37.165900  109070 factory.go:720] Updating pod condition for preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-0 to (PodScheduled==False, Reason=Unschedulable)
I0515 22:49:37.167341  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events: (1.915799ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50160]
I0515 22:49:37.167565  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods: (2.167276ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50162]
I0515 22:49:37.170488  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events: (2.297179ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50206]
I0515 22:49:37.170594  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-0: (3.865674ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50202]
I0515 22:49:37.170974  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/preemptor-pod/binding: (4.274798ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50200]
I0515 22:49:37.171281  109070 scheduler.go:589] pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/preemptor-pod is bound successfully on node node1, 1 nodes evaluated, 1 nodes were found feasible
I0515 22:49:37.171949  109070 wrap.go:47] PUT /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-0/status: (4.075043ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50162]
I0515 22:49:37.172401  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods: (4.453901ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50160]
I0515 22:49:37.172989  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events: (1.446642ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50206]
I0515 22:49:37.174249  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-0: (1.01877ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50162]
I0515 22:49:37.174550  109070 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0515 22:49:37.174715  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-1
I0515 22:49:37.174731  109070 scheduler.go:452] Attempting to schedule pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-1
I0515 22:49:37.174812  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods: (1.608402ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50160]
I0515 22:49:37.174823  109070 factory.go:649] Unable to schedule preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-1: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0515 22:49:37.174868  109070 factory.go:720] Updating pod condition for preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-1 to (PodScheduled==False, Reason=Unschedulable)
I0515 22:49:37.176355  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-1: (1.153268ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50204]
I0515 22:49:37.176853  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events: (1.425629ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50210]
I0515 22:49:37.177151  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods: (1.608347ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50212]
I0515 22:49:37.177612  109070 wrap.go:47] PUT /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-1/status: (2.486901ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50206]
E0515 22:49:37.178046  109070 factory.go:686] pod is already present in the activeQ
I0515 22:49:37.179045  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods: (1.447654ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50210]
I0515 22:49:37.179096  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-1: (908.37µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50206]
I0515 22:49:37.179349  109070 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0515 22:49:37.179518  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-2
I0515 22:49:37.179535  109070 scheduler.go:452] Attempting to schedule pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-2
I0515 22:49:37.179615  109070 factory.go:649] Unable to schedule preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-2: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0515 22:49:37.179655  109070 factory.go:720] Updating pod condition for preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-2 to (PodScheduled==False, Reason=Unschedulable)
I0515 22:49:37.181739  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-2: (1.616959ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50214]
I0515 22:49:37.182080  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events: (1.903136ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50216]
I0515 22:49:37.182482  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods: (2.89124ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50206]
I0515 22:49:37.182705  109070 wrap.go:47] PUT /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-2/status: (2.825066ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50204]
E0515 22:49:37.183676  109070 factory.go:686] pod is already present in the activeQ
I0515 22:49:37.185474  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods: (1.340674ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50214]
I0515 22:49:37.185845  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-2: (1.732777ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50216]
I0515 22:49:37.186221  109070 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0515 22:49:37.186813  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-3
I0515 22:49:37.186877  109070 scheduler.go:452] Attempting to schedule pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-3
I0515 22:49:37.187007  109070 factory.go:649] Unable to schedule preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-3: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0515 22:49:37.187056  109070 factory.go:720] Updating pod condition for preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-3 to (PodScheduled==False, Reason=Unschedulable)
I0515 22:49:37.188549  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods: (2.42471ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50216]
I0515 22:49:37.189614  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-3: (1.384097ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50218]
I0515 22:49:37.189832  109070 wrap.go:47] PUT /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-3/status: (1.98689ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50214]
I0515 22:49:37.191356  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-3: (1.092819ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50214]
I0515 22:49:37.191648  109070 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0515 22:49:37.192018  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-4
I0515 22:49:37.192043  109070 scheduler.go:452] Attempting to schedule pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-4
I0515 22:49:37.192269  109070 factory.go:649] Unable to schedule preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-4: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0515 22:49:37.192330  109070 factory.go:720] Updating pod condition for preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-4 to (PodScheduled==False, Reason=Unschedulable)
I0515 22:49:37.192341  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events: (3.034094ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50220]
I0515 22:49:37.198090  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events: (5.182926ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50222]
I0515 22:49:37.198544  109070 wrap.go:47] PUT /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-4/status: (5.935958ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50218]
I0515 22:49:37.198807  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-4: (5.832246ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50220]
I0515 22:49:37.199051  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods: (6.576396ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50214]
I0515 22:49:37.203138  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-4: (2.426042ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50214]
I0515 22:49:37.203664  109070 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0515 22:49:37.203863  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-5
I0515 22:49:37.203886  109070 scheduler.go:452] Attempting to schedule pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-5
I0515 22:49:37.203978  109070 factory.go:649] Unable to schedule preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-5: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0515 22:49:37.204030  109070 factory.go:720] Updating pod condition for preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-5 to (PodScheduled==False, Reason=Unschedulable)
I0515 22:49:37.217971  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-5: (12.685165ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50224]
I0515 22:49:37.218351  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods: (12.32126ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50244]
I0515 22:49:37.218672  109070 wrap.go:47] PUT /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-5/status: (14.400004ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50214]
I0515 22:49:37.217980  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events: (13.078379ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50242]
I0515 22:49:37.223255  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods: (4.388349ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50244]
I0515 22:49:37.236608  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-5: (12.800905ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50242]
I0515 22:49:37.236965  109070 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0515 22:49:37.237371  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods: (13.258749ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50244]
I0515 22:49:37.237524  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-6
I0515 22:49:37.237539  109070 scheduler.go:452] Attempting to schedule pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-6
I0515 22:49:37.237625  109070 factory.go:649] Unable to schedule preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-6: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0515 22:49:37.237670  109070 factory.go:720] Updating pod condition for preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-6 to (PodScheduled==False, Reason=Unschedulable)
I0515 22:49:37.243141  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods: (2.385993ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50244]
I0515 22:49:37.243543  109070 wrap.go:47] PUT /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-6/status: (3.705821ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50242]
I0515 22:49:37.243915  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events: (3.559409ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50268]
I0515 22:49:37.244911  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-6: (5.116412ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50224]
E0515 22:49:37.245265  109070 factory.go:686] pod is already present in the activeQ
I0515 22:49:37.245780  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-6: (1.57228ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50242]
I0515 22:49:37.246101  109070 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0515 22:49:37.246324  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-7
I0515 22:49:37.246348  109070 scheduler.go:452] Attempting to schedule pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-7
I0515 22:49:37.246538  109070 factory.go:649] Unable to schedule preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-7: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0515 22:49:37.246602  109070 factory.go:720] Updating pod condition for preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-7 to (PodScheduled==False, Reason=Unschedulable)
I0515 22:49:37.259713  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods: (15.815033ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50244]
I0515 22:49:37.261973  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-7: (3.117682ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50224]
I0515 22:49:37.262391  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events: (3.593698ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50268]
I0515 22:49:37.262943  109070 wrap.go:47] PUT /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-7/status: (3.961156ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50272]
I0515 22:49:37.263376  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods: (3.145357ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50244]
I0515 22:49:37.264377  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-7: (992.332µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50272]
I0515 22:49:37.264653  109070 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0515 22:49:37.265234  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods: (1.412777ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50244]
I0515 22:49:37.265708  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-1
I0515 22:49:37.265729  109070 scheduler.go:452] Attempting to schedule pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-1
E0515 22:49:37.265734  109070 factory.go:686] pod is already present in the activeQ
I0515 22:49:37.265822  109070 factory.go:649] Unable to schedule preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-1: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0515 22:49:37.265862  109070 factory.go:720] Updating pod condition for preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-1 to (PodScheduled==False, Reason=Unschedulable)
I0515 22:49:37.268434  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods: (1.762258ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50224]
I0515 22:49:37.268593  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-1: (1.999057ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50268]
I0515 22:49:37.268751  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-1: (2.096442ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50280]
I0515 22:49:37.269088  109070 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0515 22:49:37.269430  109070 wrap.go:47] PATCH /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events/ppod-1.159efce3f891e2bd: (2.229786ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50284]
I0515 22:49:37.269655  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-8
I0515 22:49:37.269696  109070 scheduler.go:452] Attempting to schedule pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-8
I0515 22:49:37.269780  109070 factory.go:649] Unable to schedule preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-8: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0515 22:49:37.269826  109070 factory.go:720] Updating pod condition for preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-8 to (PodScheduled==False, Reason=Unschedulable)
I0515 22:49:37.271755  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-8: (1.392688ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50280]
I0515 22:49:37.272122  109070 wrap.go:47] PUT /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-8/status: (2.074938ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50268]
I0515 22:49:37.274724  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods: (5.753636ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50224]
I0515 22:49:37.275817  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-8: (1.339369ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50280]
I0515 22:49:37.276056  109070 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0515 22:49:37.276280  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-9
I0515 22:49:37.276296  109070 scheduler.go:452] Attempting to schedule pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-9
I0515 22:49:37.276400  109070 factory.go:649] Unable to schedule preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-9: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0515 22:49:37.276455  109070 factory.go:720] Updating pod condition for preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-9 to (PodScheduled==False, Reason=Unschedulable)
I0515 22:49:37.276824  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events: (1.532163ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50268]
I0515 22:49:37.279408  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events: (1.847554ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50268]
I0515 22:49:37.280762  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-9: (3.884773ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50224]
I0515 22:49:37.280788  109070 wrap.go:47] PUT /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-9/status: (3.706758ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50280]
I0515 22:49:37.287189  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-9: (1.373912ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50224]
I0515 22:49:37.287457  109070 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0515 22:49:37.287656  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-2
I0515 22:49:37.287683  109070 scheduler.go:452] Attempting to schedule pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-2
I0515 22:49:37.287791  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods: (9.286098ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50290]
I0515 22:49:37.287785  109070 factory.go:649] Unable to schedule preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-2: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0515 22:49:37.288076  109070 factory.go:720] Updating pod condition for preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-2 to (PodScheduled==False, Reason=Unschedulable)
I0515 22:49:37.295288  109070 wrap.go:47] PATCH /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events/ppod-2.159efce3f8daec8c: (5.927286ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50268]
I0515 22:49:37.295313  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-2: (4.613778ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50304]
I0515 22:49:37.295624  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods: (6.126037ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50302]
I0515 22:49:37.295677  109070 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0515 22:49:37.295705  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-2: (7.443558ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50224]
I0515 22:49:37.296179  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-10
I0515 22:49:37.296209  109070 scheduler.go:452] Attempting to schedule pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-10
I0515 22:49:37.296323  109070 factory.go:649] Unable to schedule preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-10: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0515 22:49:37.296367  109070 factory.go:720] Updating pod condition for preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-10 to (PodScheduled==False, Reason=Unschedulable)
I0515 22:49:37.297618  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods: (1.491856ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50304]
I0515 22:49:37.298600  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-10: (1.69621ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50308]
I0515 22:49:37.298680  109070 wrap.go:47] PUT /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-10/status: (1.875448ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50268]
E0515 22:49:37.299325  109070 factory.go:686] pod is already present in the activeQ
I0515 22:49:37.299120  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events: (2.116621ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50310]
I0515 22:49:37.299843  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods: (1.535732ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50304]
I0515 22:49:37.301904  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods: (1.36836ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50310]
I0515 22:49:37.302312  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-10: (2.581268ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50268]
I0515 22:49:37.302599  109070 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0515 22:49:37.302822  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-11
I0515 22:49:37.302845  109070 scheduler.go:452] Attempting to schedule pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-11
I0515 22:49:37.302937  109070 factory.go:649] Unable to schedule preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-11: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0515 22:49:37.303051  109070 factory.go:720] Updating pod condition for preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-11 to (PodScheduled==False, Reason=Unschedulable)
I0515 22:49:37.305618  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events: (1.885332ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50318]
I0515 22:49:37.307431  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-11: (3.988759ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50308]
I0515 22:49:37.307750  109070 wrap.go:47] PUT /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-11/status: (3.436448ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50268]
E0515 22:49:37.308120  109070 factory.go:686] pod is already present in the activeQ
I0515 22:49:37.308663  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods: (2.617643ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50318]
I0515 22:49:37.309681  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-11: (1.596272ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50268]
I0515 22:49:37.309907  109070 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0515 22:49:37.310067  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-12
I0515 22:49:37.310090  109070 scheduler.go:452] Attempting to schedule pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-12
I0515 22:49:37.310165  109070 factory.go:649] Unable to schedule preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-12: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0515 22:49:37.310216  109070 factory.go:720] Updating pod condition for preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-12 to (PodScheduled==False, Reason=Unschedulable)
I0515 22:49:37.335934  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events: (24.499312ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50328]
I0515 22:49:37.336153  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods: (27.102686ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50318]
I0515 22:49:37.336163  109070 wrap.go:47] PUT /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-12/status: (24.9878ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50268]
I0515 22:49:37.336529  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-12: (25.377859ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50316]
I0515 22:49:37.341411  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-12: (3.823922ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50268]
I0515 22:49:37.341718  109070 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0515 22:49:37.341892  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-13
I0515 22:49:37.341907  109070 scheduler.go:452] Attempting to schedule pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-13
I0515 22:49:37.342008  109070 factory.go:649] Unable to schedule preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-13: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0515 22:49:37.342051  109070 factory.go:720] Updating pod condition for preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-13 to (PodScheduled==False, Reason=Unschedulable)
I0515 22:49:37.342109  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods: (5.33136ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50328]
I0515 22:49:37.344350  109070 wrap.go:47] PUT /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-13/status: (1.932515ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50268]
I0515 22:49:37.345482  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events: (2.763137ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50328]
I0515 22:49:37.346332  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-13: (1.611035ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50268]
I0515 22:49:37.346636  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-13: (4.364295ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50316]
E0515 22:49:37.346874  109070 factory.go:686] pod is already present in the activeQ
I0515 22:49:37.347372  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods: (4.545123ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50350]
I0515 22:49:37.347572  109070 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0515 22:49:37.347793  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-14
I0515 22:49:37.347807  109070 scheduler.go:452] Attempting to schedule pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-14
I0515 22:49:37.347897  109070 factory.go:649] Unable to schedule preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-14: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0515 22:49:37.347933  109070 factory.go:720] Updating pod condition for preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-14 to (PodScheduled==False, Reason=Unschedulable)
I0515 22:49:37.350820  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-14: (1.569872ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50352]
I0515 22:49:37.351003  109070 wrap.go:47] PUT /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-14/status: (2.291858ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50328]
I0515 22:49:37.354333  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events: (5.589194ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50354]
I0515 22:49:37.355944  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-14: (4.527757ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50328]
I0515 22:49:37.356206  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods: (6.704888ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50316]
I0515 22:49:37.356720  109070 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0515 22:49:37.357515  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-15
I0515 22:49:37.357544  109070 scheduler.go:452] Attempting to schedule pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-15
I0515 22:49:37.357657  109070 factory.go:649] Unable to schedule preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-15: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0515 22:49:37.357710  109070 factory.go:720] Updating pod condition for preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-15 to (PodScheduled==False, Reason=Unschedulable)
I0515 22:49:37.358802  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods: (1.926999ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50354]
I0515 22:49:37.360692  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events: (1.898657ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50356]
I0515 22:49:37.361609  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-15: (2.947108ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50352]
I0515 22:49:37.361615  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods: (2.417188ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50354]
I0515 22:49:37.361832  109070 wrap.go:47] PUT /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-15/status: (3.877842ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50328]
I0515 22:49:37.363538  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-15: (1.279054ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50352]
I0515 22:49:37.363763  109070 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0515 22:49:37.363922  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-6
I0515 22:49:37.363981  109070 scheduler.go:452] Attempting to schedule pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-6
I0515 22:49:37.364135  109070 factory.go:649] Unable to schedule preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-6: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0515 22:49:37.364233  109070 factory.go:720] Updating pod condition for preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-6 to (PodScheduled==False, Reason=Unschedulable)
I0515 22:49:37.364323  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods: (1.882949ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50356]
I0515 22:49:37.366436  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-6: (1.299285ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50356]
I0515 22:49:37.367673  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-6: (1.670806ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50352]
I0515 22:49:37.367932  109070 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0515 22:49:37.368087  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-16
I0515 22:49:37.368101  109070 scheduler.go:452] Attempting to schedule pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-16
I0515 22:49:37.368177  109070 factory.go:649] Unable to schedule preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-16: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0515 22:49:37.368212  109070 factory.go:720] Updating pod condition for preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-16 to (PodScheduled==False, Reason=Unschedulable)
I0515 22:49:37.369198  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods: (4.074294ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50358]
I0515 22:49:37.370087  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-16: (1.303153ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50362]
I0515 22:49:37.371043  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods: (1.356158ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50358]
I0515 22:49:37.371059  109070 wrap.go:47] PUT /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-16/status: (2.587181ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50352]
I0515 22:49:37.373366  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-16: (1.827621ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50352]
I0515 22:49:37.373665  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods: (2.013988ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50362]
I0515 22:49:37.373861  109070 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0515 22:49:37.374120  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-17
I0515 22:49:37.374145  109070 scheduler.go:452] Attempting to schedule pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-17
I0515 22:49:37.374230  109070 factory.go:649] Unable to schedule preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-17: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0515 22:49:37.374265  109070 wrap.go:47] PATCH /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events/ppod-6.159efce3fc502028: (8.998083ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50360]
I0515 22:49:37.374278  109070 factory.go:720] Updating pod condition for preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-17 to (PodScheduled==False, Reason=Unschedulable)
I0515 22:49:37.375619  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods: (1.558089ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50362]
I0515 22:49:37.377628  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-17: (2.879264ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50352]
I0515 22:49:37.377795  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods: (1.793758ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50362]
I0515 22:49:37.379480  109070 wrap.go:47] PUT /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-17/status: (2.898119ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50364]
I0515 22:49:37.379873  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods: (1.682989ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50362]
I0515 22:49:37.382147  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-17: (996.757µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50364]
I0515 22:49:37.382422  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods: (2.068653ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50352]
I0515 22:49:37.382601  109070 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0515 22:49:37.383057  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-7
I0515 22:49:37.383083  109070 scheduler.go:452] Attempting to schedule pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-7
I0515 22:49:37.383185  109070 factory.go:649] Unable to schedule preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-7: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0515 22:49:37.383230  109070 factory.go:720] Updating pod condition for preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-7 to (PodScheduled==False, Reason=Unschedulable)
I0515 22:49:37.385302  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-7: (1.739216ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50356]
I0515 22:49:37.385566  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events: (10.829747ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50360]
I0515 22:49:37.385623  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods: (2.575713ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50364]
I0515 22:49:37.385704  109070 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0515 22:49:37.385835  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-18
I0515 22:49:37.385848  109070 scheduler.go:452] Attempting to schedule pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-18
I0515 22:49:37.385918  109070 factory.go:649] Unable to schedule preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-18: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0515 22:49:37.386030  109070 factory.go:720] Updating pod condition for preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-18 to (PodScheduled==False, Reason=Unschedulable)
I0515 22:49:37.387959  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events: (1.992998ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50360]
I0515 22:49:37.389026  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-18: (2.137451ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50368]
I0515 22:49:37.389745  109070 wrap.go:47] PUT /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-18/status: (3.517572ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50364]
I0515 22:49:37.390075  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-7: (4.856301ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50366]
I0515 22:49:37.391717  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-18: (910.679µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50364]
I0515 22:49:37.392069  109070 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0515 22:49:37.392305  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-19
I0515 22:49:37.392322  109070 scheduler.go:452] Attempting to schedule pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-19
I0515 22:49:37.392371  109070 wrap.go:47] PATCH /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events/ppod-7.159efce3fcd86dae: (3.564697ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50360]
I0515 22:49:37.392391  109070 factory.go:649] Unable to schedule preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-19: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0515 22:49:37.392427  109070 factory.go:720] Updating pod condition for preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-19 to (PodScheduled==False, Reason=Unschedulable)
I0515 22:49:37.394394  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-19: (1.569877ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50368]
I0515 22:49:37.394593  109070 wrap.go:47] PUT /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-19/status: (1.925917ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50364]
I0515 22:49:37.395139  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods: (8.781235ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50356]
I0515 22:49:37.396310  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events: (2.741684ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50370]
I0515 22:49:37.396588  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-19: (1.24611ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50364]
I0515 22:49:37.396824  109070 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0515 22:49:37.397335  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-20
I0515 22:49:37.397352  109070 scheduler.go:452] Attempting to schedule pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-20
I0515 22:49:37.397535  109070 factory.go:649] Unable to schedule preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-20: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0515 22:49:37.397599  109070 factory.go:720] Updating pod condition for preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-20 to (PodScheduled==False, Reason=Unschedulable)
I0515 22:49:37.398768  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events: (1.873507ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50370]
I0515 22:49:37.400899  109070 wrap.go:47] PUT /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-20/status: (2.604403ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50368]
I0515 22:49:37.401129  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-20: (2.778839ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50372]
I0515 22:49:37.401150  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events: (1.591307ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50370]
I0515 22:49:37.401270  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods: (3.670365ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50356]
E0515 22:49:37.402262  109070 factory.go:686] pod is already present in the activeQ
I0515 22:49:37.404345  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-20: (2.045796ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50372]
I0515 22:49:37.404395  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods: (2.010601ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50368]
I0515 22:49:37.405036  109070 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0515 22:49:37.405852  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-21
I0515 22:49:37.405964  109070 scheduler.go:452] Attempting to schedule pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-21
I0515 22:49:37.406067  109070 factory.go:649] Unable to schedule preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-21: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0515 22:49:37.406128  109070 factory.go:720] Updating pod condition for preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-21 to (PodScheduled==False, Reason=Unschedulable)
I0515 22:49:37.408850  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods: (3.525869ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50368]
I0515 22:49:37.409678  109070 wrap.go:47] PUT /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-21/status: (2.743353ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50374]
I0515 22:49:37.409993  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-21: (3.578435ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50372]
I0515 22:49:37.411896  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events: (4.681394ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50376]
I0515 22:49:37.413617  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-21: (2.992259ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50374]
I0515 22:49:37.414097  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods: (4.739366ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50368]
I0515 22:49:37.414757  109070 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0515 22:49:37.415153  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-22
I0515 22:49:37.415227  109070 scheduler.go:452] Attempting to schedule pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-22
I0515 22:49:37.415390  109070 factory.go:649] Unable to schedule preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-22: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0515 22:49:37.415582  109070 factory.go:720] Updating pod condition for preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-22 to (PodScheduled==False, Reason=Unschedulable)
I0515 22:49:37.418394  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods: (3.026011ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50376]
I0515 22:49:37.418717  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-22: (1.866598ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50378]
I0515 22:49:37.421482  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events: (4.611602ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50380]
I0515 22:49:37.421652  109070 wrap.go:47] PUT /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-22/status: (5.489226ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50372]
I0515 22:49:37.424156  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-22: (1.618922ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50380]
I0515 22:49:37.424620  109070 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0515 22:49:37.424798  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods: (5.562528ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50378]
I0515 22:49:37.425104  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-23
I0515 22:49:37.425289  109070 scheduler.go:452] Attempting to schedule pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-23
I0515 22:49:37.425574  109070 factory.go:649] Unable to schedule preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-23: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0515 22:49:37.425711  109070 factory.go:720] Updating pod condition for preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-23 to (PodScheduled==False, Reason=Unschedulable)
I0515 22:49:37.428457  109070 wrap.go:47] PUT /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-23/status: (2.421779ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50376]
I0515 22:49:37.428539  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events: (2.156018ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50382]
I0515 22:49:37.431235  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-23: (1.811339ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50380]
I0515 22:49:37.432003  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-23: (1.094706ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50376]
E0515 22:49:37.432159  109070 factory.go:686] pod is already present in the activeQ
I0515 22:49:37.432437  109070 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0515 22:49:37.432678  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-24
I0515 22:49:37.432713  109070 scheduler.go:452] Attempting to schedule pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-24
I0515 22:49:37.432795  109070 factory.go:649] Unable to schedule preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-24: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0515 22:49:37.432838  109070 factory.go:720] Updating pod condition for preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-24 to (PodScheduled==False, Reason=Unschedulable)
I0515 22:49:37.435253  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events: (1.925801ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50382]
I0515 22:49:37.436157  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-24: (1.340261ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50384]
I0515 22:49:37.438542  109070 wrap.go:47] PUT /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-24/status: (5.270521ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50380]
I0515 22:49:37.440402  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-24: (1.391322ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50384]
I0515 22:49:37.440774  109070 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0515 22:49:37.441157  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-10
I0515 22:49:37.441278  109070 scheduler.go:452] Attempting to schedule pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-10
I0515 22:49:37.441595  109070 factory.go:649] Unable to schedule preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-10: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0515 22:49:37.441765  109070 factory.go:720] Updating pod condition for preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-10 to (PodScheduled==False, Reason=Unschedulable)
I0515 22:49:37.443741  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-10: (1.744418ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50384]
I0515 22:49:37.444839  109070 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0515 22:49:37.445349  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-25
I0515 22:49:37.445414  109070 scheduler.go:452] Attempting to schedule pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-25
I0515 22:49:37.445620  109070 factory.go:649] Unable to schedule preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-25: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0515 22:49:37.446241  109070 factory.go:720] Updating pod condition for preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-25 to (PodScheduled==False, Reason=Unschedulable)
I0515 22:49:37.446652  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-10: (1.998913ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50384]
I0515 22:49:37.447867  109070 wrap.go:47] PATCH /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events/ppod-10.159efce3ffcfcc93: (3.855224ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50382]
I0515 22:49:37.448710  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-25: (1.397706ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50384]
I0515 22:49:37.450692  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events: (1.716051ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50382]
I0515 22:49:37.453910  109070 wrap.go:47] PUT /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-25/status: (6.815712ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50386]
I0515 22:49:37.456688  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-25: (1.397794ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50384]
I0515 22:49:37.457029  109070 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0515 22:49:37.457321  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-26
I0515 22:49:37.457351  109070 scheduler.go:452] Attempting to schedule pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-26
I0515 22:49:37.457481  109070 factory.go:649] Unable to schedule preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-26: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0515 22:49:37.457550  109070 factory.go:720] Updating pod condition for preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-26 to (PodScheduled==False, Reason=Unschedulable)
I0515 22:49:37.462086  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events: (2.447562ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50390]
I0515 22:49:37.462423  109070 wrap.go:47] PUT /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-26/status: (3.802302ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50384]
I0515 22:49:37.462525  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-26: (4.259217ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50388]
E0515 22:49:37.464439  109070 factory.go:686] pod is already present in the activeQ
I0515 22:49:37.466215  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-26: (2.116659ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50388]
I0515 22:49:37.466699  109070 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0515 22:49:37.467288  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-11
I0515 22:49:37.467374  109070 scheduler.go:452] Attempting to schedule pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-11
I0515 22:49:37.467576  109070 factory.go:649] Unable to schedule preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-11: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0515 22:49:37.467690  109070 factory.go:720] Updating pod condition for preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-11 to (PodScheduled==False, Reason=Unschedulable)
I0515 22:49:37.472742  109070 wrap.go:47] PATCH /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events/ppod-11.159efce40035af1e: (4.050938ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50392]
I0515 22:49:37.473262  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-11: (2.678159ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50390]
I0515 22:49:37.474017  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-11: (5.619594ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50388]
I0515 22:49:37.474864  109070 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0515 22:49:37.475032  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-27
I0515 22:49:37.475080  109070 scheduler.go:452] Attempting to schedule pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-27
I0515 22:49:37.475345  109070 factory.go:649] Unable to schedule preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-27: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0515 22:49:37.475606  109070 factory.go:720] Updating pod condition for preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-27 to (PodScheduled==False, Reason=Unschedulable)
I0515 22:49:37.479570  109070 wrap.go:47] PUT /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-27/status: (3.171508ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50390]
I0515 22:49:37.486243  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-27: (10.114391ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50392]
E0515 22:49:37.494619  109070 factory.go:686] pod is already present in the activeQ
I0515 22:49:37.495059  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-27: (12.08492ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50390]
I0515 22:49:37.495513  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events: (18.403954ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50394]
I0515 22:49:37.495935  109070 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0515 22:49:37.496195  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-28
I0515 22:49:37.496405  109070 scheduler.go:452] Attempting to schedule pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-28
I0515 22:49:37.496743  109070 factory.go:649] Unable to schedule preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-28: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0515 22:49:37.499540  109070 factory.go:720] Updating pod condition for preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-28 to (PodScheduled==False, Reason=Unschedulable)
I0515 22:49:37.503937  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-28: (6.242835ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50390]
I0515 22:49:37.504669  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events: (6.853686ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50392]
I0515 22:49:37.506897  109070 wrap.go:47] PUT /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-28/status: (5.119998ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50396]
I0515 22:49:37.510399  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-28: (2.059053ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50390]
I0515 22:49:37.510917  109070 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0515 22:49:37.511360  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-29
I0515 22:49:37.511561  109070 scheduler.go:452] Attempting to schedule pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-29
I0515 22:49:37.511842  109070 factory.go:649] Unable to schedule preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-29: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0515 22:49:37.511950  109070 factory.go:720] Updating pod condition for preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-29 to (PodScheduled==False, Reason=Unschedulable)
I0515 22:49:37.517987  109070 wrap.go:47] PUT /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-29/status: (4.177152ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50398]
I0515 22:49:37.518077  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events: (4.866472ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50392]
I0515 22:49:37.520233  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-29: (4.901968ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50390]
E0515 22:49:37.520736  109070 factory.go:686] pod is already present in the activeQ
I0515 22:49:37.521304  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-29: (1.234396ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50398]
I0515 22:49:37.521787  109070 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0515 22:49:37.522111  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-13
I0515 22:49:37.522168  109070 scheduler.go:452] Attempting to schedule pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-13
I0515 22:49:37.522309  109070 factory.go:649] Unable to schedule preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-13: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0515 22:49:37.522398  109070 factory.go:720] Updating pod condition for preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-13 to (PodScheduled==False, Reason=Unschedulable)
I0515 22:49:37.524459  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-13: (1.258996ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50390]
I0515 22:49:37.525654  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-13: (2.978077ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50392]
I0515 22:49:37.527233  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/preemptor-pod: (1.125546ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50390]
I0515 22:49:37.527824  109070 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0515 22:49:37.528105  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-30
I0515 22:49:37.528166  109070 scheduler.go:452] Attempting to schedule pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-30
I0515 22:49:37.528326  109070 factory.go:649] Unable to schedule preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-30: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0515 22:49:37.528439  109070 factory.go:720] Updating pod condition for preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-30 to (PodScheduled==False, Reason=Unschedulable)
I0515 22:49:37.530310  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-30: (1.469418ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50392]
I0515 22:49:37.533907  109070 wrap.go:47] PUT /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-30/status: (2.879669ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50392]
I0515 22:49:37.535825  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-30: (1.233161ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50392]
I0515 22:49:37.536302  109070 wrap.go:47] PATCH /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events/ppod-13.159efce40288dfde: (3.965077ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50390]
I0515 22:49:37.536383  109070 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0515 22:49:37.536594  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-31
I0515 22:49:37.536612  109070 scheduler.go:452] Attempting to schedule pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-31
I0515 22:49:37.536718  109070 factory.go:649] Unable to schedule preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-31: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0515 22:49:37.536767  109070 factory.go:720] Updating pod condition for preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-31 to (PodScheduled==False, Reason=Unschedulable)
I0515 22:49:37.538566  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-31: (1.155011ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50400]
I0515 22:49:37.539972  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events: (3.223012ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50390]
I0515 22:49:37.540606  109070 wrap.go:47] PUT /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-31/status: (3.397829ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50392]
I0515 22:49:37.543035  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-31: (1.969899ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50390]
I0515 22:49:37.543457  109070 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0515 22:49:37.544082  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-32
I0515 22:49:37.544157  109070 scheduler.go:452] Attempting to schedule pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-32
I0515 22:49:37.544292  109070 factory.go:649] Unable to schedule preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-32: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0515 22:49:37.544377  109070 factory.go:720] Updating pod condition for preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-32 to (PodScheduled==False, Reason=Unschedulable)
I0515 22:49:37.546744  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events: (2.699449ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50390]
I0515 22:49:37.546920  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-32: (1.623353ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50400]
I0515 22:49:37.550780  109070 wrap.go:47] PUT /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-32/status: (3.086867ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50400]
I0515 22:49:37.551736  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events: (1.689424ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50390]
I0515 22:49:37.552868  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-32: (1.637002ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50400]
I0515 22:49:37.553105  109070 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0515 22:49:37.553383  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-33
I0515 22:49:37.553405  109070 scheduler.go:452] Attempting to schedule pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-33
I0515 22:49:37.553551  109070 factory.go:649] Unable to schedule preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-33: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0515 22:49:37.553595  109070 factory.go:720] Updating pod condition for preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-33 to (PodScheduled==False, Reason=Unschedulable)
I0515 22:49:37.557095  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events: (2.621327ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50402]
I0515 22:49:37.558739  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-33: (4.090595ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50390]
I0515 22:49:37.558798  109070 wrap.go:47] PUT /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-33/status: (4.670882ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50400]
I0515 22:49:37.560330  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-33: (1.038802ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50390]
I0515 22:49:37.560677  109070 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0515 22:49:37.560846  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-34
I0515 22:49:37.560866  109070 scheduler.go:452] Attempting to schedule pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-34
I0515 22:49:37.561017  109070 factory.go:649] Unable to schedule preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-34: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0515 22:49:37.561065  109070 factory.go:720] Updating pod condition for preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-34 to (PodScheduled==False, Reason=Unschedulable)
I0515 22:49:37.563206  109070 wrap.go:47] PUT /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-34/status: (1.905498ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50390]
I0515 22:49:37.563520  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-34: (1.785944ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50402]
I0515 22:49:37.563936  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events: (1.767908ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50404]
I0515 22:49:37.565574  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-34: (1.570279ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50402]
I0515 22:49:37.565972  109070 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0515 22:49:37.566165  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-35
I0515 22:49:37.566188  109070 scheduler.go:452] Attempting to schedule pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-35
I0515 22:49:37.566275  109070 factory.go:649] Unable to schedule preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-35: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0515 22:49:37.566316  109070 factory.go:720] Updating pod condition for preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-35 to (PodScheduled==False, Reason=Unschedulable)
I0515 22:49:37.568438  109070 wrap.go:47] PUT /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-35/status: (1.85326ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50404]
I0515 22:49:37.568716  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events: (2.022003ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50390]
I0515 22:49:37.570302  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-35: (1.241ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50404]
E0515 22:49:37.570647  109070 factory.go:686] pod is already present in the activeQ
I0515 22:49:37.570753  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-35: (1.176299ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50390]
I0515 22:49:37.571092  109070 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0515 22:49:37.571367  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-36
I0515 22:49:37.571391  109070 scheduler.go:452] Attempting to schedule pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-36
I0515 22:49:37.571531  109070 factory.go:649] Unable to schedule preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-36: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0515 22:49:37.571586  109070 factory.go:720] Updating pod condition for preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-36 to (PodScheduled==False, Reason=Unschedulable)
I0515 22:49:37.574256  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events: (2.304804ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50406]
I0515 22:49:37.575258  109070 wrap.go:47] PUT /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-36/status: (3.441403ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50404]
I0515 22:49:37.575621  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-36: (2.256865ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50408]
I0515 22:49:37.577994  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-36: (1.149335ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50404]
I0515 22:49:37.578272  109070 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0515 22:49:37.578554  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-37
I0515 22:49:37.578727  109070 scheduler.go:452] Attempting to schedule pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-37
I0515 22:49:37.578983  109070 factory.go:649] Unable to schedule preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-37: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
E0515 22:49:37.579001  109070 factory.go:686] pod is already present in the activeQ
I0515 22:49:37.579246  109070 factory.go:720] Updating pod condition for preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-37 to (PodScheduled==False, Reason=Unschedulable)
I0515 22:49:37.581164  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-37: (1.460133ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50406]
I0515 22:49:37.583019  109070 wrap.go:47] PUT /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-37/status: (3.523645ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50404]
I0515 22:49:37.585931  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events: (3.983072ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50406]
I0515 22:49:37.588682  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-37: (2.130065ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50404]
I0515 22:49:37.589395  109070 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0515 22:49:37.590113  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-38
I0515 22:49:37.590215  109070 scheduler.go:452] Attempting to schedule pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-38
I0515 22:49:37.590430  109070 factory.go:649] Unable to schedule preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-38: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0515 22:49:37.590770  109070 factory.go:720] Updating pod condition for preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-38 to (PodScheduled==False, Reason=Unschedulable)
I0515 22:49:37.595905  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-38: (3.764324ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50410]
I0515 22:49:37.597703  109070 wrap.go:47] PUT /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-38/status: (5.827938ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50404]
I0515 22:49:37.602568  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events: (3.785759ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50410]
I0515 22:49:37.602798  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-38: (4.121472ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50404]
I0515 22:49:37.603315  109070 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0515 22:49:37.603693  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-39
I0515 22:49:37.603716  109070 scheduler.go:452] Attempting to schedule pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-39
I0515 22:49:37.603871  109070 factory.go:649] Unable to schedule preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-39: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0515 22:49:37.603916  109070 factory.go:720] Updating pod condition for preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-39 to (PodScheduled==False, Reason=Unschedulable)
I0515 22:49:37.605626  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-39: (1.297417ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50410]
I0515 22:49:37.608886  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events: (3.199526ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50412]
I0515 22:49:37.610401  109070 wrap.go:47] PUT /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-39/status: (2.383366ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50404]
I0515 22:49:37.611967  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-39: (1.045461ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50412]
I0515 22:49:37.612192  109070 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0515 22:49:37.613409  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-40
I0515 22:49:37.613441  109070 scheduler.go:452] Attempting to schedule pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-40
I0515 22:49:37.613684  109070 factory.go:649] Unable to schedule preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-40: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0515 22:49:37.613773  109070 factory.go:720] Updating pod condition for preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-40 to (PodScheduled==False, Reason=Unschedulable)
I0515 22:49:37.619133  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-40: (3.09368ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50410]
I0515 22:49:37.619615  109070 wrap.go:47] PUT /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-40/status: (2.753753ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50412]
I0515 22:49:37.621355  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events: (1.774153ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50410]
I0515 22:49:37.622795  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-40: (2.792511ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50412]
I0515 22:49:37.623139  109070 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0515 22:49:37.623571  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-41
I0515 22:49:37.623638  109070 scheduler.go:452] Attempting to schedule pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-41
I0515 22:49:37.623935  109070 factory.go:649] Unable to schedule preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-41: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0515 22:49:37.624039  109070 factory.go:720] Updating pod condition for preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-41 to (PodScheduled==False, Reason=Unschedulable)
I0515 22:49:37.629196  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events: (3.954871ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50416]
I0515 22:49:37.629529  109070 wrap.go:47] PUT /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-41/status: (2.322172ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50412]
I0515 22:49:37.630617  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-41: (3.79311ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50410]
I0515 22:49:37.631687  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/preemptor-pod: (3.21148ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50418]
I0515 22:49:37.631813  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-41: (1.377115ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50416]
I0515 22:49:37.631947  109070 preemption_test.go:583] Check unschedulable pods still exists and were never scheduled...
I0515 22:49:37.632699  109070 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0515 22:49:37.633833  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-0: (1.611307ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50410]
I0515 22:49:37.633916  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-42
I0515 22:49:37.633934  109070 scheduler.go:452] Attempting to schedule pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-42
I0515 22:49:37.634168  109070 factory.go:649] Unable to schedule preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-42: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0515 22:49:37.634260  109070 factory.go:720] Updating pod condition for preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-42 to (PodScheduled==False, Reason=Unschedulable)
I0515 22:49:37.637011  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events: (1.570391ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50422]
I0515 22:49:37.639134  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-42: (4.154089ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50420]
I0515 22:49:37.639688  109070 wrap.go:47] PUT /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-42/status: (4.556481ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50414]
I0515 22:49:37.640264  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-1: (5.992906ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50410]
I0515 22:49:37.643005  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-2: (1.38086ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50410]
I0515 22:49:37.643531  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-42: (3.348737ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50420]
I0515 22:49:37.643855  109070 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0515 22:49:37.644325  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-43
I0515 22:49:37.644387  109070 scheduler.go:452] Attempting to schedule pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-43
I0515 22:49:37.644580  109070 factory.go:649] Unable to schedule preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-43: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0515 22:49:37.644739  109070 factory.go:720] Updating pod condition for preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-43 to (PodScheduled==False, Reason=Unschedulable)
I0515 22:49:37.648955  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events: (2.132565ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50424]
I0515 22:49:37.649792  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-3: (6.414286ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50410]
I0515 22:49:37.651009  109070 wrap.go:47] PUT /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-43/status: (2.914786ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50420]
I0515 22:49:37.651322  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-43: (3.763597ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50422]
I0515 22:49:37.652862  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-4: (1.662185ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50410]
I0515 22:49:37.653864  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-43: (1.313792ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50422]
I0515 22:49:37.654314  109070 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0515 22:49:37.654627  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-44
I0515 22:49:37.654688  109070 scheduler.go:452] Attempting to schedule pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-44
I0515 22:49:37.655005  109070 factory.go:649] Unable to schedule preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-44: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0515 22:49:37.655126  109070 factory.go:720] Updating pod condition for preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-44 to (PodScheduled==False, Reason=Unschedulable)
I0515 22:49:37.654749  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-5: (1.4658ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50410]
I0515 22:49:37.661801  109070 wrap.go:47] PUT /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-44/status: (3.001018ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50424]
I0515 22:49:37.662561  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-44: (1.62507ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50422]
I0515 22:49:37.663491  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-6: (2.905957ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50410]
I0515 22:49:37.666758  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events: (7.970489ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50426]
I0515 22:49:37.667235  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-7: (2.642575ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50422]
I0515 22:49:37.667839  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-44: (5.624047ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50424]
I0515 22:49:37.668154  109070 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0515 22:49:37.668309  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-20
I0515 22:49:37.668434  109070 scheduler.go:452] Attempting to schedule pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-20
I0515 22:49:37.668717  109070 factory.go:649] Unable to schedule preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-20: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0515 22:49:37.668772  109070 factory.go:720] Updating pod condition for preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-20 to (PodScheduled==False, Reason=Unschedulable)
I0515 22:49:37.670598  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-20: (1.304721ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50410]
I0515 22:49:37.670906  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-20: (1.95703ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50424]
I0515 22:49:37.671273  109070 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0515 22:49:37.671749  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-45
I0515 22:49:37.671772  109070 scheduler.go:452] Attempting to schedule pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-45
I0515 22:49:37.671855  109070 factory.go:649] Unable to schedule preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-45: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0515 22:49:37.671916  109070 factory.go:720] Updating pod condition for preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-45 to (PodScheduled==False, Reason=Unschedulable)
I0515 22:49:37.672414  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-8: (4.604325ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50426]
I0515 22:49:37.675392  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-9: (2.423161ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50426]
I0515 22:49:37.675729  109070 wrap.go:47] PUT /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-45/status: (3.165496ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50410]
I0515 22:49:37.676028  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-45: (3.692841ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50424]
I0515 22:49:37.676400  109070 wrap.go:47] PATCH /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events/ppod-20.159efce405d8692a: (5.229615ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50428]
E0515 22:49:37.676782  109070 factory.go:686] pod is already present in the activeQ
I0515 22:49:37.677849  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-10: (1.462707ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50426]
I0515 22:49:37.679527  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events: (2.159567ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50424]
I0515 22:49:37.680895  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-45: (3.537971ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50410]
I0515 22:49:37.681854  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-11: (1.541518ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50426]
I0515 22:49:37.681983  109070 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0515 22:49:37.682901  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-46
I0515 22:49:37.682928  109070 scheduler.go:452] Attempting to schedule pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-46
I0515 22:49:37.683735  109070 factory.go:649] Unable to schedule preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-46: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0515 22:49:37.684034  109070 factory.go:720] Updating pod condition for preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-46 to (PodScheduled==False, Reason=Unschedulable)
I0515 22:49:37.685558  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-12: (2.635474ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50410]
I0515 22:49:37.688689  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-46: (2.373533ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50410]
I0515 22:49:37.690048  109070 wrap.go:47] PUT /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-46/status: (4.977225ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50424]
I0515 22:49:37.692420  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-46: (1.785416ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50424]
I0515 22:49:37.692431  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-13: (5.686714ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50432]
I0515 22:49:37.692845  109070 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0515 22:49:37.693379  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-47
I0515 22:49:37.693402  109070 scheduler.go:452] Attempting to schedule pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-47
I0515 22:49:37.693530  109070 factory.go:649] Unable to schedule preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-47: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0515 22:49:37.693586  109070 factory.go:720] Updating pod condition for preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-47 to (PodScheduled==False, Reason=Unschedulable)
I0515 22:49:37.696287  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-47: (1.973022ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50434]
I0515 22:49:37.696711  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-14: (3.322315ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50424]
I0515 22:49:37.699248  109070 wrap.go:47] PUT /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-47/status: (5.011103ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50410]
I0515 22:49:37.703406  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events: (5.583433ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50430]
I0515 22:49:37.703643  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-15: (4.88758ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50424]
I0515 22:49:37.707214  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-47: (5.109201ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50410]
I0515 22:49:37.707310  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events: (2.974995ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50430]
I0515 22:49:37.707689  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-16: (2.895934ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50424]
I0515 22:49:37.708044  109070 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0515 22:49:37.708614  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-48
I0515 22:49:37.708631  109070 scheduler.go:452] Attempting to schedule pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-48
I0515 22:49:37.708737  109070 factory.go:649] Unable to schedule preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-48: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0515 22:49:37.708779  109070 factory.go:720] Updating pod condition for preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-48 to (PodScheduled==False, Reason=Unschedulable)
I0515 22:49:37.714105  109070 wrap.go:47] PUT /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-48/status: (2.829499ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50434]
I0515 22:49:37.714523  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-48: (1.870373ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50436]
I0515 22:49:37.714757  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-17: (2.703168ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50430]
I0515 22:49:37.716092  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-48: (1.450146ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50434]
I0515 22:49:37.716526  109070 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0515 22:49:37.716883  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-49
I0515 22:49:37.716916  109070 scheduler.go:452] Attempting to schedule pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-49
I0515 22:49:37.717030  109070 factory.go:649] Unable to schedule preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-49: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0515 22:49:37.717084  109070 factory.go:720] Updating pod condition for preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-49 to (PodScheduled==False, Reason=Unschedulable)
I0515 22:49:37.720116  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events: (9.381206ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50438]
I0515 22:49:37.721815  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-49: (1.955316ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50436]
E0515 22:49:37.722378  109070 factory.go:686] pod is already present in the activeQ
I0515 22:49:37.722996  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events: (2.228556ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50438]
I0515 22:49:37.723546  109070 wrap.go:47] PUT /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-49/status: (6.245049ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50434]
I0515 22:49:37.725670  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-49: (1.608255ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50438]
I0515 22:49:37.727523  109070 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0515 22:49:37.727654  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-18: (12.06318ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50430]
I0515 22:49:37.727733  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-23
I0515 22:49:37.727761  109070 scheduler.go:452] Attempting to schedule pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-23
I0515 22:49:37.727886  109070 factory.go:649] Unable to schedule preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-23: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0515 22:49:37.727944  109070 factory.go:720] Updating pod condition for preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-23 to (PodScheduled==False, Reason=Unschedulable)
I0515 22:49:37.731755  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-19: (3.149249ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50436]
I0515 22:49:37.731844  109070 wrap.go:47] PATCH /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events/ppod-23.159efce407854f6b: (2.961528ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50440]
I0515 22:49:37.732916  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-23: (4.74233ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50438]
I0515 22:49:37.733218  109070 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0515 22:49:37.733411  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-26
I0515 22:49:37.733428  109070 scheduler.go:452] Attempting to schedule pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-26
I0515 22:49:37.733571  109070 factory.go:649] Unable to schedule preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-26: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0515 22:49:37.733628  109070 factory.go:720] Updating pod condition for preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-26 to (PodScheduled==False, Reason=Unschedulable)
I0515 22:49:37.736098  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-20: (2.971998ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50436]
I0515 22:49:37.736254  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-23: (3.279048ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50442]
I0515 22:49:37.736919  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-26: (2.8899ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50438]
I0515 22:49:37.737262  109070 wrap.go:47] PATCH /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events/ppod-26.159efce4096b3628: (2.710918ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50444]
I0515 22:49:37.736601  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-26: (2.51893ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50440]
I0515 22:49:37.737966  109070 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0515 22:49:37.738240  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-27
I0515 22:49:37.738263  109070 scheduler.go:452] Attempting to schedule pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-27
I0515 22:49:37.738364  109070 factory.go:649] Unable to schedule preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-27: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0515 22:49:37.738412  109070 factory.go:720] Updating pod condition for preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-27 to (PodScheduled==False, Reason=Unschedulable)
I0515 22:49:37.739741  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-27: (1.111082ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50444]
I0515 22:49:37.739749  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-21: (2.182503ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50436]
I0515 22:49:37.740148  109070 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0515 22:49:37.741784  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-22: (1.525283ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50444]
I0515 22:49:37.741960  109070 wrap.go:47] PATCH /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events/ppod-27.159efce40a7cfac8: (2.292397ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50446]
I0515 22:49:37.742804  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-29
I0515 22:49:37.742872  109070 scheduler.go:452] Attempting to schedule pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-29
I0515 22:49:37.743066  109070 factory.go:649] Unable to schedule preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-29: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0515 22:49:37.743197  109070 factory.go:720] Updating pod condition for preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-29 to (PodScheduled==False, Reason=Unschedulable)
I0515 22:49:37.745184  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-23: (2.788539ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50444]
I0515 22:49:37.748748  109070 wrap.go:47] PATCH /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events/ppod-29.159efce40ca93edf: (3.497429ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50450]
I0515 22:49:37.749469  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-29: (4.608623ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50448]
I0515 22:49:37.749752  109070 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0515 22:49:37.749815  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-29: (6.388019ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50436]
I0515 22:49:37.750746  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-35
I0515 22:49:37.751336  109070 scheduler.go:452] Attempting to schedule pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-35
I0515 22:49:37.751673  109070 factory.go:649] Unable to schedule preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-35: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0515 22:49:37.751888  109070 factory.go:720] Updating pod condition for preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-35 to (PodScheduled==False, Reason=Unschedulable)
I0515 22:49:37.752008  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-24: (6.150955ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50444]
I0515 22:49:37.751201  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-27: (10.830146ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50442]
I0515 22:49:37.765184  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-25: (2.502244ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50444]
I0515 22:49:37.765617  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-35: (12.932148ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50450]
I0515 22:49:37.765953  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-35: (12.818901ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50448]
I0515 22:49:37.766299  109070 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0515 22:49:37.766595  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-36
I0515 22:49:37.766670  109070 scheduler.go:452] Attempting to schedule pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-36
I0515 22:49:37.766771  109070 factory.go:649] Unable to schedule preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-36: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0515 22:49:37.766845  109070 factory.go:720] Updating pod condition for preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-36 to (PodScheduled==False, Reason=Unschedulable)
I0515 22:49:37.768567  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-26: (1.691184ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50450]
I0515 22:49:37.769019  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-36: (1.962471ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50452]
I0515 22:49:37.769097  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-36: (1.792626ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50454]
I0515 22:49:37.769922  109070 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0515 22:49:37.770045  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-45
I0515 22:49:37.770059  109070 scheduler.go:452] Attempting to schedule pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-45
I0515 22:49:37.770170  109070 factory.go:649] Unable to schedule preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-45: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0515 22:49:37.770217  109070 factory.go:720] Updating pod condition for preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-45 to (PodScheduled==False, Reason=Unschedulable)
I0515 22:49:37.771168  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-27: (1.524858ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50452]
I0515 22:49:37.771748  109070 wrap.go:47] PATCH /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events/ppod-35.159efce40fe6e49f: (17.995247ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50442]
I0515 22:49:37.772288  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-45: (1.212485ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50456]
I0515 22:49:37.773375  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-45: (2.504327ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50450]
I0515 22:49:37.773868  109070 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0515 22:49:37.774358  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-49
I0515 22:49:37.774387  109070 scheduler.go:452] Attempting to schedule pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-49
I0515 22:49:37.774547  109070 factory.go:649] Unable to schedule preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-49: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0515 22:49:37.774600  109070 factory.go:720] Updating pod condition for preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-49 to (PodScheduled==False, Reason=Unschedulable)
I0515 22:49:37.775173  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-28: (1.56087ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50442]
I0515 22:49:37.778170  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-29: (1.807961ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50442]
I0515 22:49:37.778198  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-49: (3.085001ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50458]
I0515 22:49:37.778590  109070 wrap.go:47] PATCH /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events/ppod-36.159efce410374bcf: (4.824254ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50456]
I0515 22:49:37.780598  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-49: (5.682082ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50452]
I0515 22:49:37.780969  109070 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0515 22:49:37.782194  109070 wrap.go:47] PATCH /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events/ppod-45.159efce41631f3ca: (2.060145ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50442]
I0515 22:49:37.783153  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-30: (4.270137ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50458]
I0515 22:49:37.784970  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-31: (1.345783ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50458]
I0515 22:49:37.785994  109070 wrap.go:47] PATCH /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events/ppod-49.159efce418e365ca: (2.570054ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50442]
I0515 22:49:37.787398  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-32: (1.672766ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50458]
I0515 22:49:37.789999  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-33: (2.012018ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50442]
I0515 22:49:37.791705  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-34: (1.274676ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50442]
I0515 22:49:37.793549  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-35: (1.371879ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50442]
I0515 22:49:37.795322  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-36: (1.196188ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50442]
I0515 22:49:37.797824  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-37: (1.261992ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50442]
I0515 22:49:37.799472  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-38: (1.093424ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50442]
I0515 22:49:37.802224  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-39: (2.078178ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50442]
I0515 22:49:37.817974  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-40: (1.881827ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50442]
I0515 22:49:37.819798  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-41: (1.315498ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50442]
I0515 22:49:37.821529  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-42: (1.288429ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50442]
I0515 22:49:37.824439  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-43: (1.781374ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50442]
I0515 22:49:37.826163  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-44: (1.173382ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50442]
I0515 22:49:37.828287  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-45: (1.40424ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50442]
I0515 22:49:37.830589  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-46: (1.485209ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50442]
I0515 22:49:37.832725  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-47: (1.479585ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50442]
I0515 22:49:37.834691  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-48: (1.366361ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50442]
I0515 22:49:37.837807  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-49: (2.507008ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50442]
I0515 22:49:37.838155  109070 preemption_test.go:598] Cleaning up all pods...
I0515 22:49:37.842759  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-0
I0515 22:49:37.842817  109070 scheduler.go:448] Skip schedule deleting pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-0
I0515 22:49:37.844135  109070 wrap.go:47] DELETE /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-0: (5.707794ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50442]
I0515 22:49:37.848525  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-1
I0515 22:49:37.848569  109070 scheduler.go:448] Skip schedule deleting pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-1
I0515 22:49:37.851431  109070 wrap.go:47] DELETE /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-1: (6.763859ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50442]
I0515 22:49:37.853084  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events: (9.896952ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50452]
I0515 22:49:37.863352  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events: (9.629309ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50452]
I0515 22:49:37.864263  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-2
I0515 22:49:37.864375  109070 scheduler.go:448] Skip schedule deleting pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-2
I0515 22:49:37.864863  109070 wrap.go:47] DELETE /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-2: (11.768886ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50442]
I0515 22:49:37.867945  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events: (1.705079ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50452]
I0515 22:49:37.869308  109070 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0515 22:49:37.869420  109070 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0515 22:49:37.870541  109070 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0515 22:49:37.872695  109070 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0515 22:49:37.874815  109070 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0515 22:49:37.877950  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-3
I0515 22:49:37.878009  109070 scheduler.go:448] Skip schedule deleting pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-3
I0515 22:49:37.879414  109070 wrap.go:47] DELETE /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-3: (13.786895ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50442]
I0515 22:49:37.880617  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events: (2.242337ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50452]
I0515 22:49:37.887155  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-4
I0515 22:49:37.887211  109070 scheduler.go:448] Skip schedule deleting pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-4
I0515 22:49:37.888815  109070 wrap.go:47] DELETE /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-4: (8.54333ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50442]
I0515 22:49:37.891318  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events: (3.695661ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50452]
I0515 22:49:37.894220  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-5
I0515 22:49:37.894262  109070 scheduler.go:448] Skip schedule deleting pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-5
I0515 22:49:37.895753  109070 wrap.go:47] DELETE /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-5: (6.380519ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50442]
I0515 22:49:37.897024  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events: (2.253239ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50452]
I0515 22:49:37.913143  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-6
I0515 22:49:37.913196  109070 scheduler.go:448] Skip schedule deleting pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-6
I0515 22:49:37.914670  109070 wrap.go:47] DELETE /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-6: (18.599087ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50442]
I0515 22:49:37.917364  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events: (3.6633ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50452]
I0515 22:49:37.919233  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-7
I0515 22:49:37.919278  109070 scheduler.go:448] Skip schedule deleting pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-7
I0515 22:49:37.922015  109070 wrap.go:47] DELETE /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-7: (6.706782ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50442]
I0515 22:49:37.923432  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events: (3.811469ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50452]
I0515 22:49:37.926664  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-8
I0515 22:49:37.926702  109070 scheduler.go:448] Skip schedule deleting pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-8
I0515 22:49:37.929257  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events: (2.324569ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50452]
I0515 22:49:37.931243  109070 wrap.go:47] DELETE /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-8: (7.643747ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50442]
I0515 22:49:37.936730  109070 wrap.go:47] DELETE /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-9: (4.703085ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50442]
I0515 22:49:37.938470  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-9
I0515 22:49:37.938530  109070 scheduler.go:448] Skip schedule deleting pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-9
I0515 22:49:37.941753  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events: (1.896089ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50452]
I0515 22:49:37.945536  109070 wrap.go:47] DELETE /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-10: (8.330569ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50442]
I0515 22:49:37.946162  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-10
I0515 22:49:37.946266  109070 scheduler.go:448] Skip schedule deleting pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-10
I0515 22:49:37.948845  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events: (1.782867ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50452]
I0515 22:49:37.950794  109070 wrap.go:47] DELETE /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-11: (4.726128ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50442]
I0515 22:49:37.952395  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-11
I0515 22:49:37.952479  109070 scheduler.go:448] Skip schedule deleting pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-11
I0515 22:49:37.956488  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-12
I0515 22:49:37.956589  109070 scheduler.go:448] Skip schedule deleting pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-12
I0515 22:49:37.956540  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events: (3.606527ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50452]
I0515 22:49:37.959859  109070 wrap.go:47] DELETE /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-12: (8.462008ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50442]
I0515 22:49:37.960810  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events: (3.023244ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50452]
I0515 22:49:37.964456  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-13
I0515 22:49:37.964552  109070 scheduler.go:448] Skip schedule deleting pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-13
I0515 22:49:37.966928  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events: (1.78799ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50452]
I0515 22:49:37.967249  109070 wrap.go:47] DELETE /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-13: (6.888764ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50442]
I0515 22:49:37.971196  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-14
I0515 22:49:37.971294  109070 scheduler.go:448] Skip schedule deleting pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-14
I0515 22:49:37.973053  109070 wrap.go:47] DELETE /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-14: (5.193004ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50442]
I0515 22:49:37.973103  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events: (1.445948ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50452]
I0515 22:49:37.977101  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-15
I0515 22:49:37.977198  109070 scheduler.go:448] Skip schedule deleting pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-15
I0515 22:49:37.978723  109070 wrap.go:47] DELETE /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-15: (5.230726ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50442]
I0515 22:49:37.980270  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events: (2.394174ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50452]
I0515 22:49:37.983534  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-16
I0515 22:49:37.983623  109070 scheduler.go:448] Skip schedule deleting pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-16
I0515 22:49:37.984596  109070 wrap.go:47] DELETE /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-16: (4.325419ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50442]
I0515 22:49:37.986108  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events: (2.015955ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50452]
I0515 22:49:37.989166  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-17
I0515 22:49:37.989216  109070 scheduler.go:448] Skip schedule deleting pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-17
I0515 22:49:37.991049  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events: (1.556531ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50452]
I0515 22:49:37.991823  109070 wrap.go:47] DELETE /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-17: (6.178424ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50442]
I0515 22:49:37.998792  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-18
I0515 22:49:37.998885  109070 scheduler.go:448] Skip schedule deleting pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-18
I0515 22:49:38.000787  109070 wrap.go:47] DELETE /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-18: (8.52248ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50442]
I0515 22:49:38.005099  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events: (4.733627ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50452]
I0515 22:49:38.006038  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-19
I0515 22:49:38.006112  109070 scheduler.go:448] Skip schedule deleting pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-19
I0515 22:49:38.006674  109070 wrap.go:47] DELETE /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-19: (5.326083ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50442]
I0515 22:49:38.011155  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-20
I0515 22:49:38.011239  109070 scheduler.go:448] Skip schedule deleting pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-20
I0515 22:49:38.012116  109070 wrap.go:47] DELETE /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-20: (5.125912ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50442]
I0515 22:49:38.014683  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events: (7.983618ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50452]
I0515 22:49:38.017981  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events: (2.42514ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50452]
I0515 22:49:38.019460  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-21
I0515 22:49:38.019593  109070 scheduler.go:448] Skip schedule deleting pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-21
I0515 22:49:38.021672  109070 wrap.go:47] DELETE /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-21: (7.70499ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50442]
I0515 22:49:38.025677  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-22
I0515 22:49:38.025723  109070 scheduler.go:448] Skip schedule deleting pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-22
I0515 22:49:38.027426  109070 wrap.go:47] DELETE /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-22: (5.09557ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50442]
I0515 22:49:38.029418  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events: (2.612435ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50452]
I0515 22:49:38.032016  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events: (1.993639ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50452]
I0515 22:49:38.033027  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-23
I0515 22:49:38.033070  109070 scheduler.go:448] Skip schedule deleting pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-23
I0515 22:49:38.035872  109070 wrap.go:47] DELETE /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-23: (7.140047ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50442]
I0515 22:49:38.037230  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events: (3.383464ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50452]
I0515 22:49:38.041627  109070 wrap.go:47] DELETE /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-24: (4.784243ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50442]
I0515 22:49:38.042088  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-24
I0515 22:49:38.042143  109070 scheduler.go:448] Skip schedule deleting pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-24
I0515 22:49:38.044606  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events: (1.980251ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50452]
I0515 22:49:38.104559  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-25
I0515 22:49:38.104689  109070 scheduler.go:448] Skip schedule deleting pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-25
I0515 22:49:38.108169  109070 wrap.go:47] DELETE /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-25: (65.27771ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50442]
I0515 22:49:38.124905  109070 wrap.go:47] DELETE /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-26: (16.132183ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50442]
I0515 22:49:38.137961  109070 wrap.go:47] DELETE /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-27: (12.566688ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50442]
I0515 22:49:38.156699  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events: (5.606742ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50452]
I0515 22:49:38.159681  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-27
I0515 22:49:38.170945  109070 scheduler.go:448] Skip schedule deleting pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-27
I0515 22:49:38.160119  109070 wrap.go:47] DELETE /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-28: (21.585127ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50442]
I0515 22:49:38.176822  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-29
I0515 22:49:38.176868  109070 scheduler.go:448] Skip schedule deleting pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-29
I0515 22:49:38.179514  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events: (7.510264ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50452]
I0515 22:49:38.182800  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events: (2.52712ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50452]
I0515 22:49:38.183825  109070 wrap.go:47] DELETE /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-29: (12.412582ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50442]
I0515 22:49:38.187800  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-30
I0515 22:49:38.187850  109070 scheduler.go:448] Skip schedule deleting pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-30
I0515 22:49:38.190191  109070 wrap.go:47] DELETE /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-30: (5.90201ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50442]
I0515 22:49:38.195851  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events: (7.570543ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50452]
I0515 22:49:38.198101  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-31
I0515 22:49:38.198143  109070 scheduler.go:448] Skip schedule deleting pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-31
I0515 22:49:38.201095  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events: (2.697772ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50452]
I0515 22:49:38.203017  109070 wrap.go:47] DELETE /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-31: (12.424443ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50442]
I0515 22:49:38.208462  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-32
I0515 22:49:38.208544  109070 scheduler.go:448] Skip schedule deleting pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-32
I0515 22:49:38.212578  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events: (3.251837ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50452]
I0515 22:49:38.213994  109070 wrap.go:47] DELETE /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-32: (10.416568ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50442]
I0515 22:49:38.226166  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-33
I0515 22:49:38.226231  109070 scheduler.go:448] Skip schedule deleting pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-33
I0515 22:49:38.230975  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events: (4.385426ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50452]
I0515 22:49:38.230975  109070 wrap.go:47] DELETE /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-33: (16.257604ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50442]
I0515 22:49:38.235026  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-34
I0515 22:49:38.235142  109070 scheduler.go:448] Skip schedule deleting pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-34
I0515 22:49:38.237381  109070 wrap.go:47] DELETE /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-34: (5.692443ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50442]
I0515 22:49:38.240125  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events: (4.348509ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50452]
I0515 22:49:38.249894  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-35
I0515 22:49:38.249961  109070 scheduler.go:448] Skip schedule deleting pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-35
I0515 22:49:38.253546  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events: (3.021281ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50452]
I0515 22:49:38.254716  109070 wrap.go:47] DELETE /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-35: (16.815496ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50442]
I0515 22:49:38.264056  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-36
I0515 22:49:38.264120  109070 scheduler.go:448] Skip schedule deleting pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-36
I0515 22:49:38.266713  109070 wrap.go:47] DELETE /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-36: (11.548114ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50442]
I0515 22:49:38.273907  109070 wrap.go:47] DELETE /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-37: (6.558701ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50442]
I0515 22:49:38.276620  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events: (1.910276ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50452]
I0515 22:49:38.279841  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-38
I0515 22:49:38.279895  109070 scheduler.go:448] Skip schedule deleting pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-38
I0515 22:49:38.282741  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events: (2.360713ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50452]
I0515 22:49:38.284332  109070 wrap.go:47] DELETE /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-38: (9.841274ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50442]
I0515 22:49:38.289571  109070 wrap.go:47] DELETE /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-39: (4.802004ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50442]
I0515 22:49:38.291335  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-39
I0515 22:49:38.291381  109070 scheduler.go:448] Skip schedule deleting pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-39
I0515 22:49:38.293707  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events: (1.960237ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50452]
I0515 22:49:38.295476  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-40
I0515 22:49:38.295532  109070 scheduler.go:448] Skip schedule deleting pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-40
I0515 22:49:38.298830  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events: (3.026095ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50452]
I0515 22:49:38.301203  109070 wrap.go:47] DELETE /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-40: (11.264802ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50442]
I0515 22:49:38.305941  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-41
I0515 22:49:38.305986  109070 scheduler.go:448] Skip schedule deleting pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-41
I0515 22:49:38.308660  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events: (2.364387ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50452]
I0515 22:49:38.310862  109070 wrap.go:47] DELETE /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-41: (8.896018ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50442]
I0515 22:49:38.314613  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-42
I0515 22:49:38.314732  109070 scheduler.go:448] Skip schedule deleting pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-42
I0515 22:49:38.317481  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events: (2.297738ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50452]
I0515 22:49:38.318011  109070 wrap.go:47] DELETE /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-42: (6.503042ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50442]
I0515 22:49:38.321951  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-43
I0515 22:49:38.321987  109070 scheduler.go:448] Skip schedule deleting pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-43
I0515 22:49:38.324275  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events: (2.033579ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50452]
I0515 22:49:38.327875  109070 wrap.go:47] DELETE /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-43: (9.490745ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50442]
I0515 22:49:38.333161  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-44
I0515 22:49:38.333318  109070 scheduler.go:448] Skip schedule deleting pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-44
I0515 22:49:38.336637  109070 wrap.go:47] DELETE /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-44: (8.237317ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50442]
I0515 22:49:38.341858  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events: (3.751002ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50452]
I0515 22:49:38.346277  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-45
I0515 22:49:38.346398  109070 scheduler.go:448] Skip schedule deleting pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-45
I0515 22:49:38.350050  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events: (3.118748ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50452]
I0515 22:49:38.351738  109070 wrap.go:47] DELETE /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-45: (12.903646ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50442]
I0515 22:49:38.357058  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-46
I0515 22:49:38.357106  109070 scheduler.go:448] Skip schedule deleting pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-46
I0515 22:49:38.360651  109070 wrap.go:47] DELETE /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-46: (8.199388ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50442]
I0515 22:49:38.361349  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events: (3.959204ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50452]
I0515 22:49:38.366161  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-47
I0515 22:49:38.366215  109070 scheduler.go:448] Skip schedule deleting pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-47
I0515 22:49:38.367485  109070 wrap.go:47] DELETE /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-47: (6.081271ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50442]
I0515 22:49:38.369935  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events: (3.368604ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50452]
I0515 22:49:38.375145  109070 wrap.go:47] DELETE /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-48: (7.276068ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50442]
I0515 22:49:38.380428  109070 wrap.go:47] DELETE /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-49: (4.632534ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50442]
I0515 22:49:38.388711  109070 wrap.go:47] DELETE /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/rpod-0: (7.814643ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50442]
I0515 22:49:38.393660  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-48
I0515 22:49:38.393801  109070 scheduler.go:448] Skip schedule deleting pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-48
I0515 22:49:38.394084  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-49
I0515 22:49:38.394119  109070 scheduler.go:448] Skip schedule deleting pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-49
I0515 22:49:38.396925  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events: (2.67918ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50452]
I0515 22:49:38.397663  109070 wrap.go:47] DELETE /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/rpod-1: (8.335861ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50442]
I0515 22:49:38.399800  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events: (2.007232ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50452]
I0515 22:49:38.412659  109070 wrap.go:47] DELETE /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/preemptor-pod: (14.539332ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50442]
I0515 22:49:38.416390  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-0: (2.043981ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50442]
I0515 22:49:38.419630  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-1: (1.4658ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50442]
I0515 22:49:38.422697  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-2: (1.363455ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50442]
I0515 22:49:38.425649  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-3: (1.103535ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50442]
I0515 22:49:38.428791  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-4: (1.116639ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50442]
I0515 22:49:38.431438  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-5: (1.058951ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50442]
I0515 22:49:38.434791  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-6: (1.513765ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50442]
I0515 22:49:38.437565  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-7: (1.10654ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50442]
I0515 22:49:38.440205  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-8: (1.083608ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50442]
I0515 22:49:38.444539  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-9: (1.399848ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50442]
I0515 22:49:38.454012  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-10: (1.478204ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50442]
I0515 22:49:38.457090  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-11: (1.437199ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50442]
I0515 22:49:38.460224  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-12: (1.404804ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50442]
I0515 22:49:38.463914  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-13: (1.366116ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50442]
I0515 22:49:38.466914  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-14: (1.356426ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50442]
I0515 22:49:38.470264  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-15: (1.834779ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50442]
I0515 22:49:38.473379  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-16: (1.405166ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50442]
I0515 22:49:38.476372  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-17: (1.336097ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50442]
I0515 22:49:38.479570  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-18: (1.312696ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50442]
I0515 22:49:38.482381  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-19: (1.162598ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50442]
I0515 22:49:38.485621  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-20: (1.484632ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50442]
I0515 22:49:38.488620  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-21: (1.365456ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50442]
I0515 22:49:38.491595  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-22: (1.308631ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50442]
I0515 22:49:38.494431  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-23: (1.222096ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50442]
I0515 22:49:38.498067  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-24: (1.887318ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50442]
I0515 22:49:38.500857  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-25: (1.159593ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50442]
I0515 22:49:38.504012  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-26: (1.459819ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50442]
I0515 22:49:38.508219  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-27: (1.725924ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50442]
I0515 22:49:38.511174  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-28: (1.326254ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50442]
I0515 22:49:38.513993  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-29: (1.174825ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50442]
I0515 22:49:38.516867  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-30: (1.316331ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50442]
I0515 22:49:38.520672  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-31: (2.123201ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50442]
I0515 22:49:38.523890  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-32: (1.441277ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50442]
I0515 22:49:38.526759  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-33: (1.092381ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50442]
I0515 22:49:38.529572  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-34: (1.16128ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50442]
I0515 22:49:38.532249  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-35: (1.065635ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50442]
I0515 22:49:38.535138  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-36: (1.052683ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50442]
I0515 22:49:38.538184  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-37: (1.142479ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50442]
I0515 22:49:38.541053  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-38: (1.20146ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50442]
I0515 22:49:38.543873  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-39: (1.181261ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50442]
I0515 22:49:38.547586  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-40: (1.23347ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50442]
I0515 22:49:38.550346  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-41: (1.038783ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50442]
I0515 22:49:38.553196  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-42: (1.26143ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50442]
I0515 22:49:38.556048  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-43: (1.098313ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50442]
I0515 22:49:38.560133  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-44: (2.340974ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50442]
I0515 22:49:38.564333  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-45: (2.265737ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50442]
I0515 22:49:38.567572  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-46: (1.304259ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50442]
I0515 22:49:38.571325  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-47: (1.718517ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50442]
I0515 22:49:38.574182  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-48: (1.081683ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50442]
I0515 22:49:38.577307  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-49: (1.34574ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50442]
I0515 22:49:38.580428  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/rpod-0: (1.165753ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50442]
I0515 22:49:38.583976  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/rpod-1: (1.708253ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50442]
I0515 22:49:38.587311  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/preemptor-pod: (1.334398ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50442]
I0515 22:49:38.592862  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/rpod-0
I0515 22:49:38.592957  109070 scheduler.go:452] Attempting to schedule pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/rpod-0
I0515 22:49:38.593164  109070 scheduler_binder.go:256] AssumePodVolumes for pod "preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/rpod-0", node "node1"
I0515 22:49:38.593233  109070 scheduler_binder.go:266] AssumePodVolumes for pod "preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/rpod-0", node "node1": all PVCs bound and nothing to do
I0515 22:49:38.593337  109070 factory.go:711] Attempting to bind rpod-0 to node1
I0515 22:49:38.593256  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods: (4.138088ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50442]
I0515 22:49:38.596692  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/rpod-0/binding: (2.573178ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50452]
I0515 22:49:38.596921  109070 scheduler.go:589] pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/rpod-0 is bound successfully on node node1, 1 nodes evaluated, 1 nodes were found feasible
I0515 22:49:38.603746  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events: (6.565804ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50452]
I0515 22:49:38.605262  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods: (10.103094ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50442]
I0515 22:49:38.606139  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/rpod-1
I0515 22:49:38.606284  109070 scheduler.go:452] Attempting to schedule pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/rpod-1
I0515 22:49:38.606548  109070 scheduler_binder.go:256] AssumePodVolumes for pod "preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/rpod-1", node "node1"
I0515 22:49:38.606620  109070 scheduler_binder.go:266] AssumePodVolumes for pod "preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/rpod-1", node "node1": all PVCs bound and nothing to do
I0515 22:49:38.606821  109070 factory.go:711] Attempting to bind rpod-1 to node1
I0515 22:49:38.610196  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/rpod-1/binding: (2.951917ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50442]
I0515 22:49:38.610544  109070 scheduler.go:589] pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/rpod-1 is bound successfully on node node1, 1 nodes evaluated, 1 nodes were found feasible
I0515 22:49:38.612929  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events: (2.039142ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50442]
I0515 22:49:38.709119  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/rpod-0: (2.466241ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50442]
I0515 22:49:38.813704  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/rpod-1: (3.676847ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50442]
I0515 22:49:38.814408  109070 preemption_test.go:561] Creating the preemptor pod...
I0515 22:49:38.817683  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods: (2.823842ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50442]
I0515 22:49:38.818157  109070 preemption_test.go:567] Creating additional pods...
I0515 22:49:38.818225  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/preemptor-pod
I0515 22:49:38.818306  109070 scheduler.go:452] Attempting to schedule pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/preemptor-pod
I0515 22:49:38.818570  109070 factory.go:649] Unable to schedule preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0515 22:49:38.819723  109070 factory.go:720] Updating pod condition for preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0515 22:49:38.821090  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods: (2.438864ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50442]
I0515 22:49:38.823400  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events: (2.854672ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50452]
I0515 22:49:38.825441  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/preemptor-pod: (2.69479ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50442]
I0515 22:49:38.826057  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods: (3.007151ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50472]
I0515 22:49:38.826320  109070 wrap.go:47] PUT /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/preemptor-pod/status: (2.414208ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50452]
I0515 22:49:38.828211  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/preemptor-pod: (1.392539ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50452]
I0515 22:49:38.828485  109070 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
E0515 22:49:38.828652  109070 utils.go:79] pod.Status.StartTime is nil for pod rpod-1. Should not reach here.
E0515 22:49:38.828661  109070 utils.go:79] pod.Status.StartTime is nil for pod rpod-0. Should not reach here.
I0515 22:49:38.828666  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods: (2.058336ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50474]
I0515 22:49:38.830901  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods: (1.681182ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50470]
I0515 22:49:38.830912  109070 wrap.go:47] PUT /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/preemptor-pod/status: (1.989706ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50452]
I0515 22:49:38.838060  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods: (6.551334ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50452]
I0515 22:49:38.839473  109070 wrap.go:47] DELETE /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/rpod-0: (8.049801ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50470]
I0515 22:49:38.842801  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-0
I0515 22:49:38.842863  109070 scheduler.go:452] Attempting to schedule pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-0
I0515 22:49:38.843125  109070 factory.go:649] Unable to schedule preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-0: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0515 22:49:38.843242  109070 factory.go:720] Updating pod condition for preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-0 to (PodScheduled==False, Reason=Unschedulable)
I0515 22:49:38.846810  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods: (4.184879ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50452]
I0515 22:49:38.846966  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-0: (2.649726ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50478]
I0515 22:49:38.851131  109070 wrap.go:47] PUT /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-0/status: (3.490856ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50452]
I0515 22:49:38.852657  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods: (4.353987ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50478]
I0515 22:49:38.853693  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events: (10.212065ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50470]
I0515 22:49:38.856226  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods: (2.897123ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50478]
I0515 22:49:38.856335  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-0: (3.346144ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50452]
I0515 22:49:38.856731  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events: (2.254626ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50470]
I0515 22:49:38.857048  109070 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0515 22:49:38.857277  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-1
I0515 22:49:38.857579  109070 scheduler.go:452] Attempting to schedule pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-1
I0515 22:49:38.857754  109070 factory.go:649] Unable to schedule preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-1: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0515 22:49:38.857879  109070 factory.go:720] Updating pod condition for preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-1 to (PodScheduled==False, Reason=Unschedulable)
I0515 22:49:38.859253  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods: (2.289328ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50476]
I0515 22:49:38.860177  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-1: (1.545011ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50478]
I0515 22:49:38.861230  109070 wrap.go:47] PUT /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-1/status: (2.575978ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50470]
I0515 22:49:38.863959  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events: (4.282838ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50480]
I0515 22:49:38.864345  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-1: (2.700778ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50470]
I0515 22:49:38.864764  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods: (3.494922ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50478]
I0515 22:49:38.864835  109070 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0515 22:49:38.865264  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-2
I0515 22:49:38.865366  109070 scheduler.go:452] Attempting to schedule pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-2
I0515 22:49:38.865581  109070 factory.go:649] Unable to schedule preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-2: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0515 22:49:38.865687  109070 factory.go:720] Updating pod condition for preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-2 to (PodScheduled==False, Reason=Unschedulable)
I0515 22:49:38.871017  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events: (3.768762ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50484]
I0515 22:49:38.871687  109070 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0515 22:49:38.871801  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-2: (4.368126ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50482]
I0515 22:49:38.871958  109070 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0515 22:49:38.871997  109070 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0515 22:49:38.873052  109070 wrap.go:47] PUT /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-2/status: (6.848477ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50476]
I0515 22:49:38.873090  109070 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0515 22:49:38.873575  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods: (7.920931ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50480]
I0515 22:49:38.875033  109070 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0515 22:49:38.875281  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-2: (1.516732ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50482]
I0515 22:49:38.875790  109070 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0515 22:49:38.876099  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-3
I0515 22:49:38.876202  109070 scheduler.go:452] Attempting to schedule pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-3
I0515 22:49:38.876693  109070 factory.go:649] Unable to schedule preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-3: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0515 22:49:38.877296  109070 factory.go:720] Updating pod condition for preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-3 to (PodScheduled==False, Reason=Unschedulable)
I0515 22:49:38.877132  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods: (2.76862ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50480]
I0515 22:49:38.882335  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events: (4.167557ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50484]
I0515 22:49:38.882382  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods: (3.521278ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50480]
I0515 22:49:38.886380  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods: (2.820172ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50480]
I0515 22:49:38.891903  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-3: (14.015592ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50482]
I0515 22:49:38.898022  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods: (5.868089ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50480]
I0515 22:49:38.898464  109070 wrap.go:47] PUT /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-3/status: (4.608753ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50482]
I0515 22:49:38.900851  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-3: (1.855701ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50482]
I0515 22:49:38.902187  109070 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0515 22:49:38.902580  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-4
I0515 22:49:38.902605  109070 scheduler.go:452] Attempting to schedule pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-4
I0515 22:49:38.902719  109070 factory.go:649] Unable to schedule preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-4: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0515 22:49:38.902751  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods: (3.320871ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50484]
I0515 22:49:38.902775  109070 factory.go:720] Updating pod condition for preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-4 to (PodScheduled==False, Reason=Unschedulable)
I0515 22:49:38.907436  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-4: (2.889375ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50486]
I0515 22:49:38.910394  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events: (4.075595ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50484]
I0515 22:49:38.911927  109070 wrap.go:47] PUT /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-4/status: (7.30397ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50482]
I0515 22:49:38.912216  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods: (4.97612ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50490]
I0515 22:49:38.915058  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-4: (1.156985ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50488]
I0515 22:49:38.915487  109070 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0515 22:49:38.915644  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods: (1.865222ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50486]
I0515 22:49:38.915843  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-0
I0515 22:49:38.915868  109070 scheduler.go:452] Attempting to schedule pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-0
I0515 22:49:38.915994  109070 factory.go:649] Unable to schedule preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-0: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0515 22:49:38.916055  109070 factory.go:720] Updating pod condition for preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-0 to (PodScheduled==False, Reason=Unschedulable)
I0515 22:49:38.919855  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-0: (2.157572ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50486]
I0515 22:49:38.920122  109070 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0515 22:49:38.920327  109070 wrap.go:47] PATCH /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events/ppod-0.159efce45c0256c3: (2.630202ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50494]
I0515 22:49:38.920360  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-1
I0515 22:49:38.920385  109070 scheduler.go:452] Attempting to schedule pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-1
I0515 22:49:38.920692  109070 factory.go:649] Unable to schedule preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-1: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0515 22:49:38.920753  109070 factory.go:720] Updating pod condition for preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-1 to (PodScheduled==False, Reason=Unschedulable)
I0515 22:49:38.922604  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-1: (1.663248ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50494]
I0515 22:49:38.923032  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-0: (4.362897ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50492]
I0515 22:49:38.923377  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods: (5.80548ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50488]
I0515 22:49:38.924117  109070 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0515 22:49:38.924123  109070 wrap.go:47] PATCH /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events/ppod-1.159efce45ce27d43: (2.628048ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50486]
I0515 22:49:38.924670  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-2
I0515 22:49:38.924734  109070 scheduler.go:452] Attempting to schedule pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-2
I0515 22:49:38.924863  109070 factory.go:649] Unable to schedule preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-2: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0515 22:49:38.924937  109070 factory.go:720] Updating pod condition for preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-2 to (PodScheduled==False, Reason=Unschedulable)
I0515 22:49:38.927569  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-2: (2.334478ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50492]
I0515 22:49:38.927850  109070 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0515 22:49:38.928521  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-2: (1.957149ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50496]
I0515 22:49:38.928823  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods: (4.164492ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50494]
I0515 22:49:38.929388  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-5
I0515 22:49:38.929975  109070 scheduler.go:452] Attempting to schedule pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-5
I0515 22:49:38.930102  109070 factory.go:649] Unable to schedule preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-5: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0515 22:49:38.930156  109070 factory.go:720] Updating pod condition for preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-5 to (PodScheduled==False, Reason=Unschedulable)
I0515 22:49:38.931393  109070 wrap.go:47] PATCH /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events/ppod-2.159efce45d596ed6: (4.065178ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50498]
I0515 22:49:38.934851  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-1: (5.77276ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50496]
I0515 22:49:38.934867  109070 wrap.go:47] PUT /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-5/status: (4.256974ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50492]
I0515 22:49:38.935187  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods: (5.50378ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50494]
I0515 22:49:38.935994  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-5: (3.649148ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50500]
I0515 22:49:38.938185  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-5: (2.590796ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50492]
I0515 22:49:38.938628  109070 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0515 22:49:38.938906  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-6
I0515 22:49:38.938941  109070 scheduler.go:452] Attempting to schedule pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-6
I0515 22:49:38.939192  109070 factory.go:649] Unable to schedule preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-6: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0515 22:49:38.939279  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods: (3.228801ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50494]
I0515 22:49:38.939294  109070 factory.go:720] Updating pod condition for preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-6 to (PodScheduled==False, Reason=Unschedulable)
I0515 22:49:38.942102  109070 wrap.go:47] PUT /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-6/status: (2.505874ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50500]
I0515 22:49:38.942372  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-6: (2.75541ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50494]
I0515 22:49:38.942929  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods: (2.804855ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50496]
I0515 22:49:38.945163  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events: (9.976932ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50498]
I0515 22:49:38.946487  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods: (2.394581ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50496]
I0515 22:49:38.947127  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-6: (3.027886ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50500]
I0515 22:49:38.949070  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events: (2.813391ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50498]
I0515 22:49:38.950566  109070 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0515 22:49:38.951292  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-7
I0515 22:49:38.951313  109070 scheduler.go:452] Attempting to schedule pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-7
I0515 22:49:38.951424  109070 factory.go:649] Unable to schedule preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-7: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0515 22:49:38.951526  109070 factory.go:720] Updating pod condition for preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-7 to (PodScheduled==False, Reason=Unschedulable)
I0515 22:49:38.953375  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods: (2.995106ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50496]
I0515 22:49:38.953852  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events: (1.653923ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50504]
I0515 22:49:38.955485  109070 wrap.go:47] PUT /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-7/status: (3.277611ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50494]
I0515 22:49:38.955862  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-7: (2.487371ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50502]
E0515 22:49:38.956843  109070 factory.go:686] pod is already present in the activeQ
I0515 22:49:38.957164  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods: (2.735314ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50496]
I0515 22:49:38.957541  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-7: (1.186589ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50494]
I0515 22:49:38.957898  109070 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0515 22:49:38.958303  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-8
I0515 22:49:38.958367  109070 scheduler.go:452] Attempting to schedule pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-8
I0515 22:49:38.958574  109070 factory.go:649] Unable to schedule preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-8: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0515 22:49:38.958636  109070 factory.go:720] Updating pod condition for preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-8 to (PodScheduled==False, Reason=Unschedulable)
I0515 22:49:38.961085  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods: (3.240634ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50502]
I0515 22:49:38.961391  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-8: (2.067857ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50504]
I0515 22:49:38.961812  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events: (1.672475ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50508]
I0515 22:49:38.962386  109070 wrap.go:47] PUT /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-8/status: (2.146744ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50506]
E0515 22:49:38.963251  109070 factory.go:686] pod is already present in the activeQ
E0515 22:49:38.964280  109070 factory.go:695] Error getting pod permit-plugin0bc52f0e-adde-4a94-9004-89ef0a977a19/test-pod for retry: Get http://127.0.0.1:32979/api/v1/namespaces/permit-plugin0bc52f0e-adde-4a94-9004-89ef0a977a19/pods/test-pod: dial tcp 127.0.0.1:32979: connect: connection refused; retrying...
I0515 22:49:38.966057  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods: (3.169642ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50504]
I0515 22:49:38.966853  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-8: (3.359092ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50508]
I0515 22:49:38.967171  109070 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0515 22:49:38.967419  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-9
I0515 22:49:38.967519  109070 scheduler.go:452] Attempting to schedule pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-9
I0515 22:49:38.967679  109070 factory.go:649] Unable to schedule preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-9: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0515 22:49:38.967791  109070 factory.go:720] Updating pod condition for preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-9 to (PodScheduled==False, Reason=Unschedulable)
I0515 22:49:38.970412  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods: (3.776623ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50504]
I0515 22:49:38.970944  109070 wrap.go:47] PUT /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-9/status: (2.862472ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50508]
I0515 22:49:38.971845  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events: (3.601709ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50502]
I0515 22:49:38.973214  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-9: (3.178916ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50514]
E0515 22:49:38.974564  109070 factory.go:686] pod is already present in the activeQ
I0515 22:49:38.974821  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods: (3.329506ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50508]
I0515 22:49:38.976585  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-9: (4.584015ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50504]
I0515 22:49:38.977145  109070 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0515 22:49:38.977364  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-10
I0515 22:49:38.977382  109070 scheduler.go:452] Attempting to schedule pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-10
I0515 22:49:38.977523  109070 factory.go:649] Unable to schedule preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-10: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0515 22:49:38.977572  109070 factory.go:720] Updating pod condition for preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-10 to (PodScheduled==False, Reason=Unschedulable)
I0515 22:49:38.978160  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods: (2.60504ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50502]
I0515 22:49:38.981259  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events: (1.690499ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50502]
I0515 22:49:38.982816  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods: (3.216763ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50518]
I0515 22:49:38.985482  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods: (2.067239ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50502]
I0515 22:49:38.985768  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-10: (7.430005ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50514]
I0515 22:49:38.987547  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods: (1.604611ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50502]
I0515 22:49:38.990340  109070 wrap.go:47] PUT /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-10/status: (12.366847ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50504]
I0515 22:49:38.991284  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods: (2.953887ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50514]
I0515 22:49:38.993206  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-10: (1.92897ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50504]
I0515 22:49:38.995204  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods: (3.021632ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50514]
I0515 22:49:38.995427  109070 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0515 22:49:38.995806  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-11
I0515 22:49:38.996025  109070 scheduler.go:452] Attempting to schedule pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-11
I0515 22:49:38.996189  109070 factory.go:649] Unable to schedule preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-11: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0515 22:49:38.996281  109070 factory.go:720] Updating pod condition for preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-11 to (PodScheduled==False, Reason=Unschedulable)
I0515 22:49:38.998724  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods: (2.805666ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50504]
I0515 22:49:38.999385  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events: (2.245749ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50522]
I0515 22:49:39.001051  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-11: (3.490162ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50520]
I0515 22:49:39.001702  109070 wrap.go:47] PUT /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-11/status: (4.629206ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50516]
I0515 22:49:39.002780  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods: (3.516657ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50504]
I0515 22:49:39.003535  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-11: (1.122306ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50520]
I0515 22:49:39.003915  109070 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0515 22:49:39.004156  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-12
I0515 22:49:39.004221  109070 scheduler.go:452] Attempting to schedule pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-12
I0515 22:49:39.004421  109070 factory.go:649] Unable to schedule preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-12: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0515 22:49:39.004570  109070 factory.go:720] Updating pod condition for preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-12 to (PodScheduled==False, Reason=Unschedulable)
I0515 22:49:39.007171  109070 wrap.go:47] PUT /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-12/status: (2.265584ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50520]
I0515 22:49:39.007348  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods: (4.06205ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50504]
I0515 22:49:39.007621  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-12: (2.755346ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50522]
I0515 22:49:39.009307  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events: (2.377547ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50524]
I0515 22:49:39.009896  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-12: (1.931551ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50520]
I0515 22:49:39.010414  109070 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0515 22:49:39.010939  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-13
I0515 22:49:39.010967  109070 scheduler.go:452] Attempting to schedule pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-13
I0515 22:49:39.011160  109070 factory.go:649] Unable to schedule preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-13: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0515 22:49:39.011272  109070 factory.go:720] Updating pod condition for preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-13 to (PodScheduled==False, Reason=Unschedulable)
I0515 22:49:39.015459  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events: (3.032738ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50526]
I0515 22:49:39.015921  109070 wrap.go:47] PUT /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-13/status: (4.169907ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50524]
I0515 22:49:39.016295  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-13: (3.982479ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50522]
E0515 22:49:39.016663  109070 factory.go:686] pod is already present in the activeQ
I0515 22:49:39.018953  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-13: (2.516465ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50524]
I0515 22:49:39.019386  109070 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0515 22:49:39.020005  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-14
I0515 22:49:39.020036  109070 scheduler.go:452] Attempting to schedule pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-14
I0515 22:49:39.020337  109070 factory.go:649] Unable to schedule preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-14: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0515 22:49:39.020401  109070 factory.go:720] Updating pod condition for preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-14 to (PodScheduled==False, Reason=Unschedulable)
I0515 22:49:39.023991  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events: (2.600172ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50524]
I0515 22:49:39.024286  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods: (6.964049ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50522]
I0515 22:49:39.030149  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-14: (7.74213ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50526]
I0515 22:49:39.031222  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods: (6.378686ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50522]
I0515 22:49:39.031543  109070 wrap.go:47] PUT /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-14/status: (2.868612ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50524]
I0515 22:49:39.035566  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods: (3.191513ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50522]
I0515 22:49:39.036705  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-14: (2.959687ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50524]
I0515 22:49:39.037951  109070 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0515 22:49:39.038199  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-15
I0515 22:49:39.038225  109070 scheduler.go:452] Attempting to schedule pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-15
I0515 22:49:39.038330  109070 factory.go:649] Unable to schedule preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-15: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0515 22:49:39.038388  109070 factory.go:720] Updating pod condition for preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-15 to (PodScheduled==False, Reason=Unschedulable)
I0515 22:49:39.039389  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods: (3.365405ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50522]
I0515 22:49:39.042942  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events: (3.359339ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50528]
I0515 22:49:39.045064  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods: (4.204026ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50522]
I0515 22:49:39.047177  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-15: (2.458608ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50526]
I0515 22:49:39.049700  109070 wrap.go:47] PUT /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-15/status: (2.835062ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50524]
I0515 22:49:39.051918  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-15: (1.725819ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50526]
I0515 22:49:39.052182  109070 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0515 22:49:39.052411  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-16
I0515 22:49:39.052466  109070 scheduler.go:452] Attempting to schedule pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-16
I0515 22:49:39.052636  109070 factory.go:649] Unable to schedule preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-16: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0515 22:49:39.052712  109070 factory.go:720] Updating pod condition for preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-16 to (PodScheduled==False, Reason=Unschedulable)
I0515 22:49:39.056243  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events: (2.619743ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50530]
I0515 22:49:39.056663  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-16: (3.341742ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50528]
I0515 22:49:39.061366  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods: (13.695237ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50522]
I0515 22:49:39.062277  109070 wrap.go:47] PUT /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-16/status: (8.965009ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50526]
I0515 22:49:39.064410  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-16: (1.470418ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50526]
I0515 22:49:39.064770  109070 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0515 22:49:39.065059  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-17
I0515 22:49:39.065085  109070 scheduler.go:452] Attempting to schedule pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-17
I0515 22:49:39.065196  109070 factory.go:649] Unable to schedule preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-17: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0515 22:49:39.065264  109070 factory.go:720] Updating pod condition for preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-17 to (PodScheduled==False, Reason=Unschedulable)
I0515 22:49:39.066341  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods: (3.668354ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50528]
I0515 22:49:39.068711  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events: (1.829133ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50532]
I0515 22:49:39.069382  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-17: (2.83457ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50530]
I0515 22:49:39.069406  109070 wrap.go:47] PUT /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-17/status: (3.836413ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50526]
I0515 22:49:39.071660  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods: (4.638261ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50528]
I0515 22:49:39.072719  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-17: (2.635936ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50530]
I0515 22:49:39.073012  109070 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0515 22:49:39.073209  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-18
I0515 22:49:39.073262  109070 scheduler.go:452] Attempting to schedule pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-18
I0515 22:49:39.073565  109070 factory.go:649] Unable to schedule preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-18: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0515 22:49:39.075820  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-18: (1.742012ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50530]
I0515 22:49:39.076800  109070 factory.go:720] Updating pod condition for preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-18 to (PodScheduled==False, Reason=Unschedulable)
I0515 22:49:39.077601  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events: (3.462928ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50532]
I0515 22:49:39.080778  109070 wrap.go:47] PUT /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-18/status: (2.819301ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50532]
I0515 22:49:39.082545  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-18: (1.336429ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50532]
I0515 22:49:39.082781  109070 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0515 22:49:39.083097  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-19
I0515 22:49:39.083161  109070 scheduler.go:452] Attempting to schedule pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-19
I0515 22:49:39.083279  109070 factory.go:649] Unable to schedule preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-19: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0515 22:49:39.083332  109070 factory.go:720] Updating pod condition for preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-19 to (PodScheduled==False, Reason=Unschedulable)
I0515 22:49:39.085416  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods: (13.227444ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50528]
I0515 22:49:39.087030  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events: (2.303857ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50532]
I0515 22:49:39.088723  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods: (2.768349ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50528]
I0515 22:49:39.092376  109070 wrap.go:47] PUT /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-19/status: (3.133227ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50530]
I0515 22:49:39.102645  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods: (12.953268ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50528]
I0515 22:49:39.103002  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-19: (14.385361ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50532]
E0515 22:49:39.103338  109070 factory.go:686] pod is already present in the activeQ
I0515 22:49:39.103675  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-19: (10.324176ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50530]
I0515 22:49:39.103906  109070 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0515 22:49:39.104541  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-20
I0515 22:49:39.104562  109070 scheduler.go:452] Attempting to schedule pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-20
I0515 22:49:39.104721  109070 factory.go:649] Unable to schedule preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-20: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0515 22:49:39.104794  109070 factory.go:720] Updating pod condition for preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-20 to (PodScheduled==False, Reason=Unschedulable)
I0515 22:49:39.109552  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-20: (3.907872ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50532]
I0515 22:49:39.110170  109070 wrap.go:47] PUT /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-20/status: (4.086974ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50530]
I0515 22:49:39.110723  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events: (5.049205ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50538]
I0515 22:49:39.113067  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-20: (1.519421ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50530]
I0515 22:49:39.113374  109070 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0515 22:49:39.113609  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-21
I0515 22:49:39.113633  109070 scheduler.go:452] Attempting to schedule pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-21
I0515 22:49:39.113743  109070 factory.go:649] Unable to schedule preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-21: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0515 22:49:39.113803  109070 factory.go:720] Updating pod condition for preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-21 to (PodScheduled==False, Reason=Unschedulable)
I0515 22:49:39.115685  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-21: (1.076097ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50538]
I0515 22:49:39.116123  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events: (1.48348ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50540]
I0515 22:49:39.117298  109070 wrap.go:47] PUT /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-21/status: (3.220913ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50532]
I0515 22:49:39.119055  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-21: (1.254628ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50540]
I0515 22:49:39.120070  109070 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0515 22:49:39.120297  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-22
I0515 22:49:39.120324  109070 scheduler.go:452] Attempting to schedule pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-22
I0515 22:49:39.120435  109070 factory.go:649] Unable to schedule preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-22: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0515 22:49:39.120541  109070 factory.go:720] Updating pod condition for preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-22 to (PodScheduled==False, Reason=Unschedulable)
I0515 22:49:39.122978  109070 wrap.go:47] PUT /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-22/status: (2.163783ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50540]
I0515 22:49:39.123139  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-22: (1.856155ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50538]
I0515 22:49:39.124811  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-22: (1.192896ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50540]
I0515 22:49:39.125052  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events: (3.753273ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50542]
I0515 22:49:39.125065  109070 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0515 22:49:39.125352  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-23
I0515 22:49:39.125377  109070 scheduler.go:452] Attempting to schedule pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-23
I0515 22:49:39.125480  109070 factory.go:649] Unable to schedule preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-23: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0515 22:49:39.125571  109070 factory.go:720] Updating pod condition for preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-23 to (PodScheduled==False, Reason=Unschedulable)
I0515 22:49:39.129043  109070 wrap.go:47] PUT /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-23/status: (3.229361ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50540]
I0515 22:49:39.129415  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-23: (3.597971ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50538]
E0515 22:49:39.130099  109070 factory.go:686] pod is already present in the activeQ
I0515 22:49:39.130282  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events: (3.975421ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50544]
I0515 22:49:39.131335  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-23: (1.480957ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50540]
I0515 22:49:39.131678  109070 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0515 22:49:39.131881  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-24
I0515 22:49:39.131941  109070 scheduler.go:452] Attempting to schedule pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-24
I0515 22:49:39.132126  109070 factory.go:649] Unable to schedule preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-24: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0515 22:49:39.132286  109070 factory.go:720] Updating pod condition for preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-24 to (PodScheduled==False, Reason=Unschedulable)
I0515 22:49:39.133679  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-24: (1.123366ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50544]
I0515 22:49:39.134539  109070 wrap.go:47] PUT /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-24/status: (1.890294ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50538]
I0515 22:49:39.135219  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events: (1.99709ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50546]
I0515 22:49:39.136814  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-24: (1.383966ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50538]
I0515 22:49:39.137154  109070 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0515 22:49:39.137378  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-7
I0515 22:49:39.137401  109070 scheduler.go:452] Attempting to schedule pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-7
I0515 22:49:39.137558  109070 factory.go:649] Unable to schedule preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-7: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0515 22:49:39.137660  109070 factory.go:720] Updating pod condition for preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-7 to (PodScheduled==False, Reason=Unschedulable)
I0515 22:49:39.139423  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-7: (1.54949ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50546]
I0515 22:49:39.139433  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-7: (1.461394ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50544]
I0515 22:49:39.140195  109070 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0515 22:49:39.140350  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-25
I0515 22:49:39.140374  109070 scheduler.go:452] Attempting to schedule pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-25
I0515 22:49:39.140538  109070 factory.go:649] Unable to schedule preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-25: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0515 22:49:39.140605  109070 factory.go:720] Updating pod condition for preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-25 to (PodScheduled==False, Reason=Unschedulable)
I0515 22:49:39.141233  109070 wrap.go:47] PATCH /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events/ppod-7.159efce46276e566: (2.584116ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50548]
I0515 22:49:39.143041  109070 wrap.go:47] PUT /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-25/status: (2.029719ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50546]
I0515 22:49:39.143945  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-25: (1.912611ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50544]
E0515 22:49:39.144573  109070 factory.go:686] pod is already present in the activeQ
I0515 22:49:39.144942  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-25: (1.264419ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50546]
I0515 22:49:39.145874  109070 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0515 22:49:39.146035  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-26
I0515 22:49:39.146056  109070 scheduler.go:452] Attempting to schedule pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-26
I0515 22:49:39.146169  109070 factory.go:649] Unable to schedule preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-26: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0515 22:49:39.146225  109070 factory.go:720] Updating pod condition for preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-26 to (PodScheduled==False, Reason=Unschedulable)
I0515 22:49:39.146347  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events: (2.018277ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50548]
I0515 22:49:39.148913  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events: (2.012048ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50544]
I0515 22:49:39.149363  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-26: (2.695729ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50548]
I0515 22:49:39.149384  109070 wrap.go:47] PUT /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-26/status: (2.935983ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50546]
I0515 22:49:39.150834  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-26: (1.038306ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50546]
I0515 22:49:39.151132  109070 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0515 22:49:39.151321  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-8
I0515 22:49:39.151341  109070 scheduler.go:452] Attempting to schedule pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-8
I0515 22:49:39.151468  109070 factory.go:649] Unable to schedule preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-8: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0515 22:49:39.151553  109070 factory.go:720] Updating pod condition for preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-8 to (PodScheduled==False, Reason=Unschedulable)
I0515 22:49:39.152899  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-8: (1.057859ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50544]
I0515 22:49:39.153363  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-8: (1.232203ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50546]
I0515 22:49:39.154681  109070 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0515 22:49:39.154971  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-27
I0515 22:49:39.155043  109070 scheduler.go:452] Attempting to schedule pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-27
I0515 22:49:39.155254  109070 factory.go:649] Unable to schedule preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-27: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0515 22:49:39.155347  109070 factory.go:720] Updating pod condition for preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-27 to (PodScheduled==False, Reason=Unschedulable)
I0515 22:49:39.156837  109070 wrap.go:47] PATCH /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events/ppod-8.159efce462e3f80e: (3.443738ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50544]
I0515 22:49:39.157601  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-27: (1.677604ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50546]
I0515 22:49:39.159094  109070 wrap.go:47] PUT /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-27/status: (3.153898ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50550]
I0515 22:49:39.160409  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events: (2.080765ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50546]
I0515 22:49:39.161244  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-27: (1.218004ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50550]
I0515 22:49:39.161608  109070 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0515 22:49:39.161821  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-9
I0515 22:49:39.161846  109070 scheduler.go:452] Attempting to schedule pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-9
I0515 22:49:39.162005  109070 factory.go:649] Unable to schedule preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-9: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0515 22:49:39.162074  109070 factory.go:720] Updating pod condition for preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-9 to (PodScheduled==False, Reason=Unschedulable)
I0515 22:49:39.164428  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-9: (1.807935ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50544]
I0515 22:49:39.164649  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-9: (2.006807ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50546]
I0515 22:49:39.164900  109070 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0515 22:49:39.165196  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-28
I0515 22:49:39.165223  109070 scheduler.go:452] Attempting to schedule pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-28
I0515 22:49:39.165371  109070 factory.go:649] Unable to schedule preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-28: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0515 22:49:39.165437  109070 factory.go:720] Updating pod condition for preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-28 to (PodScheduled==False, Reason=Unschedulable)
I0515 22:49:39.167167  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-28: (1.345554ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50544]
I0515 22:49:39.167533  109070 wrap.go:47] PATCH /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events/ppod-9.159efce4636f8e92: (3.687522ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50552]
I0515 22:49:39.168230  109070 wrap.go:47] PUT /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-28/status: (2.384415ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50546]
I0515 22:49:39.169945  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events: (1.379325ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50552]
E0515 22:49:39.170116  109070 factory.go:686] pod is already present in the activeQ
I0515 22:49:39.171602  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-28: (2.524574ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50546]
I0515 22:49:39.172104  109070 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0515 22:49:39.172437  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-29
I0515 22:49:39.172483  109070 scheduler.go:452] Attempting to schedule pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-29
I0515 22:49:39.172732  109070 factory.go:649] Unable to schedule preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-29: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0515 22:49:39.172819  109070 factory.go:720] Updating pod condition for preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-29 to (PodScheduled==False, Reason=Unschedulable)
I0515 22:49:39.174944  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-29: (1.758831ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50544]
I0515 22:49:39.176947  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events: (3.239757ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50554]
I0515 22:49:39.177057  109070 wrap.go:47] PUT /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-29/status: (3.879009ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50552]
I0515 22:49:39.178765  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-29: (1.082956ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50554]
I0515 22:49:39.179079  109070 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0515 22:49:39.179260  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-30
I0515 22:49:39.179295  109070 scheduler.go:452] Attempting to schedule pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-30
I0515 22:49:39.179466  109070 factory.go:649] Unable to schedule preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-30: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0515 22:49:39.179549  109070 factory.go:720] Updating pod condition for preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-30 to (PodScheduled==False, Reason=Unschedulable)
I0515 22:49:39.181913  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events: (1.611409ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50556]
I0515 22:49:39.181931  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-30: (1.624901ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50544]
I0515 22:49:39.182543  109070 wrap.go:47] PUT /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-30/status: (2.747835ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50554]
I0515 22:49:39.184080  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-30: (1.117762ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50544]
I0515 22:49:39.184350  109070 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0515 22:49:39.184621  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-31
I0515 22:49:39.184645  109070 scheduler.go:452] Attempting to schedule pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-31
I0515 22:49:39.184751  109070 factory.go:649] Unable to schedule preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-31: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0515 22:49:39.184803  109070 factory.go:720] Updating pod condition for preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-31 to (PodScheduled==False, Reason=Unschedulable)
I0515 22:49:39.186843  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events: (1.245443ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50558]
I0515 22:49:39.188353  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-31: (2.361724ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50556]
I0515 22:49:39.188621  109070 wrap.go:47] PUT /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-31/status: (3.572766ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50544]
E0515 22:49:39.188957  109070 factory.go:686] pod is already present in the activeQ
I0515 22:49:39.190908  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-31: (1.067381ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50556]
I0515 22:49:39.191208  109070 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0515 22:49:39.191433  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-32
I0515 22:49:39.191466  109070 scheduler.go:452] Attempting to schedule pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-32
I0515 22:49:39.191610  109070 factory.go:649] Unable to schedule preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-32: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0515 22:49:39.191714  109070 factory.go:720] Updating pod condition for preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-32 to (PodScheduled==False, Reason=Unschedulable)
I0515 22:49:39.193318  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-32: (1.012614ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50558]
I0515 22:49:39.193949  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events: (1.497684ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50560]
I0515 22:49:39.195621  109070 wrap.go:47] PUT /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-32/status: (3.259244ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50556]
I0515 22:49:39.197434  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-32: (1.243834ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50560]
I0515 22:49:39.197806  109070 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0515 22:49:39.198000  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-33
I0515 22:49:39.198021  109070 scheduler.go:452] Attempting to schedule pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-33
I0515 22:49:39.198152  109070 factory.go:649] Unable to schedule preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-33: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0515 22:49:39.198213  109070 factory.go:720] Updating pod condition for preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-33 to (PodScheduled==False, Reason=Unschedulable)
I0515 22:49:39.200161  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-33: (1.621536ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50558]
I0515 22:49:39.200530  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events: (1.587979ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50562]
I0515 22:49:39.201084  109070 wrap.go:47] PUT /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-33/status: (2.509369ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50560]
I0515 22:49:39.202776  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-33: (1.154111ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50560]
I0515 22:49:39.203141  109070 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0515 22:49:39.203295  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-34
I0515 22:49:39.203318  109070 scheduler.go:452] Attempting to schedule pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-34
I0515 22:49:39.203406  109070 factory.go:649] Unable to schedule preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-34: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0515 22:49:39.203476  109070 factory.go:720] Updating pod condition for preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-34 to (PodScheduled==False, Reason=Unschedulable)
I0515 22:49:39.205942  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-34: (1.658999ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50562]
I0515 22:49:39.206171  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events: (1.633574ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50564]
I0515 22:49:39.207484  109070 wrap.go:47] PUT /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-34/status: (3.126444ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50558]
I0515 22:49:39.209745  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-34: (1.347739ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50564]
I0515 22:49:39.209969  109070 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0515 22:49:39.210143  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-35
I0515 22:49:39.210166  109070 scheduler.go:452] Attempting to schedule pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-35
I0515 22:49:39.210252  109070 factory.go:649] Unable to schedule preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-35: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0515 22:49:39.210301  109070 factory.go:720] Updating pod condition for preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-35 to (PodScheduled==False, Reason=Unschedulable)
I0515 22:49:39.214315  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events: (2.406665ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50566]
I0515 22:49:39.214470  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/preemptor-pod: (1.646276ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50568]
I0515 22:49:39.214784  109070 wrap.go:47] PUT /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-35/status: (4.197898ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50564]
I0515 22:49:39.214790  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-35: (2.937977ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50562]
E0515 22:49:39.215763  109070 factory.go:686] pod is already present in the activeQ
I0515 22:49:39.217161  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-35: (1.138299ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50566]
I0515 22:49:39.217719  109070 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0515 22:49:39.217914  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-36
I0515 22:49:39.217937  109070 scheduler.go:452] Attempting to schedule pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-36
I0515 22:49:39.218070  109070 factory.go:649] Unable to schedule preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-36: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0515 22:49:39.218132  109070 factory.go:720] Updating pod condition for preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-36 to (PodScheduled==False, Reason=Unschedulable)
I0515 22:49:39.221560  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events: (2.593718ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50570]
I0515 22:49:39.221669  109070 wrap.go:47] PUT /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-36/status: (3.267774ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50566]
I0515 22:49:39.223530  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-36: (1.391727ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50568]
E0515 22:49:39.223845  109070 factory.go:686] pod is already present in the activeQ
I0515 22:49:39.224118  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-36: (1.31437ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50566]
I0515 22:49:39.224442  109070 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0515 22:49:39.224692  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-37
I0515 22:49:39.224715  109070 scheduler.go:452] Attempting to schedule pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-37
I0515 22:49:39.224816  109070 factory.go:649] Unable to schedule preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-37: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0515 22:49:39.224885  109070 factory.go:720] Updating pod condition for preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-37 to (PodScheduled==False, Reason=Unschedulable)
I0515 22:49:39.226955  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-37: (1.280512ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50570]
I0515 22:49:39.227888  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events: (1.723749ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50572]
I0515 22:49:39.228377  109070 wrap.go:47] PUT /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-37/status: (3.212962ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50568]
I0515 22:49:39.230076  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-37: (1.093327ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50572]
I0515 22:49:39.230390  109070 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0515 22:49:39.230630  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-38
I0515 22:49:39.230655  109070 scheduler.go:452] Attempting to schedule pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-38
I0515 22:49:39.230767  109070 factory.go:649] Unable to schedule preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-38: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0515 22:49:39.230832  109070 factory.go:720] Updating pod condition for preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-38 to (PodScheduled==False, Reason=Unschedulable)
I0515 22:49:39.233329  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-38: (1.94331ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50570]
I0515 22:49:39.234186  109070 wrap.go:47] PUT /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-38/status: (2.625728ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50572]
I0515 22:49:39.234944  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events: (1.369192ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50574]
I0515 22:49:39.236205  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-38: (1.144949ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50572]
I0515 22:49:39.236577  109070 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0515 22:49:39.237522  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-13
I0515 22:49:39.237548  109070 scheduler.go:452] Attempting to schedule pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-13
I0515 22:49:39.237669  109070 factory.go:649] Unable to schedule preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-13: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0515 22:49:39.237723  109070 factory.go:720] Updating pod condition for preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-13 to (PodScheduled==False, Reason=Unschedulable)
I0515 22:49:39.239001  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-13: (1.063288ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50574]
I0515 22:49:39.239913  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-13: (1.632031ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50570]
I0515 22:49:39.240214  109070 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0515 22:49:39.240686  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-39
I0515 22:49:39.240748  109070 scheduler.go:452] Attempting to schedule pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-39
I0515 22:49:39.240917  109070 factory.go:649] Unable to schedule preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-39: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0515 22:49:39.241012  109070 factory.go:720] Updating pod condition for preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-39 to (PodScheduled==False, Reason=Unschedulable)
I0515 22:49:39.241298  109070 wrap.go:47] PATCH /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events/ppod-13.159efce4660664fe: (2.408521ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50576]
I0515 22:49:39.242842  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-39: (1.014248ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50574]
I0515 22:49:39.244067  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events: (2.02102ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50576]
I0515 22:49:39.244581  109070 wrap.go:47] PUT /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-39/status: (2.995302ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50570]
I0515 22:49:39.246369  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-39: (1.200633ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50576]
I0515 22:49:39.246708  109070 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0515 22:49:39.246929  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-40
I0515 22:49:39.246953  109070 scheduler.go:452] Attempting to schedule pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-40
I0515 22:49:39.247078  109070 factory.go:649] Unable to schedule preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-40: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0515 22:49:39.247139  109070 factory.go:720] Updating pod condition for preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-40 to (PodScheduled==False, Reason=Unschedulable)
I0515 22:49:39.249941  109070 wrap.go:47] PUT /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-40/status: (2.550299ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50576]
I0515 22:49:39.250097  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-40: (2.025706ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50574]
I0515 22:49:39.250129  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events: (1.786735ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50578]
E0515 22:49:39.250754  109070 factory.go:686] pod is already present in the activeQ
I0515 22:49:39.252325  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-40: (1.22597ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50578]
I0515 22:49:39.253470  109070 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0515 22:49:39.253693  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-41
I0515 22:49:39.253719  109070 scheduler.go:452] Attempting to schedule pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-41
I0515 22:49:39.253862  109070 factory.go:649] Unable to schedule preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-41: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0515 22:49:39.253939  109070 factory.go:720] Updating pod condition for preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-41 to (PodScheduled==False, Reason=Unschedulable)
I0515 22:49:39.256718  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events: (2.38921ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50576]
I0515 22:49:39.257905  109070 wrap.go:47] PUT /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-41/status: (3.31258ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50580]
I0515 22:49:39.258870  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-41: (4.707901ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50578]
E0515 22:49:39.259173  109070 factory.go:686] pod is already present in the activeQ
I0515 22:49:39.259604  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-41: (1.074556ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50580]
I0515 22:49:39.259929  109070 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0515 22:49:39.260118  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-42
I0515 22:49:39.260256  109070 scheduler.go:452] Attempting to schedule pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-42
I0515 22:49:39.260514  109070 factory.go:649] Unable to schedule preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-42: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0515 22:49:39.260626  109070 factory.go:720] Updating pod condition for preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-42 to (PodScheduled==False, Reason=Unschedulable)
I0515 22:49:39.262986  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events: (2.031833ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50576]
I0515 22:49:39.263256  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-42: (2.339056ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50578]
I0515 22:49:39.264984  109070 wrap.go:47] PUT /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-42/status: (3.027015ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50582]
I0515 22:49:39.266734  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-42: (1.16872ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50578]
I0515 22:49:39.266978  109070 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0515 22:49:39.267154  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-43
I0515 22:49:39.267181  109070 scheduler.go:452] Attempting to schedule pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-43
I0515 22:49:39.267358  109070 factory.go:649] Unable to schedule preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-43: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0515 22:49:39.267417  109070 factory.go:720] Updating pod condition for preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-43 to (PodScheduled==False, Reason=Unschedulable)
I0515 22:49:39.269602  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events: (1.76173ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50576]
I0515 22:49:39.269874  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-43: (2.12019ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50578]
I0515 22:49:39.271779  109070 wrap.go:47] PUT /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-43/status: (2.229583ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50584]
I0515 22:49:39.273711  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-43: (1.217739ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50578]
I0515 22:49:39.274035  109070 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0515 22:49:39.274257  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-44
I0515 22:49:39.274283  109070 scheduler.go:452] Attempting to schedule pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-44
I0515 22:49:39.274401  109070 factory.go:649] Unable to schedule preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-44: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0515 22:49:39.274470  109070 factory.go:720] Updating pod condition for preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-44 to (PodScheduled==False, Reason=Unschedulable)
I0515 22:49:39.277420  109070 wrap.go:47] PUT /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-44/status: (2.652271ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50578]
I0515 22:49:39.278010  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events: (3.037152ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50576]
I0515 22:49:39.278120  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-44: (1.943768ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50586]
E0515 22:49:39.278419  109070 factory.go:686] pod is already present in the activeQ
I0515 22:49:39.280229  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-44: (1.295959ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50576]
I0515 22:49:39.280560  109070 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0515 22:49:39.280773  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-45
I0515 22:49:39.280799  109070 scheduler.go:452] Attempting to schedule pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-45
I0515 22:49:39.280926  109070 factory.go:649] Unable to schedule preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-45: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0515 22:49:39.280984  109070 factory.go:720] Updating pod condition for preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-45 to (PodScheduled==False, Reason=Unschedulable)
I0515 22:49:39.282825  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-45: (1.251487ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50588]
I0515 22:49:39.283797  109070 wrap.go:47] PUT /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-45/status: (2.375338ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50586]
I0515 22:49:39.284882  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events: (3.38907ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50576]
I0515 22:49:39.285962  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-45: (1.291103ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50586]
I0515 22:49:39.286310  109070 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0515 22:49:39.286624  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-46
I0515 22:49:39.286652  109070 scheduler.go:452] Attempting to schedule pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-46
I0515 22:49:39.286775  109070 factory.go:649] Unable to schedule preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-46: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0515 22:49:39.286829  109070 factory.go:720] Updating pod condition for preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-46 to (PodScheduled==False, Reason=Unschedulable)
I0515 22:49:39.289220  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-46: (1.245311ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50588]
I0515 22:49:39.290883  109070 wrap.go:47] PUT /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-46/status: (3.810123ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50576]
I0515 22:49:39.291140  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events: (2.652608ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50590]
I0515 22:49:39.293041  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-46: (1.338143ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50576]
I0515 22:49:39.293398  109070 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0515 22:49:39.293669  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-47
I0515 22:49:39.293694  109070 scheduler.go:452] Attempting to schedule pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-47
I0515 22:49:39.293831  109070 factory.go:649] Unable to schedule preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-47: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0515 22:49:39.293906  109070 factory.go:720] Updating pod condition for preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-47 to (PodScheduled==False, Reason=Unschedulable)
I0515 22:49:39.296211  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-47: (1.908308ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50588]
I0515 22:49:39.296545  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events: (1.801296ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50592]
I0515 22:49:39.297759  109070 wrap.go:47] PUT /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-47/status: (3.533149ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50590]
I0515 22:49:39.299556  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-47: (1.183501ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50592]
I0515 22:49:39.299901  109070 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0515 22:49:39.300194  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-48
I0515 22:49:39.300225  109070 scheduler.go:452] Attempting to schedule pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-48
I0515 22:49:39.300355  109070 factory.go:649] Unable to schedule preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-48: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0515 22:49:39.300423  109070 factory.go:720] Updating pod condition for preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-48 to (PodScheduled==False, Reason=Unschedulable)
I0515 22:49:39.302311  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-48: (1.277264ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50588]
I0515 22:49:39.303584  109070 wrap.go:47] PUT /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-48/status: (2.504053ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50592]
I0515 22:49:39.305158  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events: (3.917738ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50594]
I0515 22:49:39.305249  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-48: (1.191484ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50592]
I0515 22:49:39.305561  109070 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0515 22:49:39.305931  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-19
I0515 22:49:39.305959  109070 scheduler.go:452] Attempting to schedule pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-19
I0515 22:49:39.306106  109070 factory.go:649] Unable to schedule preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-19: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0515 22:49:39.306225  109070 factory.go:720] Updating pod condition for preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-19 to (PodScheduled==False, Reason=Unschedulable)
I0515 22:49:39.308992  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-19: (2.510351ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50592]
I0515 22:49:39.310264  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-19: (2.758593ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50588]
I0515 22:49:39.313339  109070 wrap.go:47] PATCH /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events/ppod-19.159efce46a52b22a: (2.296559ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50588]
I0515 22:49:39.313686  109070 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0515 22:49:39.313945  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-49
I0515 22:49:39.313978  109070 scheduler.go:452] Attempting to schedule pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-49
I0515 22:49:39.314162  109070 factory.go:649] Unable to schedule preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-49: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0515 22:49:39.314351  109070 factory.go:720] Updating pod condition for preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-49 to (PodScheduled==False, Reason=Unschedulable)
I0515 22:49:39.316541  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-49: (1.776855ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50592]
I0515 22:49:39.316822  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events: (1.772836ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50598]
I0515 22:49:39.317811  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/preemptor-pod: (1.59474ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50600]
I0515 22:49:39.318303  109070 wrap.go:47] PUT /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-49/status: (3.304338ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50588]
E0515 22:49:39.318810  109070 factory.go:686] pod is already present in the activeQ
I0515 22:49:39.320742  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-49: (1.336828ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50598]
I0515 22:49:39.321060  109070 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0515 22:49:39.321271  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-23
I0515 22:49:39.321294  109070 scheduler.go:452] Attempting to schedule pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-23
I0515 22:49:39.321432  109070 factory.go:649] Unable to schedule preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-23: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0515 22:49:39.321540  109070 factory.go:720] Updating pod condition for preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-23 to (PodScheduled==False, Reason=Unschedulable)
I0515 22:49:39.323659  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-23: (1.388703ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50598]
I0515 22:49:39.324724  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-23: (2.499918ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50592]
I0515 22:49:39.325746  109070 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0515 22:49:39.326010  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-25
I0515 22:49:39.326038  109070 scheduler.go:452] Attempting to schedule pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-25
I0515 22:49:39.326153  109070 factory.go:649] Unable to schedule preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-25: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0515 22:49:39.326226  109070 factory.go:720] Updating pod condition for preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-25 to (PodScheduled==False, Reason=Unschedulable)
I0515 22:49:39.327694  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-25: (1.205723ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50592]
I0515 22:49:39.329106  109070 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0515 22:49:39.329267  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-28
I0515 22:49:39.329337  109070 scheduler.go:452] Attempting to schedule pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-28
I0515 22:49:39.329466  109070 factory.go:649] Unable to schedule preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-28: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0515 22:49:39.329604  109070 factory.go:720] Updating pod condition for preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-28 to (PodScheduled==False, Reason=Unschedulable)
I0515 22:49:39.329982  109070 wrap.go:47] PATCH /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events/ppod-23.159efce46cd72dd2: (7.672335ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50602]
I0515 22:49:39.332285  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-25: (5.14636ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50598]
I0515 22:49:39.332673  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-28: (2.444083ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50592]
I0515 22:49:39.332906  109070 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0515 22:49:39.333103  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-31
I0515 22:49:39.333130  109070 scheduler.go:452] Attempting to schedule pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-31
I0515 22:49:39.333253  109070 factory.go:649] Unable to schedule preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-31: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0515 22:49:39.333325  109070 factory.go:720] Updating pod condition for preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-31 to (PodScheduled==False, Reason=Unschedulable)
I0515 22:49:39.334262  109070 wrap.go:47] PATCH /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events/ppod-25.159efce46dbc78ea: (3.511871ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50602]
I0515 22:49:39.335238  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-31: (1.219395ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50598]
I0515 22:49:39.335389  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-28: (4.7313ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50604]
I0515 22:49:39.337002  109070 wrap.go:47] PATCH /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events/ppod-28.159efce46f37721b: (1.96716ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50602]
I0515 22:49:39.337201  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-31: (2.698255ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50592]
I0515 22:49:39.337550  109070 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0515 22:49:39.337811  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-35
I0515 22:49:39.337874  109070 scheduler.go:452] Attempting to schedule pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-35
I0515 22:49:39.337989  109070 factory.go:649] Unable to schedule preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-35: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0515 22:49:39.338049  109070 factory.go:720] Updating pod condition for preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-35 to (PodScheduled==False, Reason=Unschedulable)
I0515 22:49:39.340017  109070 wrap.go:47] PATCH /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events/ppod-31.159efce4705efdd4: (2.190584ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50604]
I0515 22:49:39.340332  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-35: (2.096747ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50598]
I0515 22:49:39.340600  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-35: (1.87958ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50606]
I0515 22:49:39.341205  109070 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0515 22:49:39.341486  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-36
I0515 22:49:39.341583  109070 scheduler.go:452] Attempting to schedule pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-36
I0515 22:49:39.341759  109070 factory.go:649] Unable to schedule preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-36: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0515 22:49:39.341828  109070 factory.go:720] Updating pod condition for preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-36 to (PodScheduled==False, Reason=Unschedulable)
I0515 22:49:39.343125  109070 wrap.go:47] PATCH /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events/ppod-35.159efce471e414df: (2.033095ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50606]
I0515 22:49:39.343948  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-36: (1.24236ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50604]
I0515 22:49:39.344003  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-36: (1.267249ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50598]
I0515 22:49:39.344198  109070 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0515 22:49:39.344342  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-40
I0515 22:49:39.344362  109070 scheduler.go:452] Attempting to schedule pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-40
I0515 22:49:39.344438  109070 factory.go:649] Unable to schedule preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-40: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0515 22:49:39.344525  109070 factory.go:720] Updating pod condition for preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-40 to (PodScheduled==False, Reason=Unschedulable)
I0515 22:49:39.346046  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-40: (1.222296ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50606]
I0515 22:49:39.346289  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-40: (1.58381ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50604]
I0515 22:49:39.346711  109070 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0515 22:49:39.347976  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-41
I0515 22:49:39.348007  109070 scheduler.go:452] Attempting to schedule pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-41
I0515 22:49:39.348162  109070 factory.go:649] Unable to schedule preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-41: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0515 22:49:39.348241  109070 factory.go:720] Updating pod condition for preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-41 to (PodScheduled==False, Reason=Unschedulable)
I0515 22:49:39.350809  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-41: (2.333696ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50606]
I0515 22:49:39.351718  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-41: (3.143097ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50604]
I0515 22:49:39.353220  109070 wrap.go:47] PATCH /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events/ppod-36.159efce4725b99f9: (4.639956ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50608]
I0515 22:49:39.353288  109070 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0515 22:49:39.353466  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-44
I0515 22:49:39.353546  109070 scheduler.go:452] Attempting to schedule pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-44
I0515 22:49:39.353747  109070 factory.go:649] Unable to schedule preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-44: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0515 22:49:39.353849  109070 factory.go:720] Updating pod condition for preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-44 to (PodScheduled==False, Reason=Unschedulable)
I0515 22:49:39.357020  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-44: (1.68361ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50610]
I0515 22:49:39.357034  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-44: (2.683354ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50606]
I0515 22:49:39.357283  109070 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0515 22:49:39.357441  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-49
I0515 22:49:39.357474  109070 scheduler.go:452] Attempting to schedule pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-49
I0515 22:49:39.357700  109070 factory.go:649] Unable to schedule preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-49: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0515 22:49:39.357755  109070 factory.go:720] Updating pod condition for preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-49 to (PodScheduled==False, Reason=Unschedulable)
I0515 22:49:39.358717  109070 wrap.go:47] PATCH /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events/ppod-40.159efce4741622d8: (4.562737ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50604]
I0515 22:49:39.358993  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-49: (1.051905ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50606]
I0515 22:49:39.359242  109070 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0515 22:49:39.359770  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-49: (1.643013ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50610]
I0515 22:49:39.361636  109070 wrap.go:47] PATCH /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events/ppod-41.159efce4747dc984: (2.102753ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50606]
I0515 22:49:39.364589  109070 wrap.go:47] PATCH /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events/ppod-44.159efce475b7385a: (2.123483ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50610]
I0515 22:49:39.367191  109070 wrap.go:47] PATCH /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events/ppod-49.159efce47817b125: (1.93615ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50610]
I0515 22:49:39.418304  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/preemptor-pod: (2.144988ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50610]
I0515 22:49:39.518524  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/preemptor-pod: (2.288239ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50610]
I0515 22:49:39.622534  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/preemptor-pod: (1.773845ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50610]
I0515 22:49:39.718223  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/preemptor-pod: (1.917304ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50610]
I0515 22:49:39.818088  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/preemptor-pod: (1.915533ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50610]
I0515 22:49:39.873395  109070 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0515 22:49:39.873869  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-0
I0515 22:49:39.873895  109070 scheduler.go:452] Attempting to schedule pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-0
I0515 22:49:39.874058  109070 factory.go:649] Unable to schedule preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-0: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0515 22:49:39.874101  109070 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0515 22:49:39.874119  109070 factory.go:720] Updating pod condition for preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-0 to (PodScheduled==False, Reason=Unschedulable)
I0515 22:49:39.874323  109070 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0515 22:49:39.874342  109070 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0515 22:49:39.875919  109070 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0515 22:49:39.877951  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-0: (2.851085ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50610]
I0515 22:49:39.878230  109070 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0515 22:49:39.878434  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-2
I0515 22:49:39.878474  109070 scheduler.go:452] Attempting to schedule pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-2
I0515 22:49:39.878604  109070 factory.go:649] Unable to schedule preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-2: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0515 22:49:39.878647  109070 factory.go:720] Updating pod condition for preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-2 to (PodScheduled==False, Reason=Unschedulable)
I0515 22:49:39.879399  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-0: (4.350258ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50604]
I0515 22:49:39.882702  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-2: (2.59058ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50610]
I0515 22:49:39.883035  109070 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0515 22:49:39.883408  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-1
I0515 22:49:39.883425  109070 scheduler.go:452] Attempting to schedule pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-1
I0515 22:49:39.883683  109070 factory.go:649] Unable to schedule preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-1: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0515 22:49:39.883743  109070 factory.go:720] Updating pod condition for preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-1 to (PodScheduled==False, Reason=Unschedulable)
I0515 22:49:39.887843  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-1: (3.602569ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50616]
I0515 22:49:39.888481  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-2: (7.995014ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50604]
I0515 22:49:39.887490  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-1: (2.820555ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50610]
I0515 22:49:39.889486  109070 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0515 22:49:39.904891  109070 wrap.go:47] PATCH /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events/ppod-0.159efce45c0256c3: (29.655448ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50614]
I0515 22:49:39.908742  109070 wrap.go:47] PATCH /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events/ppod-2.159efce45d596ed6: (2.823316ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50604]
I0515 22:49:39.916281  109070 wrap.go:47] PATCH /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events/ppod-1.159efce45ce27d43: (6.80846ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50604]
I0515 22:49:39.918114  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/preemptor-pod: (1.857857ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50616]
I0515 22:49:40.019229  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/preemptor-pod: (2.901302ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50616]
I0515 22:49:40.118042  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/preemptor-pod: (1.808886ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50616]
I0515 22:49:40.218726  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/preemptor-pod: (2.437794ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50616]
E0515 22:49:40.304586  109070 event.go:249] Unable to write event: 'Patch http://127.0.0.1:32979/api/v1/namespaces/permit-plugin0bc52f0e-adde-4a94-9004-89ef0a977a19/events/test-pod.159efcd7cf9116b7: dial tcp 127.0.0.1:32979: connect: connection refused' (may retry after sleeping)
I0515 22:49:40.318115  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/preemptor-pod: (1.974177ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50616]
I0515 22:49:40.418037  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/preemptor-pod: (1.858932ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50616]
I0515 22:49:40.517646  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/preemptor-pod: (1.540643ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50616]
I0515 22:49:40.625130  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/preemptor-pod: (8.815031ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50616]
I0515 22:49:40.718141  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/preemptor-pod: (1.908061ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50616]
I0515 22:49:40.766753  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/preemptor-pod
I0515 22:49:40.766798  109070 scheduler.go:452] Attempting to schedule pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/preemptor-pod
I0515 22:49:40.767045  109070 scheduler_binder.go:256] AssumePodVolumes for pod "preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/preemptor-pod", node "node1"
I0515 22:49:40.767065  109070 scheduler_binder.go:266] AssumePodVolumes for pod "preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/preemptor-pod", node "node1": all PVCs bound and nothing to do
I0515 22:49:40.767137  109070 factory.go:711] Attempting to bind preemptor-pod to node1
I0515 22:49:40.768675  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-3
I0515 22:49:40.768699  109070 scheduler.go:452] Attempting to schedule pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-3
I0515 22:49:40.768814  109070 factory.go:649] Unable to schedule preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-3: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0515 22:49:40.768868  109070 factory.go:720] Updating pod condition for preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-3 to (PodScheduled==False, Reason=Unschedulable)
I0515 22:49:40.770694  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/preemptor-pod/binding: (3.000764ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50616]
I0515 22:49:40.770864  109070 scheduler.go:589] pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/preemptor-pod is bound successfully on node node1, 1 nodes evaluated, 1 nodes were found feasible
I0515 22:49:40.772888  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-3: (2.952867ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50604]
I0515 22:49:40.773197  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-3: (2.205877ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50616]
I0515 22:49:40.773742  109070 wrap.go:47] PATCH /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events/ppod-3.159efce45e0a9714: (3.665768ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50628]
I0515 22:49:40.773780  109070 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0515 22:49:40.774667  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-4
I0515 22:49:40.774687  109070 scheduler.go:452] Attempting to schedule pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-4
I0515 22:49:40.774792  109070 factory.go:649] Unable to schedule preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-4: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0515 22:49:40.774841  109070 factory.go:720] Updating pod condition for preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-4 to (PodScheduled==False, Reason=Unschedulable)
I0515 22:49:40.776004  109070 wrap.go:47] POST /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events: (1.808326ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50616]
I0515 22:49:40.777284  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-4: (1.845335ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50626]
I0515 22:49:40.777907  109070 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0515 22:49:40.778096  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-4: (2.682073ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50630]
I0515 22:49:40.778304  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-5
I0515 22:49:40.778323  109070 scheduler.go:452] Attempting to schedule pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-5
I0515 22:49:40.778420  109070 factory.go:649] Unable to schedule preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-5: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0515 22:49:40.778475  109070 factory.go:720] Updating pod condition for preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-5 to (PodScheduled==False, Reason=Unschedulable)
I0515 22:49:40.780907  109070 wrap.go:47] PATCH /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events/ppod-4.159efce45f8f9a0f: (3.737899ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50616]
I0515 22:49:40.780973  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-5: (1.920598ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50630]
I0515 22:49:40.781262  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-5: (2.413179ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50626]
I0515 22:49:40.781348  109070 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0515 22:49:40.781717  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-6
I0515 22:49:40.781740  109070 scheduler.go:452] Attempting to schedule pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-6
I0515 22:49:40.781827  109070 factory.go:649] Unable to schedule preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-6: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0515 22:49:40.781869  109070 factory.go:720] Updating pod condition for preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-6 to (PodScheduled==False, Reason=Unschedulable)
I0515 22:49:40.783326  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-6: (1.007616ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50632]
I0515 22:49:40.783653  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-6: (1.629088ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50630]
I0515 22:49:40.783870  109070 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0515 22:49:40.783989  109070 wrap.go:47] PATCH /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events/ppod-5.159efce461317020: (2.104373ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50616]
I0515 22:49:40.784016  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-10
I0515 22:49:40.784036  109070 scheduler.go:452] Attempting to schedule pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-10
I0515 22:49:40.784128  109070 factory.go:649] Unable to schedule preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-10: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0515 22:49:40.784165  109070 factory.go:720] Updating pod condition for preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-10 to (PodScheduled==False, Reason=Unschedulable)
I0515 22:49:40.786541  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-10: (1.562458ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50632]
I0515 22:49:40.787070  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-10: (2.052525ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50630]
I0515 22:49:40.787311  109070 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0515 22:49:40.787512  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-11
I0515 22:49:40.787534  109070 scheduler.go:452] Attempting to schedule pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-11
I0515 22:49:40.787579  109070 wrap.go:47] PATCH /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events/ppod-6.159efce461bcb1f1: (2.238203ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50634]
I0515 22:49:40.787642  109070 factory.go:649] Unable to schedule preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-11: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0515 22:49:40.787681  109070 factory.go:720] Updating pod condition for preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-11 to (PodScheduled==False, Reason=Unschedulable)
I0515 22:49:40.790893  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-11: (2.903029ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50632]
I0515 22:49:40.791090  109070 wrap.go:47] PATCH /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events/ppod-10.159efce46404f00f: (2.726601ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50636]
I0515 22:49:40.791187  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-11: (3.302961ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50630]
I0515 22:49:40.791468  109070 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0515 22:49:40.791757  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-12
I0515 22:49:40.791775  109070 scheduler.go:452] Attempting to schedule pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-12
I0515 22:49:40.791959  109070 factory.go:649] Unable to schedule preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-12: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0515 22:49:40.792006  109070 factory.go:720] Updating pod condition for preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-12 to (PodScheduled==False, Reason=Unschedulable)
I0515 22:49:40.794006  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-12: (1.356744ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50638]
I0515 22:49:40.794433  109070 wrap.go:47] PATCH /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events/ppod-11.159efce465223e6f: (2.465527ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50630]
I0515 22:49:40.795180  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-12: (2.557917ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50632]
I0515 22:49:40.795528  109070 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0515 22:49:40.795778  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-14
I0515 22:49:40.795801  109070 scheduler.go:452] Attempting to schedule pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-14
I0515 22:49:40.795885  109070 factory.go:649] Unable to schedule preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-14: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0515 22:49:40.795929  109070 factory.go:720] Updating pod condition for preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-14 to (PodScheduled==False, Reason=Unschedulable)
I0515 22:49:40.796779  109070 wrap.go:47] PATCH /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events/ppod-12.159efce465a0be70: (1.731387ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50630]
I0515 22:49:40.798108  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-14: (1.623336ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50638]
I0515 22:49:40.799991  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-14: (3.731576ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50632]
I0515 22:49:40.800333  109070 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0515 22:49:40.801065  109070 wrap.go:47] PATCH /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events/ppod-14.159efce466927884: (2.147023ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50630]
I0515 22:49:40.801555  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-15
I0515 22:49:40.801574  109070 scheduler.go:452] Attempting to schedule pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-15
I0515 22:49:40.801683  109070 factory.go:649] Unable to schedule preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-15: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0515 22:49:40.801724  109070 factory.go:720] Updating pod condition for preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-15 to (PodScheduled==False, Reason=Unschedulable)
I0515 22:49:40.806519  109070 wrap.go:47] PATCH /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events/ppod-15.159efce467a4e059: (3.806158ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50640]
I0515 22:49:40.806735  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-15: (3.237544ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50638]
I0515 22:49:40.807023  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-15: (3.207458ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50632]
I0515 22:49:40.807235  109070 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0515 22:49:40.807387  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-16
I0515 22:49:40.807438  109070 scheduler.go:452] Attempting to schedule pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-16
I0515 22:49:40.807742  109070 factory.go:649] Unable to schedule preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-16: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0515 22:49:40.807922  109070 factory.go:720] Updating pod condition for preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-16 to (PodScheduled==False, Reason=Unschedulable)
I0515 22:49:40.811028  109070 wrap.go:47] PATCH /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events/ppod-16.159efce4687f70a1: (2.156518ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50642]
I0515 22:49:40.811280  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-16: (2.645743ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50640]
I0515 22:49:40.811792  109070 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0515 22:49:40.811959  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-17
I0515 22:49:40.812023  109070 scheduler.go:452] Attempting to schedule pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-17
I0515 22:49:40.812144  109070 factory.go:649] Unable to schedule preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-17: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0515 22:49:40.812239  109070 factory.go:720] Updating pod condition for preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-17 to (PodScheduled==False, Reason=Unschedulable)
I0515 22:49:40.815340  109070 wrap.go:47] PATCH /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events/ppod-17.159efce4693ef423: (2.233699ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50644]
I0515 22:49:40.815869  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-17: (3.397351ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50642]
I0515 22:49:40.816144  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-17: (3.502546ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50640]
I0515 22:49:40.816806  109070 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0515 22:49:40.818233  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/preemptor-pod: (1.359075ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50640]
I0515 22:49:40.818617  109070 preemption_test.go:583] Check unschedulable pods still exist and were never scheduled...
I0515 22:49:40.820242  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-0: (1.456278ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50644]
I0515 22:49:40.821113  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-18
I0515 22:49:40.821151  109070 scheduler.go:452] Attempting to schedule pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-18
I0515 22:49:40.821274  109070 factory.go:649] Unable to schedule preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-18: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0515 22:49:40.821317  109070 factory.go:720] Updating pod condition for preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-18 to (PodScheduled==False, Reason=Unschedulable)
I0515 22:49:40.821722  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-1: (1.024809ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50644]
I0515 22:49:40.824668  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-2: (1.903947ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50652]
I0515 22:49:40.825911  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-18: (3.652006ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50642]
I0515 22:49:40.826279  109070 wrap.go:47] PATCH /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events/ppod-18.159efce469bf26ed: (3.581984ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50648]
I0515 22:49:40.827565  109070 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0515 22:49:40.827970  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-3: (2.908876ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50652]
I0515 22:49:40.829011  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-16: (20.357461ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50638]
I0515 22:49:40.829932  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-18: (7.648293ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50644]
I0515 22:49:40.831657  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-4: (1.309319ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50642]
I0515 22:49:40.833691  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-5: (1.309811ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50638]
I0515 22:49:40.835396  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-6: (1.163554ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50638]
I0515 22:49:40.836535  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-20
I0515 22:49:40.836612  109070 scheduler.go:452] Attempting to schedule pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-20
I0515 22:49:40.836836  109070 factory.go:649] Unable to schedule preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-20: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0515 22:49:40.836947  109070 factory.go:720] Updating pod condition for preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-20 to (PodScheduled==False, Reason=Unschedulable)
I0515 22:49:40.840326  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-20: (2.974058ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50650]
I0515 22:49:40.840537  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-20: (2.439901ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50654]
I0515 22:49:40.840677  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-7: (4.722821ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50638]
I0515 22:49:40.841121  109070 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0515 22:49:40.841532  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-21
I0515 22:49:40.841560  109070 scheduler.go:452] Attempting to schedule pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-21
I0515 22:49:40.841661  109070 factory.go:649] Unable to schedule preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-21: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0515 22:49:40.841721  109070 factory.go:720] Updating pod condition for preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-21 to (PodScheduled==False, Reason=Unschedulable)
I0515 22:49:40.843440  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-21: (1.331346ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50654]
I0515 22:49:40.843656  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-21: (1.733486ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50650]
I0515 22:49:40.843864  109070 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0515 22:49:40.844145  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-22
I0515 22:49:40.844165  109070 scheduler.go:452] Attempting to schedule pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-22
I0515 22:49:40.844251  109070 factory.go:649] Unable to schedule preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-22: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0515 22:49:40.844290  109070 factory.go:720] Updating pod condition for preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-22 to (PodScheduled==False, Reason=Unschedulable)
I0515 22:49:40.846219  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-22: (1.433873ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50654]
I0515 22:49:40.846714  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-22: (2.210392ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50650]
I0515 22:49:40.847102  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-8: (3.93083ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50658]
I0515 22:49:40.847777  109070 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0515 22:49:40.848324  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-24
I0515 22:49:40.848348  109070 scheduler.go:452] Attempting to schedule pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-24
I0515 22:49:40.848589  109070 factory.go:649] Unable to schedule preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-24: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0515 22:49:40.848655  109070 factory.go:720] Updating pod condition for preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-24 to (PodScheduled==False, Reason=Unschedulable)
I0515 22:49:40.849824  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-9: (1.798949ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50658]
I0515 22:49:40.852064  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-24: (2.782635ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50650]
I0515 22:49:40.852461  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-24: (3.461842ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50654]
I0515 22:49:40.852928  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-10: (2.575609ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50658]
I0515 22:49:40.853847  109070 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0515 22:49:40.854995  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-7
I0515 22:49:40.855027  109070 scheduler.go:452] Attempting to schedule pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-7
I0515 22:49:40.855184  109070 factory.go:649] Unable to schedule preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-7: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0515 22:49:40.855297  109070 factory.go:720] Updating pod condition for preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-7 to (PodScheduled==False, Reason=Unschedulable)
I0515 22:49:40.855799  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-11: (2.006323ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50658]
I0515 22:49:40.859271  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-7: (3.273822ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50650]
I0515 22:49:40.859767  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-7: (4.051258ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50654]
I0515 22:49:40.860143  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-12: (3.272726ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50658]
I0515 22:49:40.860618  109070 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0515 22:49:40.861071  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-8
I0515 22:49:40.861225  109070 scheduler.go:452] Attempting to schedule pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-8
I0515 22:49:40.861441  109070 factory.go:649] Unable to schedule preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-8: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0515 22:49:40.861648  109070 factory.go:720] Updating pod condition for preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-8 to (PodScheduled==False, Reason=Unschedulable)
I0515 22:49:40.866196  109070 wrap.go:47] PATCH /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events/ppod-20.159efce46b99d291: (26.155329ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50656]
I0515 22:49:40.866951  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-8: (4.932559ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50654]
I0515 22:49:40.867255  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-8: (5.350302ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50650]
I0515 22:49:40.868728  109070 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0515 22:49:40.869146  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-27
I0515 22:49:40.869166  109070 scheduler.go:452] Attempting to schedule pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-27
I0515 22:49:40.869254  109070 factory.go:649] Unable to schedule preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-27: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0515 22:49:40.869295  109070 factory.go:720] Updating pod condition for preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-27 to (PodScheduled==False, Reason=Unschedulable)
I0515 22:49:40.874139  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-27: (3.395603ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50650]
I0515 22:49:40.874694  109070 wrap.go:47] PATCH /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events/ppod-21.159efce46c23955d: (3.709154ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50654]
I0515 22:49:40.875673  109070 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0515 22:49:40.875887  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-9
I0515 22:49:40.875922  109070 scheduler.go:452] Attempting to schedule pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-9
I0515 22:49:40.876021  109070 factory.go:649] Unable to schedule preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-9: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0515 22:49:40.876065  109070 factory.go:720] Updating pod condition for preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-9 to (PodScheduled==False, Reason=Unschedulable)
I0515 22:49:40.878915  109070 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0515 22:49:40.879412  109070 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0515 22:49:40.879438  109070 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0515 22:49:40.879468  109070 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0515 22:49:40.879579  109070 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0515 22:49:40.880104  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-9: (3.42541ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50662]
I0515 22:49:40.880744  109070 wrap.go:47] PATCH /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events/ppod-22.159efce46c8a56cf: (3.442979ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50654]
I0515 22:49:40.883675  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-27: (11.150623ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50660]
I0515 22:49:40.884289  109070 wrap.go:47] PATCH /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events/ppod-24.159efce46d3c82d0: (2.616927ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50654]
I0515 22:49:40.884809  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-13: (3.779241ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50658]
I0515 22:49:40.885110  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-9: (8.067661ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50650]
I0515 22:49:40.885416  109070 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0515 22:49:40.887711  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-29
I0515 22:49:40.887753  109070 scheduler.go:452] Attempting to schedule pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-29
I0515 22:49:40.887857  109070 factory.go:649] Unable to schedule preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-29: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0515 22:49:40.887927  109070 factory.go:720] Updating pod condition for preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-29 to (PodScheduled==False, Reason=Unschedulable)
I0515 22:49:40.889656  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-14: (2.380156ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50650]
I0515 22:49:40.890163  109070 wrap.go:47] PATCH /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events/ppod-7.159efce46276e566: (3.836151ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50654]
I0515 22:49:40.895998  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-29: (5.041733ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50662]
I0515 22:49:40.896334  109070 wrap.go:47] PATCH /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events/ppod-8.159efce462e3f80e: (4.094665ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50654]
I0515 22:49:40.897091  109070 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0515 22:49:40.896736  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-29: (6.101828ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50664]
I0515 22:49:40.897307  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-30
I0515 22:49:40.897323  109070 scheduler.go:452] Attempting to schedule pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-30
I0515 22:49:40.896866  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-15: (4.905535ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50650]
I0515 22:49:40.900210  109070 factory.go:649] Unable to schedule preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-30: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0515 22:49:40.900264  109070 factory.go:720] Updating pod condition for preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-30 to (PodScheduled==False, Reason=Unschedulable)
I0515 22:49:40.902798  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-30: (1.686901ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50662]
I0515 22:49:40.903088  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-30: (2.436597ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50660]
I0515 22:49:40.903553  109070 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0515 22:49:40.904012  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-32
I0515 22:49:40.904039  109070 scheduler.go:452] Attempting to schedule pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-32
I0515 22:49:40.904131  109070 factory.go:649] Unable to schedule preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-32: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0515 22:49:40.904172  109070 factory.go:720] Updating pod condition for preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-32 to (PodScheduled==False, Reason=Unschedulable)
I0515 22:49:40.906248  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-32: (1.438597ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50660]
I0515 22:49:40.908796  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-32: (3.769248ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50662]
I0515 22:49:40.909166  109070 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0515 22:49:40.909501  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-33
I0515 22:49:40.909524  109070 scheduler.go:452] Attempting to schedule pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-33
I0515 22:49:40.909620  109070 factory.go:649] Unable to schedule preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-33: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0515 22:49:40.909664  109070 factory.go:720] Updating pod condition for preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-33 to (PodScheduled==False, Reason=Unschedulable)
I0515 22:49:40.909969  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-16: (1.626848ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50660]
I0515 22:49:40.911588  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-33: (1.248155ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50668]
I0515 22:49:40.912692  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-17: (2.038722ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50660]
I0515 22:49:40.914230  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-18: (1.112906ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50660]
I0515 22:49:40.916192  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-19: (1.319148ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50660]
I0515 22:49:40.916844  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-33: (1.217707ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50662]
I0515 22:49:40.917181  109070 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0515 22:49:40.917839  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-34
I0515 22:49:40.917876  109070 scheduler.go:452] Attempting to schedule pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-34
I0515 22:49:40.917971  109070 factory.go:649] Unable to schedule preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-34: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0515 22:49:40.918024  109070 factory.go:720] Updating pod condition for preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-34 to (PodScheduled==False, Reason=Unschedulable)
I0515 22:49:40.920527  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-34: (1.93142ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50662]
I0515 22:49:40.920961  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-20: (3.628136ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50660]
I0515 22:49:40.921384  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-34: (3.008614ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50668]
I0515 22:49:40.922127  109070 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0515 22:49:40.922637  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-38
I0515 22:49:40.922703  109070 scheduler.go:452] Attempting to schedule pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-38
I0515 22:49:40.922893  109070 factory.go:649] Unable to schedule preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-38: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0515 22:49:40.922998  109070 factory.go:720] Updating pod condition for preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-38 to (PodScheduled==False, Reason=Unschedulable)
I0515 22:49:40.924999  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-38: (1.300061ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50662]
I0515 22:49:40.925334  109070 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0515 22:49:40.925407  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-38: (2.078271ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50668]
I0515 22:49:40.925743  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-13
I0515 22:49:40.925811  109070 scheduler.go:452] Attempting to schedule pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-13
I0515 22:49:40.925964  109070 factory.go:649] Unable to schedule preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-13: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0515 22:49:40.926080  109070 factory.go:720] Updating pod condition for preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-13 to (PodScheduled==False, Reason=Unschedulable)
I0515 22:49:40.927746  109070 wrap.go:47] PATCH /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events/ppod-27.159efce46e9d81e6: (21.784315ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50666]
I0515 22:49:40.928241  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-13: (1.765559ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50662]
I0515 22:49:40.928487  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-21: (6.043638ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50660]
I0515 22:49:40.928666  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-13: (2.343865ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50668]
I0515 22:49:40.929771  109070 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0515 22:49:40.930137  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-39
I0515 22:49:40.930205  109070 scheduler.go:452] Attempting to schedule pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-39
I0515 22:49:40.930380  109070 factory.go:649] Unable to schedule preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-39: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0515 22:49:40.930434  109070 factory.go:720] Updating pod condition for preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-39 to (PodScheduled==False, Reason=Unschedulable)
I0515 22:49:40.932785  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-22: (1.729145ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50660]
I0515 22:49:40.933180  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-39: (2.538663ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50662]
I0515 22:49:40.934126  109070 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0515 22:49:40.936030  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-42
I0515 22:49:40.936059  109070 scheduler.go:452] Attempting to schedule pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-42
I0515 22:49:40.936165  109070 factory.go:649] Unable to schedule preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-42: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0515 22:49:40.936245  109070 factory.go:720] Updating pod condition for preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-42 to (PodScheduled==False, Reason=Unschedulable)
I0515 22:49:40.937573  109070 wrap.go:47] PATCH /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events/ppod-9.159efce4636f8e92: (6.394496ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50666]
I0515 22:49:40.937587  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-23: (4.224024ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50660]
I0515 22:49:40.938061  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-39: (4.853019ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50670]
I0515 22:49:40.941154  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-42: (2.522828ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50660]
I0515 22:49:40.941529  109070 wrap.go:47] PATCH /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events/ppod-29.159efce46fa8138b: (2.58186ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50670]
I0515 22:49:40.941644  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-42: (4.833379ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50662]
I0515 22:49:40.942149  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-24: (3.432118ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50666]
I0515 22:49:40.942890  109070 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0515 22:49:40.943645  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-43
I0515 22:49:40.943720  109070 scheduler.go:452] Attempting to schedule pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-43
I0515 22:49:40.943888  109070 factory.go:649] Unable to schedule preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-43: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0515 22:49:40.944012  109070 factory.go:720] Updating pod condition for preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-43 to (PodScheduled==False, Reason=Unschedulable)
I0515 22:49:40.946464  109070 wrap.go:47] PATCH /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events/ppod-30.159efce4700eba02: (3.959515ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50670]
I0515 22:49:40.948317  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-43: (2.499249ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50674]
I0515 22:49:40.948824  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-25: (5.77179ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50660]
I0515 22:49:40.949294  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-43: (3.262256ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50662]
I0515 22:49:40.949705  109070 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0515 22:49:40.950556  109070 wrap.go:47] PATCH /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events/ppod-32.159efce470c86cee: (2.5651ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50670]
I0515 22:49:40.952299  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-26: (1.585747ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50660]
I0515 22:49:40.954103  109070 wrap.go:47] PATCH /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/events/ppod-33.159efce4712ba76a: (2.498572ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50674]
I0515 22:49:40.955265  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-27: (2.406234ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50660]
I0515 22:49:40.957580  109070 scheduling_queue.go:795] About to try and schedule pod preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-45
I0515 22:49:40.957611  109070 scheduler.go:452] Attempting to schedule pod: preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-45
I0515 22:49:40.957806  109070 factory.go:649] Unable to schedule preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-45: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0515 22:49:40.957882  109070 factory.go:720] Updating pod condition for preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/ppod-45 to (PodScheduled==False, Reason=Unschedulable)
I0515 22:49:40.959660  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-28: (3.713233ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50660]
I0515 22:49:40.960755  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-45: (1.818578ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50672]
I0515 22:49:40.961765  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-29: (1.58175ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50660]
I0515 22:49:40.962052  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4d6a-9f35-7c124e0fd860/pods/ppod-45: (2.97975ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:50662]
I0515 22:49:40.962524  109070 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0515 22:49:40.964734  109070 wrap.go:47] GET /api/v1/namespaces/preemption-race047fa48c-a397-4