PR roycaihw: Promote Publish CRD OpenAPI to beta
Result FAILURE
Tests 3 failed / 1315 succeeded
Started 2019-05-16 00:17
Elapsed 29m51s
Revision
Builder gke-prow-containerd-pool-99179761-k7t7
Refs master:aaec77a9
77825:eb35525d
pod b497a3f2-776f-11e9-8ee0-0a580a6c0dad
infra-commit 3350b5955
repo k8s.io/kubernetes
repo-commit 4e50dcd2deba45ec3d7dcdfb6a7dffcc50c9bb88
repos {u'k8s.io/kubernetes': u'master:aaec77a94b67878ca1bdd884f2778f4388d203f2,77825:eb35525d4a9556bcd12f15950fdae3428c7bc1cd'}

Test Failures


k8s.io/kubernetes/test/integration/apiserver TestAPICRDProtobuf 0.00s

go test -v k8s.io/kubernetes/test/integration/apiserver -run TestAPICRDProtobuf$
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog/httplog.go:204 +0xc8
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog/httplog.go:183 +0x35
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters/timeout.go:209 +0xa6
/usr/local/go/src/net/http/server.go:2007 +0x213
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz/healthz.go:185 +0x43d
/usr/local/go/src/net/http/server.go:1995 +0x44
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux/pathrecorder.go:241 +0x548
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux/pathrecorder.go:234 +0x85
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/handler.go:154 +0x6c3
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters/authorization.go:64 +0x4fa
/usr/local/go/src/net/http/server.go:1995 +0x44
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters/maxinflight.go:160 +0x5c7
/usr/local/go/src/net/http/server.go:1995 +0x44
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters/impersonation.go:50 +0x1ec3
/usr/local/go/src/net/http/server.go:1995 +0x44
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters/authentication.go:81 +0x527
/usr/local/go/src/net/http/server.go:1995 +0x44
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters/timeout.go:111 +0xb3
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters/timeout.go:98 +0x1b1
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:69 +0x7b
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:51 +0x82
/usr/local/go/src/runtime/panic.go:522 +0x1b5
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiextensions-apiserver/pkg/controller/openapi/aggregator.go:26 +0x4a
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiextensions-apiserver/pkg/controller/openapi/controller.go:219 +0x20e
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiextensions-apiserver/pkg/controller/openapi/controller.go:111 +0x476
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiextensions-apiserver/pkg/apiserver/apiserver.go:217 +0x1f9
panic: runtime error: invalid memory address or nil pointer dereference
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:58 +0x105
/usr/local/go/src/runtime/panic.go:522 +0x1b5
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiextensions-apiserver/pkg/controller/openapi/aggregator.go:26 +0x4a
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiextensions-apiserver/pkg/controller/openapi/controller.go:219 +0x20e
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiextensions-apiserver/pkg/controller/openapi/controller.go:111 +0x476
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiextensions-apiserver/pkg/apiserver/apiserver.go:217 +0x1f9
				from junit_d431ed5f68ae4ddf888439fb96b687a923412204_20190516-003326.xml
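The frames above run through the apiserver request filters into the CRD OpenAPI aggregation path (apiextensions-apiserver/pkg/controller/openapi/aggregator.go:26 via controller.go:219), where a nil pointer is dereferenced. Below is a minimal sketch of that failure mode and the kind of guard that avoids it, assuming the panic comes from merging a nil per-CRD spec; crdSpec and mergeSpecs are hypothetical stand-ins for illustration, not the actual aggregator API.

package main

import "fmt"

// crdSpec is a hypothetical stand-in for a per-CRD OpenAPI document.
type crdSpec struct {
	Definitions map[string]interface{}
}

// mergeSpecs merges extra into base. Without the nil guard, passing a nil
// spec would dereference a nil pointer, matching the panic in the trace.
func mergeSpecs(base, extra *crdSpec) (*crdSpec, error) {
	if base == nil || extra == nil {
		return nil, fmt.Errorf("cannot merge nil OpenAPI spec")
	}
	for k, v := range extra.Definitions {
		base.Definitions[k] = v
	}
	return base, nil
}

func main() {
	base := &crdSpec{Definitions: map[string]interface{}{}}
	if _, err := mergeSpecs(base, nil); err != nil {
		fmt.Println("merge rejected:", err) // guarded error instead of a runtime panic
	}
}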



k8s.io/kubernetes/test/integration/scheduler TestPreemptionRaces 31s

go test -v k8s.io/kubernetes/test/integration/scheduler -run TestPreemptionRaces$
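The command above reproduces the failure as the CI job ran it. Because this test specifically exercises preemption races, a common local follow-up is to rerun it under Go's race detector; a sketch, assuming a kubernetes/kubernetes checkout on GOPATH and an etcd reachable at 127.0.0.1:2379 as shown in the log below:

go test -v -race -timeout 600s k8s.io/kubernetes/test/integration/scheduler -run TestPreemptionRaces$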
I0516 00:39:41.467095  108888 services.go:33] Network range for service cluster IPs is unspecified. Defaulting to {10.0.0.0 ffffff00}.
I0516 00:39:41.467152  108888 services.go:45] Setting service IP to "10.0.0.1" (read-write).
I0516 00:39:41.467493  108888 master.go:277] Node port range unspecified. Defaulting to 30000-32767.
I0516 00:39:41.467511  108888 master.go:233] Using reconciler: 
I0516 00:39:41.509086  108888 storage_factory.go:285] storing podtemplates in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"513c026f-215f-45e9-8b35-cf30867c5717", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0516 00:39:41.509505  108888 client.go:354] parsed scheme: ""
I0516 00:39:41.509537  108888 client.go:354] scheme "" not registered, fallback to default scheme
I0516 00:39:41.509764  108888 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0516 00:39:41.522078  108888 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0516 00:39:41.522618  108888 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0516 00:39:41.522872  108888 store.go:1320] Monitoring podtemplates count at <storage-prefix>//podtemplates
I0516 00:39:41.522969  108888 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"513c026f-215f-45e9-8b35-cf30867c5717", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0516 00:39:41.523058  108888 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0516 00:39:41.523250  108888 reflector.go:160] Listing and watching *core.PodTemplate from storage/cacher.go:/podtemplates
I0516 00:39:41.523385  108888 client.go:354] parsed scheme: ""
I0516 00:39:41.523407  108888 client.go:354] scheme "" not registered, fallback to default scheme
I0516 00:39:41.523457  108888 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0516 00:39:41.523609  108888 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0516 00:39:41.523892  108888 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0516 00:39:41.524132  108888 store.go:1320] Monitoring events count at <storage-prefix>//events
I0516 00:39:41.524155  108888 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0516 00:39:41.524179  108888 storage_factory.go:285] storing limitranges in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"513c026f-215f-45e9-8b35-cf30867c5717", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0516 00:39:41.524261  108888 client.go:354] parsed scheme: ""
I0516 00:39:41.524280  108888 client.go:354] scheme "" not registered, fallback to default scheme
I0516 00:39:41.524317  108888 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0516 00:39:41.524368  108888 watch_cache.go:405] Replace watchCache (rev: 23996) 
I0516 00:39:41.524369  108888 reflector.go:160] Listing and watching *core.Event from storage/cacher.go:/events
I0516 00:39:41.524605  108888 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0516 00:39:41.524978  108888 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0516 00:39:41.525126  108888 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0516 00:39:41.525174  108888 store.go:1320] Monitoring limitranges count at <storage-prefix>//limitranges
I0516 00:39:41.525207  108888 storage_factory.go:285] storing resourcequotas in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"513c026f-215f-45e9-8b35-cf30867c5717", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0516 00:39:41.525319  108888 client.go:354] parsed scheme: ""
I0516 00:39:41.525330  108888 client.go:354] scheme "" not registered, fallback to default scheme
I0516 00:39:41.525379  108888 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0516 00:39:41.525428  108888 watch_cache.go:405] Replace watchCache (rev: 23996) 
I0516 00:39:41.525461  108888 reflector.go:160] Listing and watching *core.LimitRange from storage/cacher.go:/limitranges
I0516 00:39:41.525758  108888 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0516 00:39:41.526042  108888 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0516 00:39:41.526153  108888 store.go:1320] Monitoring resourcequotas count at <storage-prefix>//resourcequotas
I0516 00:39:41.526301  108888 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0516 00:39:41.526428  108888 reflector.go:160] Listing and watching *core.ResourceQuota from storage/cacher.go:/resourcequotas
I0516 00:39:41.526440  108888 storage_factory.go:285] storing secrets in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"513c026f-215f-45e9-8b35-cf30867c5717", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0516 00:39:41.526624  108888 client.go:354] parsed scheme: ""
I0516 00:39:41.526676  108888 client.go:354] scheme "" not registered, fallback to default scheme
I0516 00:39:41.526688  108888 watch_cache.go:405] Replace watchCache (rev: 23996) 
I0516 00:39:41.526796  108888 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0516 00:39:41.526898  108888 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0516 00:39:41.527253  108888 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0516 00:39:41.527380  108888 store.go:1320] Monitoring secrets count at <storage-prefix>//secrets
I0516 00:39:41.528183  108888 watch_cache.go:405] Replace watchCache (rev: 23996) 
I0516 00:39:41.528250  108888 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0516 00:39:41.528381  108888 reflector.go:160] Listing and watching *core.Secret from storage/cacher.go:/secrets
I0516 00:39:41.528406  108888 storage_factory.go:285] storing persistentvolumes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"513c026f-215f-45e9-8b35-cf30867c5717", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0516 00:39:41.531113  108888 client.go:354] parsed scheme: ""
I0516 00:39:41.531168  108888 client.go:354] scheme "" not registered, fallback to default scheme
I0516 00:39:41.531281  108888 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0516 00:39:41.531376  108888 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0516 00:39:41.531831  108888 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0516 00:39:41.532031  108888 store.go:1320] Monitoring persistentvolumes count at <storage-prefix>//persistentvolumes
I0516 00:39:41.532354  108888 storage_factory.go:285] storing persistentvolumeclaims in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"513c026f-215f-45e9-8b35-cf30867c5717", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0516 00:39:41.532474  108888 client.go:354] parsed scheme: ""
I0516 00:39:41.532512  108888 client.go:354] scheme "" not registered, fallback to default scheme
I0516 00:39:41.532565  108888 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0516 00:39:41.532658  108888 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0516 00:39:41.532701  108888 reflector.go:160] Listing and watching *core.PersistentVolume from storage/cacher.go:/persistentvolumes
I0516 00:39:41.533087  108888 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0516 00:39:41.533497  108888 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0516 00:39:41.533651  108888 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0516 00:39:41.533681  108888 store.go:1320] Monitoring persistentvolumeclaims count at <storage-prefix>//persistentvolumeclaims
I0516 00:39:41.533735  108888 reflector.go:160] Listing and watching *core.PersistentVolumeClaim from storage/cacher.go:/persistentvolumeclaims
I0516 00:39:41.534109  108888 storage_factory.go:285] storing configmaps in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"513c026f-215f-45e9-8b35-cf30867c5717", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0516 00:39:41.534279  108888 watch_cache.go:405] Replace watchCache (rev: 23998) 
I0516 00:39:41.534281  108888 client.go:354] parsed scheme: ""
I0516 00:39:41.534422  108888 client.go:354] scheme "" not registered, fallback to default scheme
I0516 00:39:41.534498  108888 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0516 00:39:41.534550  108888 watch_cache.go:405] Replace watchCache (rev: 23998) 
I0516 00:39:41.534670  108888 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0516 00:39:41.534766  108888 watch_cache.go:405] Replace watchCache (rev: 23998) 
I0516 00:39:41.535061  108888 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0516 00:39:41.535187  108888 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0516 00:39:41.535284  108888 store.go:1320] Monitoring configmaps count at <storage-prefix>//configmaps
I0516 00:39:41.535459  108888 reflector.go:160] Listing and watching *core.ConfigMap from storage/cacher.go:/configmaps
I0516 00:39:41.536524  108888 watch_cache.go:405] Replace watchCache (rev: 23998) 
I0516 00:39:41.536735  108888 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"513c026f-215f-45e9-8b35-cf30867c5717", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0516 00:39:41.536942  108888 client.go:354] parsed scheme: ""
I0516 00:39:41.537001  108888 client.go:354] scheme "" not registered, fallback to default scheme
I0516 00:39:41.537122  108888 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0516 00:39:41.537261  108888 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0516 00:39:41.537652  108888 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0516 00:39:41.537896  108888 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0516 00:39:41.538126  108888 store.go:1320] Monitoring namespaces count at <storage-prefix>//namespaces
I0516 00:39:41.539013  108888 reflector.go:160] Listing and watching *core.Namespace from storage/cacher.go:/namespaces
I0516 00:39:41.546898  108888 watch_cache.go:405] Replace watchCache (rev: 23998) 
I0516 00:39:41.552017  108888 storage_factory.go:285] storing endpoints in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"513c026f-215f-45e9-8b35-cf30867c5717", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0516 00:39:41.552790  108888 client.go:354] parsed scheme: ""
I0516 00:39:41.552850  108888 client.go:354] scheme "" not registered, fallback to default scheme
I0516 00:39:41.552958  108888 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0516 00:39:41.553121  108888 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0516 00:39:41.553591  108888 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0516 00:39:41.553682  108888 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0516 00:39:41.553874  108888 store.go:1320] Monitoring endpoints count at <storage-prefix>//services/endpoints
I0516 00:39:41.553898  108888 reflector.go:160] Listing and watching *core.Endpoints from storage/cacher.go:/services/endpoints
I0516 00:39:41.554175  108888 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"513c026f-215f-45e9-8b35-cf30867c5717", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0516 00:39:41.554284  108888 client.go:354] parsed scheme: ""
I0516 00:39:41.554310  108888 client.go:354] scheme "" not registered, fallback to default scheme
I0516 00:39:41.554371  108888 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0516 00:39:41.554473  108888 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0516 00:39:41.554755  108888 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0516 00:39:41.555024  108888 watch_cache.go:405] Replace watchCache (rev: 23998) 
I0516 00:39:41.555188  108888 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0516 00:39:41.555439  108888 store.go:1320] Monitoring nodes count at <storage-prefix>//minions
I0516 00:39:41.555515  108888 reflector.go:160] Listing and watching *core.Node from storage/cacher.go:/minions
I0516 00:39:41.555904  108888 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"513c026f-215f-45e9-8b35-cf30867c5717", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0516 00:39:41.556943  108888 watch_cache.go:405] Replace watchCache (rev: 23998) 
I0516 00:39:41.557663  108888 client.go:354] parsed scheme: ""
I0516 00:39:41.557684  108888 client.go:354] scheme "" not registered, fallback to default scheme
I0516 00:39:41.557720  108888 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0516 00:39:41.557775  108888 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0516 00:39:41.558313  108888 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0516 00:39:41.558446  108888 store.go:1320] Monitoring pods count at <storage-prefix>//pods
I0516 00:39:41.558488  108888 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0516 00:39:41.558636  108888 storage_factory.go:285] storing serviceaccounts in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"513c026f-215f-45e9-8b35-cf30867c5717", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0516 00:39:41.558678  108888 reflector.go:160] Listing and watching *core.Pod from storage/cacher.go:/pods
I0516 00:39:41.558712  108888 client.go:354] parsed scheme: ""
I0516 00:39:41.558727  108888 client.go:354] scheme "" not registered, fallback to default scheme
I0516 00:39:41.558756  108888 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0516 00:39:41.558865  108888 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0516 00:39:41.560049  108888 watch_cache.go:405] Replace watchCache (rev: 23998) 
I0516 00:39:41.560103  108888 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0516 00:39:41.560195  108888 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0516 00:39:41.560236  108888 store.go:1320] Monitoring serviceaccounts count at <storage-prefix>//serviceaccounts
I0516 00:39:41.560273  108888 reflector.go:160] Listing and watching *core.ServiceAccount from storage/cacher.go:/serviceaccounts
I0516 00:39:41.560377  108888 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"513c026f-215f-45e9-8b35-cf30867c5717", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0516 00:39:41.560504  108888 client.go:354] parsed scheme: ""
I0516 00:39:41.560541  108888 client.go:354] scheme "" not registered, fallback to default scheme
I0516 00:39:41.560584  108888 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0516 00:39:41.560647  108888 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0516 00:39:41.561077  108888 watch_cache.go:405] Replace watchCache (rev: 23998) 
I0516 00:39:41.561539  108888 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0516 00:39:41.561691  108888 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0516 00:39:41.561778  108888 store.go:1320] Monitoring services count at <storage-prefix>//services/specs
I0516 00:39:41.561806  108888 reflector.go:160] Listing and watching *core.Service from storage/cacher.go:/services/specs
I0516 00:39:41.561815  108888 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"513c026f-215f-45e9-8b35-cf30867c5717", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0516 00:39:41.562027  108888 client.go:354] parsed scheme: ""
I0516 00:39:41.562049  108888 client.go:354] scheme "" not registered, fallback to default scheme
I0516 00:39:41.562080  108888 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0516 00:39:41.562128  108888 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0516 00:39:41.562731  108888 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0516 00:39:41.562813  108888 client.go:354] parsed scheme: ""
I0516 00:39:41.562827  108888 client.go:354] scheme "" not registered, fallback to default scheme
I0516 00:39:41.562863  108888 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0516 00:39:41.562892  108888 watch_cache.go:405] Replace watchCache (rev: 23998) 
I0516 00:39:41.563241  108888 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0516 00:39:41.563408  108888 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0516 00:39:41.564490  108888 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0516 00:39:41.564663  108888 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0516 00:39:41.564672  108888 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"513c026f-215f-45e9-8b35-cf30867c5717", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0516 00:39:41.564731  108888 client.go:354] parsed scheme: ""
I0516 00:39:41.564747  108888 client.go:354] scheme "" not registered, fallback to default scheme
I0516 00:39:41.564774  108888 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0516 00:39:41.564828  108888 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0516 00:39:41.565512  108888 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0516 00:39:41.565762  108888 store.go:1320] Monitoring replicationcontrollers count at <storage-prefix>//controllers
I0516 00:39:41.565796  108888 reflector.go:160] Listing and watching *core.ReplicationController from storage/cacher.go:/controllers
I0516 00:39:41.566083  108888 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0516 00:39:41.566894  108888 watch_cache.go:405] Replace watchCache (rev: 23998) 
I0516 00:39:41.579041  108888 master.go:417] Skipping disabled API group "auditregistration.k8s.io".
I0516 00:39:41.579076  108888 master.go:425] Enabling API group "authentication.k8s.io".
I0516 00:39:41.579092  108888 master.go:425] Enabling API group "authorization.k8s.io".
I0516 00:39:41.579282  108888 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"513c026f-215f-45e9-8b35-cf30867c5717", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0516 00:39:41.579417  108888 client.go:354] parsed scheme: ""
I0516 00:39:41.579429  108888 client.go:354] scheme "" not registered, fallback to default scheme
I0516 00:39:41.579475  108888 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0516 00:39:41.579565  108888 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0516 00:39:41.579952  108888 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0516 00:39:41.580097  108888 store.go:1320] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I0516 00:39:41.580253  108888 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"513c026f-215f-45e9-8b35-cf30867c5717", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0516 00:39:41.580326  108888 client.go:354] parsed scheme: ""
I0516 00:39:41.580346  108888 client.go:354] scheme "" not registered, fallback to default scheme
I0516 00:39:41.580372  108888 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0516 00:39:41.580395  108888 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0516 00:39:41.580448  108888 reflector.go:160] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I0516 00:39:41.581586  108888 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0516 00:39:41.582746  108888 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0516 00:39:41.582884  108888 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0516 00:39:41.583054  108888 store.go:1320] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I0516 00:39:41.583147  108888 watch_cache.go:405] Replace watchCache (rev: 24001) 
I0516 00:39:41.583313  108888 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"513c026f-215f-45e9-8b35-cf30867c5717", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0516 00:39:41.583427  108888 client.go:354] parsed scheme: ""
I0516 00:39:41.583455  108888 client.go:354] scheme "" not registered, fallback to default scheme
I0516 00:39:41.583518  108888 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0516 00:39:41.583546  108888 reflector.go:160] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I0516 00:39:41.583633  108888 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0516 00:39:41.584056  108888 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0516 00:39:41.584170  108888 store.go:1320] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I0516 00:39:41.584224  108888 master.go:425] Enabling API group "autoscaling".
I0516 00:39:41.584387  108888 storage_factory.go:285] storing jobs.batch in batch/v1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"513c026f-215f-45e9-8b35-cf30867c5717", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0516 00:39:41.584472  108888 client.go:354] parsed scheme: ""
I0516 00:39:41.584486  108888 client.go:354] scheme "" not registered, fallback to default scheme
I0516 00:39:41.584516  108888 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0516 00:39:41.584571  108888 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0516 00:39:41.584607  108888 reflector.go:160] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I0516 00:39:41.584807  108888 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0516 00:39:41.585103  108888 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0516 00:39:41.585228  108888 store.go:1320] Monitoring jobs.batch count at <storage-prefix>//jobs
I0516 00:39:41.585370  108888 storage_factory.go:285] storing cronjobs.batch in batch/v1beta1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"513c026f-215f-45e9-8b35-cf30867c5717", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0516 00:39:41.585433  108888 client.go:354] parsed scheme: ""
I0516 00:39:41.585442  108888 client.go:354] scheme "" not registered, fallback to default scheme
I0516 00:39:41.585473  108888 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0516 00:39:41.585487  108888 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0516 00:39:41.585491  108888 watch_cache.go:405] Replace watchCache (rev: 24001) 
I0516 00:39:41.585526  108888 reflector.go:160] Listing and watching *batch.Job from storage/cacher.go:/jobs
I0516 00:39:41.585545  108888 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0516 00:39:41.585814  108888 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0516 00:39:41.585867  108888 watch_cache.go:405] Replace watchCache (rev: 24001) 
I0516 00:39:41.585897  108888 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0516 00:39:41.585979  108888 store.go:1320] Monitoring cronjobs.batch count at <storage-prefix>//cronjobs
I0516 00:39:41.585996  108888 master.go:425] Enabling API group "batch".
I0516 00:39:41.586061  108888 reflector.go:160] Listing and watching *batch.CronJob from storage/cacher.go:/cronjobs
I0516 00:39:41.586128  108888 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"513c026f-215f-45e9-8b35-cf30867c5717", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0516 00:39:41.586190  108888 client.go:354] parsed scheme: ""
I0516 00:39:41.586199  108888 client.go:354] scheme "" not registered, fallback to default scheme
I0516 00:39:41.586228  108888 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0516 00:39:41.586327  108888 watch_cache.go:405] Replace watchCache (rev: 24001) 
I0516 00:39:41.586340  108888 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0516 00:39:41.586572  108888 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0516 00:39:41.586600  108888 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0516 00:39:41.586651  108888 store.go:1320] Monitoring certificatesigningrequests.certificates.k8s.io count at <storage-prefix>//certificatesigningrequests
I0516 00:39:41.586665  108888 master.go:425] Enabling API group "certificates.k8s.io".
I0516 00:39:41.586755  108888 watch_cache.go:405] Replace watchCache (rev: 24001) 
I0516 00:39:41.586783  108888 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"513c026f-215f-45e9-8b35-cf30867c5717", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0516 00:39:41.586833  108888 client.go:354] parsed scheme: ""
I0516 00:39:41.586843  108888 client.go:354] scheme "" not registered, fallback to default scheme
I0516 00:39:41.586878  108888 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0516 00:39:41.586941  108888 reflector.go:160] Listing and watching *certificates.CertificateSigningRequest from storage/cacher.go:/certificatesigningrequests
I0516 00:39:41.587089  108888 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0516 00:39:41.587324  108888 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0516 00:39:41.587424  108888 store.go:1320] Monitoring leases.coordination.k8s.io count at <storage-prefix>//leases
I0516 00:39:41.587563  108888 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"513c026f-215f-45e9-8b35-cf30867c5717", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0516 00:39:41.587632  108888 client.go:354] parsed scheme: ""
I0516 00:39:41.587642  108888 client.go:354] scheme "" not registered, fallback to default scheme
I0516 00:39:41.587672  108888 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0516 00:39:41.587723  108888 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0516 00:39:41.587756  108888 reflector.go:160] Listing and watching *coordination.Lease from storage/cacher.go:/leases
I0516 00:39:41.587943  108888 watch_cache.go:405] Replace watchCache (rev: 24002) 
I0516 00:39:41.587996  108888 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0516 00:39:41.588225  108888 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0516 00:39:41.588315  108888 store.go:1320] Monitoring leases.coordination.k8s.io count at <storage-prefix>//leases
I0516 00:39:41.588328  108888 master.go:425] Enabling API group "coordination.k8s.io".
I0516 00:39:41.588402  108888 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0516 00:39:41.588452  108888 reflector.go:160] Listing and watching *coordination.Lease from storage/cacher.go:/leases
I0516 00:39:41.588452  108888 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"513c026f-215f-45e9-8b35-cf30867c5717", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0516 00:39:41.588525  108888 client.go:354] parsed scheme: ""
I0516 00:39:41.588548  108888 client.go:354] scheme "" not registered, fallback to default scheme
I0516 00:39:41.588577  108888 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0516 00:39:41.588622  108888 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0516 00:39:41.588676  108888 watch_cache.go:405] Replace watchCache (rev: 24002) 
I0516 00:39:41.588850  108888 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0516 00:39:41.588926  108888 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0516 00:39:41.588953  108888 store.go:1320] Monitoring replicationcontrollers count at <storage-prefix>//controllers
I0516 00:39:41.589091  108888 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"513c026f-215f-45e9-8b35-cf30867c5717", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0516 00:39:41.589159  108888 client.go:354] parsed scheme: ""
I0516 00:39:41.589168  108888 client.go:354] scheme "" not registered, fallback to default scheme
I0516 00:39:41.589197  108888 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0516 00:39:41.589241  108888 reflector.go:160] Listing and watching *core.ReplicationController from storage/cacher.go:/controllers
I0516 00:39:41.589373  108888 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0516 00:39:41.589672  108888 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0516 00:39:41.589809  108888 store.go:1320] Monitoring daemonsets.apps count at <storage-prefix>//daemonsets
I0516 00:39:41.589992  108888 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"513c026f-215f-45e9-8b35-cf30867c5717", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0516 00:39:41.590061  108888 client.go:354] parsed scheme: ""
I0516 00:39:41.590076  108888 client.go:354] scheme "" not registered, fallback to default scheme
I0516 00:39:41.590108  108888 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0516 00:39:41.590141  108888 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0516 00:39:41.590184  108888 reflector.go:160] Listing and watching *apps.DaemonSet from storage/cacher.go:/daemonsets
I0516 00:39:41.590258  108888 watch_cache.go:405] Replace watchCache (rev: 24002) 
I0516 00:39:41.590344  108888 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0516 00:39:41.590425  108888 watch_cache.go:405] Replace watchCache (rev: 24002) 
I0516 00:39:41.591410  108888 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0516 00:39:41.591553  108888 store.go:1320] Monitoring deployments.apps count at <storage-prefix>//deployments
I0516 00:39:41.591597  108888 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0516 00:39:41.591700  108888 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"513c026f-215f-45e9-8b35-cf30867c5717", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0516 00:39:41.591767  108888 client.go:354] parsed scheme: ""
I0516 00:39:41.591781  108888 client.go:354] scheme "" not registered, fallback to default scheme
I0516 00:39:41.591816  108888 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0516 00:39:41.591879  108888 reflector.go:160] Listing and watching *apps.Deployment from storage/cacher.go:/deployments
I0516 00:39:41.592121  108888 watch_cache.go:405] Replace watchCache (rev: 24003) 
I0516 00:39:41.592691  108888 watch_cache.go:405] Replace watchCache (rev: 24003) 
I0516 00:39:41.594491  108888 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0516 00:39:41.595076  108888 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0516 00:39:41.595219  108888 store.go:1320] Monitoring ingresses.networking.k8s.io count at <storage-prefix>//ingress
I0516 00:39:41.595289  108888 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0516 00:39:41.595328  108888 reflector.go:160] Listing and watching *networking.Ingress from storage/cacher.go:/ingress
I0516 00:39:41.595381  108888 storage_factory.go:285] storing podsecuritypolicies.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"513c026f-215f-45e9-8b35-cf30867c5717", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0516 00:39:41.595451  108888 client.go:354] parsed scheme: ""
I0516 00:39:41.595460  108888 client.go:354] scheme "" not registered, fallback to default scheme
I0516 00:39:41.595494  108888 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0516 00:39:41.595547  108888 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0516 00:39:41.595775  108888 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0516 00:39:41.595821  108888 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0516 00:39:41.595909  108888 store.go:1320] Monitoring podsecuritypolicies.policy count at <storage-prefix>//podsecuritypolicy
I0516 00:39:41.596046  108888 watch_cache.go:405] Replace watchCache (rev: 24003) 
I0516 00:39:41.596071  108888 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"513c026f-215f-45e9-8b35-cf30867c5717", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0516 00:39:41.596149  108888 client.go:354] parsed scheme: ""
I0516 00:39:41.596165  108888 client.go:354] scheme "" not registered, fallback to default scheme
I0516 00:39:41.596198  108888 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0516 00:39:41.596235  108888 reflector.go:160] Listing and watching *policy.PodSecurityPolicy from storage/cacher.go:/podsecuritypolicy
I0516 00:39:41.596398  108888 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0516 00:39:41.596767  108888 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0516 00:39:41.596841  108888 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0516 00:39:41.596953  108888 store.go:1320] Monitoring replicasets.apps count at <storage-prefix>//replicasets
I0516 00:39:41.597072  108888 reflector.go:160] Listing and watching *apps.ReplicaSet from storage/cacher.go:/replicasets
I0516 00:39:41.597100  108888 storage_factory.go:285] storing networkpolicies.networking.k8s.io in networking.k8s.io/v1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"513c026f-215f-45e9-8b35-cf30867c5717", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0516 00:39:41.597160  108888 client.go:354] parsed scheme: ""
I0516 00:39:41.597170  108888 client.go:354] scheme "" not registered, fallback to default scheme
I0516 00:39:41.597199  108888 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0516 00:39:41.597292  108888 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0516 00:39:41.598175  108888 watch_cache.go:405] Replace watchCache (rev: 24003) 
I0516 00:39:41.598192  108888 watch_cache.go:405] Replace watchCache (rev: 24003) 
I0516 00:39:41.598885  108888 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0516 00:39:41.599001  108888 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0516 00:39:41.599062  108888 store.go:1320] Monitoring networkpolicies.networking.k8s.io count at <storage-prefix>//networkpolicies
I0516 00:39:41.599090  108888 master.go:425] Enabling API group "extensions".
I0516 00:39:41.599122  108888 reflector.go:160] Listing and watching *networking.NetworkPolicy from storage/cacher.go:/networkpolicies
I0516 00:39:41.599245  108888 storage_factory.go:285] storing networkpolicies.networking.k8s.io in networking.k8s.io/v1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"513c026f-215f-45e9-8b35-cf30867c5717", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0516 00:39:41.599332  108888 client.go:354] parsed scheme: ""
I0516 00:39:41.599353  108888 client.go:354] scheme "" not registered, fallback to default scheme
I0516 00:39:41.599399  108888 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0516 00:39:41.599472  108888 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0516 00:39:41.599738  108888 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0516 00:39:41.599865  108888 store.go:1320] Monitoring networkpolicies.networking.k8s.io count at <storage-prefix>//networkpolicies
I0516 00:39:41.600036  108888 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"513c026f-215f-45e9-8b35-cf30867c5717", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0516 00:39:41.600140  108888 client.go:354] parsed scheme: ""
I0516 00:39:41.600157  108888 client.go:354] scheme "" not registered, fallback to default scheme
I0516 00:39:41.600197  108888 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0516 00:39:41.600260  108888 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0516 00:39:41.600308  108888 reflector.go:160] Listing and watching *networking.NetworkPolicy from storage/cacher.go:/networkpolicies
I0516 00:39:41.600431  108888 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0516 00:39:41.600697  108888 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0516 00:39:41.600780  108888 store.go:1320] Monitoring ingresses.networking.k8s.io count at <storage-prefix>//ingress
I0516 00:39:41.600801  108888 master.go:425] Enabling API group "networking.k8s.io".
I0516 00:39:41.600828  108888 storage_factory.go:285] storing runtimeclasses.node.k8s.io in node.k8s.io/v1beta1, reading as node.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"513c026f-215f-45e9-8b35-cf30867c5717", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0516 00:39:41.600873  108888 client.go:354] parsed scheme: ""
I0516 00:39:41.600885  108888 client.go:354] scheme "" not registered, fallback to default scheme
I0516 00:39:41.600908  108888 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0516 00:39:41.600965  108888 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0516 00:39:41.600990  108888 watch_cache.go:405] Replace watchCache (rev: 24003) 
I0516 00:39:41.600993  108888 reflector.go:160] Listing and watching *networking.Ingress from storage/cacher.go:/ingress
I0516 00:39:41.601271  108888 watch_cache.go:405] Replace watchCache (rev: 24003) 
I0516 00:39:41.601114  108888 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0516 00:39:41.601683  108888 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0516 00:39:41.601782  108888 store.go:1320] Monitoring runtimeclasses.node.k8s.io count at <storage-prefix>//runtimeclasses
I0516 00:39:41.601801  108888 master.go:425] Enabling API group "node.k8s.io".
I0516 00:39:41.601934  108888 watch_cache.go:405] Replace watchCache (rev: 24003) 
I0516 00:39:41.601981  108888 storage_factory.go:285] storing poddisruptionbudgets.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"513c026f-215f-45e9-8b35-cf30867c5717", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0516 00:39:41.602064  108888 client.go:354] parsed scheme: ""
I0516 00:39:41.602080  108888 client.go:354] scheme "" not registered, fallback to default scheme
I0516 00:39:41.602108  108888 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0516 00:39:41.602154  108888 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0516 00:39:41.602175  108888 reflector.go:160] Listing and watching *node.RuntimeClass from storage/cacher.go:/runtimeclasses
I0516 00:39:41.602297  108888 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0516 00:39:41.602792  108888 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0516 00:39:41.602929  108888 store.go:1320] Monitoring poddisruptionbudgets.policy count at <storage-prefix>//poddisruptionbudgets
I0516 00:39:41.602975  108888 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0516 00:39:41.603017  108888 reflector.go:160] Listing and watching *policy.PodDisruptionBudget from storage/cacher.go:/poddisruptionbudgets
I0516 00:39:41.603024  108888 watch_cache.go:405] Replace watchCache (rev: 24003) 
I0516 00:39:41.603070  108888 storage_factory.go:285] storing podsecuritypolicies.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"513c026f-215f-45e9-8b35-cf30867c5717", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0516 00:39:41.603137  108888 client.go:354] parsed scheme: ""
I0516 00:39:41.603151  108888 client.go:354] scheme "" not registered, fallback to default scheme
I0516 00:39:41.603201  108888 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0516 00:39:41.603246  108888 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0516 00:39:41.603467  108888 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0516 00:39:41.603576  108888 store.go:1320] Monitoring podsecuritypolicies.policy count at <storage-prefix>//podsecuritypolicy
I0516 00:39:41.603588  108888 master.go:425] Enabling API group "policy".
I0516 00:39:41.603610  108888 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"513c026f-215f-45e9-8b35-cf30867c5717", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0516 00:39:41.604298  108888 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0516 00:39:41.604339  108888 client.go:354] parsed scheme: ""
I0516 00:39:41.604401  108888 client.go:354] scheme "" not registered, fallback to default scheme
I0516 00:39:41.604414  108888 watch_cache.go:405] Replace watchCache (rev: 24003) 
I0516 00:39:41.604466  108888 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0516 00:39:41.604340  108888 reflector.go:160] Listing and watching *policy.PodSecurityPolicy from storage/cacher.go:/podsecuritypolicy
I0516 00:39:41.604639  108888 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0516 00:39:41.604880  108888 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0516 00:39:41.604973  108888 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0516 00:39:41.604998  108888 store.go:1320] Monitoring roles.rbac.authorization.k8s.io count at <storage-prefix>//roles
I0516 00:39:41.605096  108888 reflector.go:160] Listing and watching *rbac.Role from storage/cacher.go:/roles
I0516 00:39:41.605142  108888 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"513c026f-215f-45e9-8b35-cf30867c5717", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0516 00:39:41.605212  108888 client.go:354] parsed scheme: ""
I0516 00:39:41.605228  108888 client.go:354] scheme "" not registered, fallback to default scheme
I0516 00:39:41.605254  108888 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0516 00:39:41.605311  108888 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0516 00:39:41.605446  108888 watch_cache.go:405] Replace watchCache (rev: 24003) 
I0516 00:39:41.605660  108888 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0516 00:39:41.605687  108888 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0516 00:39:41.605777  108888 store.go:1320] Monitoring rolebindings.rbac.authorization.k8s.io count at <storage-prefix>//rolebindings
I0516 00:39:41.605807  108888 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"513c026f-215f-45e9-8b35-cf30867c5717", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0516 00:39:41.605831  108888 reflector.go:160] Listing and watching *rbac.RoleBinding from storage/cacher.go:/rolebindings
I0516 00:39:41.605873  108888 client.go:354] parsed scheme: ""
I0516 00:39:41.605886  108888 client.go:354] scheme "" not registered, fallback to default scheme
I0516 00:39:41.605928  108888 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0516 00:39:41.605985  108888 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0516 00:39:41.606236  108888 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0516 00:39:41.606295  108888 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0516 00:39:41.606441  108888 store.go:1320] Monitoring clusterroles.rbac.authorization.k8s.io count at <storage-prefix>//clusterroles
I0516 00:39:41.606469  108888 reflector.go:160] Listing and watching *rbac.ClusterRole from storage/cacher.go:/clusterroles
I0516 00:39:41.606584  108888 watch_cache.go:405] Replace watchCache (rev: 24003) 
I0516 00:39:41.606641  108888 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"513c026f-215f-45e9-8b35-cf30867c5717", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0516 00:39:41.606685  108888 watch_cache.go:405] Replace watchCache (rev: 24003) 
I0516 00:39:41.606712  108888 client.go:354] parsed scheme: ""
I0516 00:39:41.606728  108888 client.go:354] scheme "" not registered, fallback to default scheme
I0516 00:39:41.606755  108888 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0516 00:39:41.606813  108888 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0516 00:39:41.607051  108888 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0516 00:39:41.607191  108888 store.go:1320] Monitoring clusterrolebindings.rbac.authorization.k8s.io count at <storage-prefix>//clusterrolebindings
I0516 00:39:41.607234  108888 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"513c026f-215f-45e9-8b35-cf30867c5717", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0516 00:39:41.607293  108888 client.go:354] parsed scheme: ""
I0516 00:39:41.607296  108888 reflector.go:160] Listing and watching *rbac.ClusterRoleBinding from storage/cacher.go:/clusterrolebindings
I0516 00:39:41.607192  108888 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0516 00:39:41.607303  108888 client.go:354] scheme "" not registered, fallback to default scheme
I0516 00:39:41.607348  108888 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0516 00:39:41.607398  108888 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0516 00:39:41.607626  108888 watch_cache.go:405] Replace watchCache (rev: 24003) 
I0516 00:39:41.607731  108888 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0516 00:39:41.607784  108888 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0516 00:39:41.607848  108888 store.go:1320] Monitoring roles.rbac.authorization.k8s.io count at <storage-prefix>//roles
I0516 00:39:41.607994  108888 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"513c026f-215f-45e9-8b35-cf30867c5717", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0516 00:39:41.608017  108888 reflector.go:160] Listing and watching *rbac.Role from storage/cacher.go:/roles
I0516 00:39:41.608084  108888 client.go:354] parsed scheme: ""
I0516 00:39:41.608101  108888 client.go:354] scheme "" not registered, fallback to default scheme
I0516 00:39:41.608128  108888 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0516 00:39:41.608189  108888 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0516 00:39:41.608314  108888 watch_cache.go:405] Replace watchCache (rev: 24003) 
I0516 00:39:41.608429  108888 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0516 00:39:41.608569  108888 store.go:1320] Monitoring rolebindings.rbac.authorization.k8s.io count at <storage-prefix>//rolebindings
I0516 00:39:41.608643  108888 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0516 00:39:41.608636  108888 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"513c026f-215f-45e9-8b35-cf30867c5717", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0516 00:39:41.608742  108888 client.go:354] parsed scheme: ""
I0516 00:39:41.608746  108888 reflector.go:160] Listing and watching *rbac.RoleBinding from storage/cacher.go:/rolebindings
I0516 00:39:41.608758  108888 client.go:354] scheme "" not registered, fallback to default scheme
I0516 00:39:41.608849  108888 watch_cache.go:405] Replace watchCache (rev: 24003) 
I0516 00:39:41.608881  108888 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0516 00:39:41.608948  108888 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0516 00:39:41.609170  108888 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0516 00:39:41.609225  108888 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0516 00:39:41.609251  108888 store.go:1320] Monitoring clusterroles.rbac.authorization.k8s.io count at <storage-prefix>//clusterroles
I0516 00:39:41.609312  108888 reflector.go:160] Listing and watching *rbac.ClusterRole from storage/cacher.go:/clusterroles
I0516 00:39:41.609379  108888 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"513c026f-215f-45e9-8b35-cf30867c5717", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0516 00:39:41.609494  108888 client.go:354] parsed scheme: ""
I0516 00:39:41.609550  108888 client.go:354] scheme "" not registered, fallback to default scheme
I0516 00:39:41.609605  108888 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0516 00:39:41.610094  108888 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0516 00:39:41.610425  108888 watch_cache.go:405] Replace watchCache (rev: 24003) 
I0516 00:39:41.610442  108888 watch_cache.go:405] Replace watchCache (rev: 24003) 
I0516 00:39:41.610631  108888 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0516 00:39:41.610668  108888 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0516 00:39:41.610764  108888 store.go:1320] Monitoring clusterrolebindings.rbac.authorization.k8s.io count at <storage-prefix>//clusterrolebindings
I0516 00:39:41.610845  108888 master.go:425] Enabling API group "rbac.authorization.k8s.io".
I0516 00:39:41.611159  108888 reflector.go:160] Listing and watching *rbac.ClusterRoleBinding from storage/cacher.go:/clusterrolebindings
I0516 00:39:41.611775  108888 watch_cache.go:405] Replace watchCache (rev: 24004) 
I0516 00:39:41.612822  108888 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"513c026f-215f-45e9-8b35-cf30867c5717", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0516 00:39:41.612934  108888 client.go:354] parsed scheme: ""
I0516 00:39:41.612951  108888 client.go:354] scheme "" not registered, fallback to default scheme
I0516 00:39:41.612984  108888 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0516 00:39:41.613064  108888 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0516 00:39:41.613351  108888 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0516 00:39:41.613448  108888 store.go:1320] Monitoring priorityclasses.scheduling.k8s.io count at <storage-prefix>//priorityclasses
I0516 00:39:41.613613  108888 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"513c026f-215f-45e9-8b35-cf30867c5717", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0516 00:39:41.613676  108888 client.go:354] parsed scheme: ""
I0516 00:39:41.613694  108888 client.go:354] scheme "" not registered, fallback to default scheme
I0516 00:39:41.613725  108888 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0516 00:39:41.613739  108888 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0516 00:39:41.613743  108888 reflector.go:160] Listing and watching *scheduling.PriorityClass from storage/cacher.go:/priorityclasses
I0516 00:39:41.613828  108888 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0516 00:39:41.614151  108888 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0516 00:39:41.614248  108888 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0516 00:39:41.614291  108888 reflector.go:160] Listing and watching *scheduling.PriorityClass from storage/cacher.go:/priorityclasses
I0516 00:39:41.614273  108888 store.go:1320] Monitoring priorityclasses.scheduling.k8s.io count at <storage-prefix>//priorityclasses
I0516 00:39:41.614355  108888 master.go:425] Enabling API group "scheduling.k8s.io".
I0516 00:39:41.614509  108888 master.go:417] Skipping disabled API group "settings.k8s.io".
I0516 00:39:41.614627  108888 watch_cache.go:405] Replace watchCache (rev: 24004) 
I0516 00:39:41.614714  108888 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"513c026f-215f-45e9-8b35-cf30867c5717", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0516 00:39:41.614781  108888 watch_cache.go:405] Replace watchCache (rev: 24004) 
I0516 00:39:41.614804  108888 client.go:354] parsed scheme: ""
I0516 00:39:41.614821  108888 client.go:354] scheme "" not registered, fallback to default scheme
I0516 00:39:41.614857  108888 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0516 00:39:41.614905  108888 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0516 00:39:41.615128  108888 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0516 00:39:41.615211  108888 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0516 00:39:41.615233  108888 store.go:1320] Monitoring storageclasses.storage.k8s.io count at <storage-prefix>//storageclasses
I0516 00:39:41.615250  108888 reflector.go:160] Listing and watching *storage.StorageClass from storage/cacher.go:/storageclasses
I0516 00:39:41.615405  108888 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"513c026f-215f-45e9-8b35-cf30867c5717", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0516 00:39:41.615513  108888 client.go:354] parsed scheme: ""
I0516 00:39:41.615558  108888 client.go:354] scheme "" not registered, fallback to default scheme
I0516 00:39:41.615620  108888 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0516 00:39:41.615702  108888 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0516 00:39:41.615827  108888 watch_cache.go:405] Replace watchCache (rev: 24004) 
I0516 00:39:41.616090  108888 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0516 00:39:41.617998  108888 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0516 00:39:41.618143  108888 store.go:1320] Monitoring volumeattachments.storage.k8s.io count at <storage-prefix>//volumeattachments
I0516 00:39:41.618181  108888 storage_factory.go:285] storing csinodes.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"513c026f-215f-45e9-8b35-cf30867c5717", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0516 00:39:41.618206  108888 reflector.go:160] Listing and watching *storage.VolumeAttachment from storage/cacher.go:/volumeattachments
I0516 00:39:41.618413  108888 client.go:354] parsed scheme: ""
I0516 00:39:41.618436  108888 client.go:354] scheme "" not registered, fallback to default scheme
I0516 00:39:41.618524  108888 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0516 00:39:41.618618  108888 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0516 00:39:41.619041  108888 watch_cache.go:405] Replace watchCache (rev: 24004) 
I0516 00:39:41.619079  108888 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0516 00:39:41.619156  108888 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0516 00:39:41.619277  108888 store.go:1320] Monitoring csinodes.storage.k8s.io count at <storage-prefix>//csinodes
I0516 00:39:41.619297  108888 reflector.go:160] Listing and watching *storage.CSINode from storage/cacher.go:/csinodes
I0516 00:39:41.619309  108888 storage_factory.go:285] storing csidrivers.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"513c026f-215f-45e9-8b35-cf30867c5717", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0516 00:39:41.619385  108888 client.go:354] parsed scheme: ""
I0516 00:39:41.619401  108888 client.go:354] scheme "" not registered, fallback to default scheme
I0516 00:39:41.619437  108888 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0516 00:39:41.619483  108888 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0516 00:39:41.620291  108888 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0516 00:39:41.620433  108888 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0516 00:39:41.620367  108888 watch_cache.go:405] Replace watchCache (rev: 24004) 
I0516 00:39:41.620565  108888 store.go:1320] Monitoring csidrivers.storage.k8s.io count at <storage-prefix>//csidrivers
I0516 00:39:41.620667  108888 reflector.go:160] Listing and watching *storage.CSIDriver from storage/cacher.go:/csidrivers
I0516 00:39:41.620709  108888 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"513c026f-215f-45e9-8b35-cf30867c5717", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0516 00:39:41.620796  108888 client.go:354] parsed scheme: ""
I0516 00:39:41.620814  108888 client.go:354] scheme "" not registered, fallback to default scheme
I0516 00:39:41.620841  108888 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0516 00:39:41.620894  108888 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0516 00:39:41.621483  108888 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0516 00:39:41.621611  108888 store.go:1320] Monitoring storageclasses.storage.k8s.io count at <storage-prefix>//storageclasses
I0516 00:39:41.621726  108888 watch_cache.go:405] Replace watchCache (rev: 24004) 
I0516 00:39:41.621764  108888 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"513c026f-215f-45e9-8b35-cf30867c5717", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0516 00:39:41.621829  108888 client.go:354] parsed scheme: ""
I0516 00:39:41.621851  108888 client.go:354] scheme "" not registered, fallback to default scheme
I0516 00:39:41.621884  108888 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0516 00:39:41.621907  108888 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0516 00:39:41.621964  108888 reflector.go:160] Listing and watching *storage.StorageClass from storage/cacher.go:/storageclasses
I0516 00:39:41.622077  108888 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0516 00:39:41.622282  108888 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0516 00:39:41.622362  108888 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0516 00:39:41.622379  108888 store.go:1320] Monitoring volumeattachments.storage.k8s.io count at <storage-prefix>//volumeattachments
I0516 00:39:41.622419  108888 master.go:425] Enabling API group "storage.k8s.io".
I0516 00:39:41.622397  108888 reflector.go:160] Listing and watching *storage.VolumeAttachment from storage/cacher.go:/volumeattachments
I0516 00:39:41.622592  108888 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"513c026f-215f-45e9-8b35-cf30867c5717", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0516 00:39:41.622782  108888 client.go:354] parsed scheme: ""
I0516 00:39:41.622797  108888 client.go:354] scheme "" not registered, fallback to default scheme
I0516 00:39:41.622826  108888 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0516 00:39:41.622667  108888 watch_cache.go:405] Replace watchCache (rev: 24004) 
I0516 00:39:41.622896  108888 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0516 00:39:41.623160  108888 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0516 00:39:41.623250  108888 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0516 00:39:41.623292  108888 store.go:1320] Monitoring deployments.apps count at <storage-prefix>//deployments
I0516 00:39:41.623406  108888 reflector.go:160] Listing and watching *apps.Deployment from storage/cacher.go:/deployments
I0516 00:39:41.623419  108888 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"513c026f-215f-45e9-8b35-cf30867c5717", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0516 00:39:41.623483  108888 client.go:354] parsed scheme: ""
I0516 00:39:41.623499  108888 client.go:354] scheme "" not registered, fallback to default scheme
I0516 00:39:41.623526  108888 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0516 00:39:41.623638  108888 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0516 00:39:41.623646  108888 watch_cache.go:405] Replace watchCache (rev: 24004) 
I0516 00:39:41.623906  108888 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0516 00:39:41.624055  108888 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0516 00:39:41.624084  108888 store.go:1320] Monitoring statefulsets.apps count at <storage-prefix>//statefulsets
I0516 00:39:41.624173  108888 watch_cache.go:405] Replace watchCache (rev: 24004) 
I0516 00:39:41.624211  108888 storage_factory.go:285] storing controllerrevisions.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"513c026f-215f-45e9-8b35-cf30867c5717", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0516 00:39:41.624284  108888 client.go:354] parsed scheme: ""
I0516 00:39:41.624302  108888 client.go:354] scheme "" not registered, fallback to default scheme
I0516 00:39:41.624329  108888 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0516 00:39:41.624373  108888 reflector.go:160] Listing and watching *apps.StatefulSet from storage/cacher.go:/statefulsets
I0516 00:39:41.624522  108888 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0516 00:39:41.624804  108888 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0516 00:39:41.624865  108888 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0516 00:39:41.624957  108888 store.go:1320] Monitoring controllerrevisions.apps count at <storage-prefix>//controllerrevisions
I0516 00:39:41.625047  108888 reflector.go:160] Listing and watching *apps.ControllerRevision from storage/cacher.go:/controllerrevisions
I0516 00:39:41.625109  108888 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"513c026f-215f-45e9-8b35-cf30867c5717", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0516 00:39:41.625163  108888 watch_cache.go:405] Replace watchCache (rev: 24004) 
I0516 00:39:41.625199  108888 client.go:354] parsed scheme: ""
I0516 00:39:41.625214  108888 client.go:354] scheme "" not registered, fallback to default scheme
I0516 00:39:41.625242  108888 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0516 00:39:41.625290  108888 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0516 00:39:41.625517  108888 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0516 00:39:41.625579  108888 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0516 00:39:41.625689  108888 store.go:1320] Monitoring deployments.apps count at <storage-prefix>//deployments
I0516 00:39:41.625712  108888 reflector.go:160] Listing and watching *apps.Deployment from storage/cacher.go:/deployments
I0516 00:39:41.625824  108888 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"513c026f-215f-45e9-8b35-cf30867c5717", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0516 00:39:41.625943  108888 client.go:354] parsed scheme: ""
I0516 00:39:41.625974  108888 client.go:354] scheme "" not registered, fallback to default scheme
I0516 00:39:41.626019  108888 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0516 00:39:41.626078  108888 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0516 00:39:41.626433  108888 watch_cache.go:405] Replace watchCache (rev: 24004) 
I0516 00:39:41.626475  108888 watch_cache.go:405] Replace watchCache (rev: 24004) 
I0516 00:39:41.626727  108888 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0516 00:39:41.626762  108888 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0516 00:39:41.626872  108888 store.go:1320] Monitoring statefulsets.apps count at <storage-prefix>//statefulsets
I0516 00:39:41.626962  108888 reflector.go:160] Listing and watching *apps.StatefulSet from storage/cacher.go:/statefulsets
I0516 00:39:41.627033  108888 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"513c026f-215f-45e9-8b35-cf30867c5717", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0516 00:39:41.627098  108888 client.go:354] parsed scheme: ""
I0516 00:39:41.627107  108888 client.go:354] scheme "" not registered, fallback to default scheme
I0516 00:39:41.627135  108888 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0516 00:39:41.627199  108888 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0516 00:39:41.627665  108888 watch_cache.go:405] Replace watchCache (rev: 24004) 
I0516 00:39:41.627666  108888 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0516 00:39:41.627709  108888 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0516 00:39:41.627848  108888 store.go:1320] Monitoring daemonsets.apps count at <storage-prefix>//daemonsets
I0516 00:39:41.627934  108888 reflector.go:160] Listing and watching *apps.DaemonSet from storage/cacher.go:/daemonsets
I0516 00:39:41.628014  108888 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"513c026f-215f-45e9-8b35-cf30867c5717", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0516 00:39:41.628089  108888 client.go:354] parsed scheme: ""
I0516 00:39:41.628106  108888 client.go:354] scheme "" not registered, fallback to default scheme
I0516 00:39:41.628153  108888 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0516 00:39:41.628226  108888 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0516 00:39:41.628579  108888 watch_cache.go:405] Replace watchCache (rev: 24004) 
I0516 00:39:41.628617  108888 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0516 00:39:41.628696  108888 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0516 00:39:41.628755  108888 store.go:1320] Monitoring replicasets.apps count at <storage-prefix>//replicasets
I0516 00:39:41.628882  108888 reflector.go:160] Listing and watching *apps.ReplicaSet from storage/cacher.go:/replicasets
I0516 00:39:41.628987  108888 storage_factory.go:285] storing controllerrevisions.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"513c026f-215f-45e9-8b35-cf30867c5717", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0516 00:39:41.629059  108888 client.go:354] parsed scheme: ""
I0516 00:39:41.629075  108888 client.go:354] scheme "" not registered, fallback to default scheme
I0516 00:39:41.629104  108888 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0516 00:39:41.629149  108888 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0516 00:39:41.629451  108888 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0516 00:39:41.629562  108888 watch_cache.go:405] Replace watchCache (rev: 24004) 
I0516 00:39:41.629582  108888 reflector.go:160] Listing and watching *apps.ControllerRevision from storage/cacher.go:/controllerrevisions
I0516 00:39:41.629563  108888 store.go:1320] Monitoring controllerrevisions.apps count at <storage-prefix>//controllerrevisions
I0516 00:39:41.629739  108888 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0516 00:39:41.629756  108888 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"513c026f-215f-45e9-8b35-cf30867c5717", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0516 00:39:41.629825  108888 client.go:354] parsed scheme: ""
I0516 00:39:41.629841  108888 client.go:354] scheme "" not registered, fallback to default scheme
I0516 00:39:41.629870  108888 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0516 00:39:41.629978  108888 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0516 00:39:41.630232  108888 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0516 00:39:41.630284  108888 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0516 00:39:41.630358  108888 store.go:1320] Monitoring deployments.apps count at <storage-prefix>//deployments
I0516 00:39:41.630420  108888 reflector.go:160] Listing and watching *apps.Deployment from storage/cacher.go:/deployments
I0516 00:39:41.630480  108888 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"513c026f-215f-45e9-8b35-cf30867c5717", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0516 00:39:41.630550  108888 client.go:354] parsed scheme: ""
I0516 00:39:41.630558  108888 client.go:354] scheme "" not registered, fallback to default scheme
I0516 00:39:41.630581  108888 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0516 00:39:41.630624  108888 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0516 00:39:41.630772  108888 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0516 00:39:41.630861  108888 store.go:1320] Monitoring statefulsets.apps count at <storage-prefix>//statefulsets
I0516 00:39:41.630888  108888 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0516 00:39:41.630983  108888 reflector.go:160] Listing and watching *apps.StatefulSet from storage/cacher.go:/statefulsets
I0516 00:39:41.631123  108888 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"513c026f-215f-45e9-8b35-cf30867c5717", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0516 00:39:41.631194  108888 client.go:354] parsed scheme: ""
I0516 00:39:41.631205  108888 client.go:354] scheme "" not registered, fallback to default scheme
I0516 00:39:41.631303  108888 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0516 00:39:41.631367  108888 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0516 00:39:41.631647  108888 watch_cache.go:405] Replace watchCache (rev: 24004) 
I0516 00:39:41.631712  108888 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0516 00:39:41.631831  108888 store.go:1320] Monitoring daemonsets.apps count at <storage-prefix>//daemonsets
I0516 00:39:41.631876  108888 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0516 00:39:41.631946  108888 reflector.go:160] Listing and watching *apps.DaemonSet from storage/cacher.go:/daemonsets
I0516 00:39:41.631951  108888 watch_cache.go:405] Replace watchCache (rev: 24004) 
I0516 00:39:41.631992  108888 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"513c026f-215f-45e9-8b35-cf30867c5717", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0516 00:39:41.632046  108888 client.go:354] parsed scheme: ""
I0516 00:39:41.632053  108888 client.go:354] scheme "" not registered, fallback to default scheme
I0516 00:39:41.632074  108888 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0516 00:39:41.632162  108888 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0516 00:39:41.632254  108888 watch_cache.go:405] Replace watchCache (rev: 24004) 
I0516 00:39:41.632392  108888 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0516 00:39:41.632508  108888 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0516 00:39:41.632625  108888 store.go:1320] Monitoring replicasets.apps count at <storage-prefix>//replicasets
I0516 00:39:41.632723  108888 reflector.go:160] Listing and watching *apps.ReplicaSet from storage/cacher.go:/replicasets
I0516 00:39:41.632792  108888 storage_factory.go:285] storing controllerrevisions.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"513c026f-215f-45e9-8b35-cf30867c5717", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0516 00:39:41.632894  108888 client.go:354] parsed scheme: ""
I0516 00:39:41.632946  108888 client.go:354] scheme "" not registered, fallback to default scheme
I0516 00:39:41.633010  108888 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0516 00:39:41.633083  108888 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0516 00:39:41.633467  108888 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0516 00:39:41.633543  108888 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0516 00:39:41.633610  108888 store.go:1320] Monitoring controllerrevisions.apps count at <storage-prefix>//controllerrevisions
I0516 00:39:41.633632  108888 master.go:425] Enabling API group "apps".
I0516 00:39:41.633509  108888 watch_cache.go:405] Replace watchCache (rev: 24004) 
I0516 00:39:41.633658  108888 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"513c026f-215f-45e9-8b35-cf30867c5717", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0516 00:39:41.633706  108888 reflector.go:160] Listing and watching *apps.ControllerRevision from storage/cacher.go:/controllerrevisions
I0516 00:39:41.633737  108888 watch_cache.go:405] Replace watchCache (rev: 24004) 
I0516 00:39:41.633712  108888 client.go:354] parsed scheme: ""
I0516 00:39:41.633829  108888 client.go:354] scheme "" not registered, fallback to default scheme
I0516 00:39:41.633929  108888 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0516 00:39:41.633979  108888 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0516 00:39:41.634254  108888 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0516 00:39:41.634370  108888 store.go:1320] Monitoring validatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//validatingwebhookconfigurations
I0516 00:39:41.634405  108888 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"513c026f-215f-45e9-8b35-cf30867c5717", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0516 00:39:41.634466  108888 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0516 00:39:41.634468  108888 client.go:354] parsed scheme: ""
I0516 00:39:41.634492  108888 client.go:354] scheme "" not registered, fallback to default scheme
I0516 00:39:41.634493  108888 reflector.go:160] Listing and watching *admissionregistration.ValidatingWebhookConfiguration from storage/cacher.go:/validatingwebhookconfigurations
I0516 00:39:41.634526  108888 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0516 00:39:41.634649  108888 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0516 00:39:41.634847  108888 watch_cache.go:405] Replace watchCache (rev: 24004) 
I0516 00:39:41.634864  108888 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0516 00:39:41.635028  108888 store.go:1320] Monitoring mutatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//mutatingwebhookconfigurations
I0516 00:39:41.635080  108888 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0516 00:39:41.635099  108888 master.go:425] Enabling API group "admissionregistration.k8s.io".
I0516 00:39:41.635180  108888 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"513c026f-215f-45e9-8b35-cf30867c5717", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0516 00:39:41.635119  108888 reflector.go:160] Listing and watching *admissionregistration.MutatingWebhookConfiguration from storage/cacher.go:/mutatingwebhookconfigurations
I0516 00:39:41.635489  108888 client.go:354] parsed scheme: ""
I0516 00:39:41.635511  108888 client.go:354] scheme "" not registered, fallback to default scheme
I0516 00:39:41.635553  108888 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0516 00:39:41.635615  108888 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0516 00:39:41.636131  108888 watch_cache.go:405] Replace watchCache (rev: 24004) 
I0516 00:39:41.636176  108888 watch_cache.go:405] Replace watchCache (rev: 24004) 
I0516 00:39:41.636157  108888 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0516 00:39:41.636341  108888 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0516 00:39:41.636433  108888 store.go:1320] Monitoring events count at <storage-prefix>//events
I0516 00:39:41.636452  108888 master.go:425] Enabling API group "events.k8s.io".
I0516 00:39:41.636626  108888 reflector.go:160] Listing and watching *core.Event from storage/cacher.go:/events
I0516 00:39:41.637475  108888 watch_cache.go:405] Replace watchCache (rev: 24004) 
W0516 00:39:41.640673  108888 genericapiserver.go:347] Skipping API batch/v2alpha1 because it has no resources.
W0516 00:39:41.647266  108888 genericapiserver.go:347] Skipping API node.k8s.io/v1alpha1 because it has no resources.
W0516 00:39:41.651520  108888 genericapiserver.go:347] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
W0516 00:39:41.652347  108888 genericapiserver.go:347] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
W0516 00:39:41.654690  108888 genericapiserver.go:347] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
I0516 00:39:41.666020  108888 healthz.go:170] healthz check etcd failed: etcd client connection not yet established
I0516 00:39:41.666050  108888 healthz.go:170] healthz check poststarthook/bootstrap-controller failed: not finished
I0516 00:39:41.666060  108888 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0516 00:39:41.666071  108888 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0516 00:39:41.666079  108888 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0516 00:39:41.666087  108888 healthz.go:184] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[-]poststarthook/bootstrap-controller failed: reason withheld
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0516 00:39:41.666263  108888 wrap.go:47] GET /healthz: (359.295µs) 500
goroutine 36439 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0176afdc0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0176afdc0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc013677840, 0x1f4)
net/http.Error(0x7f264e115d00, 0xc017bc64e8, 0xc00005e1a0, 0x18a, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f264e115d00, 0xc017bc64e8, 0xc013589200)
net/http.HandlerFunc.ServeHTTP(0xc0110b4d80, 0x7f264e115d00, 0xc017bc64e8, 0xc013589200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc010545040, 0x7f264e115d00, 0xc017bc64e8, 0xc013589200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc017fe3110, 0x7f264e115d00, 0xc017bc64e8, 0xc013589200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x438a649, 0xe, 0xc017e170e0, 0xc017fe3110, 0x7f264e115d00, 0xc017bc64e8, 0xc013589200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f264e115d00, 0xc017bc64e8, 0xc013589200)
net/http.HandlerFunc.ServeHTTP(0xc014a4cac0, 0x7f264e115d00, 0xc017bc64e8, 0xc013589200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f264e115d00, 0xc017bc64e8, 0xc013589200)
net/http.HandlerFunc.ServeHTTP(0xc017ff0810, 0x7f264e115d00, 0xc017bc64e8, 0xc013589200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f264e115d00, 0xc017bc64e8, 0xc013589200)
net/http.HandlerFunc.ServeHTTP(0xc014a4cb00, 0x7f264e115d00, 0xc017bc64e8, 0xc013589200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f264e115d00, 0xc017bc64e8, 0xc013589100)
net/http.HandlerFunc.ServeHTTP(0xc017eb9860, 0x7f264e115d00, 0xc017bc64e8, 0xc013589100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc0099a4840, 0xc017fecc60, 0x73aefc0, 0xc017bc64e8, 0xc013589100)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[-]poststarthook/bootstrap-controller failed: reason withheld\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:52404]
I0516 00:39:41.667406  108888 wrap.go:47] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.326788ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52406]
I0516 00:39:41.669877  108888 wrap.go:47] GET /api/v1/services: (1.089654ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52406]
I0516 00:39:41.673593  108888 wrap.go:47] GET /api/v1/services: (1.03533ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52406]
I0516 00:39:41.675860  108888 healthz.go:170] healthz check etcd failed: etcd client connection not yet established
I0516 00:39:41.675943  108888 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0516 00:39:41.675972  108888 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0516 00:39:41.676011  108888 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0516 00:39:41.676047  108888 healthz.go:184] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0516 00:39:41.676221  108888 wrap.go:47] GET /healthz: (513.025µs) 500
goroutine 36441 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0176afea0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0176afea0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc013677a80, 0x1f4)
net/http.Error(0x7f264e115d00, 0xc017bc6500, 0xc0038f0900, 0x175, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f264e115d00, 0xc017bc6500, 0xc013589700)
net/http.HandlerFunc.ServeHTTP(0xc0110b4d80, 0x7f264e115d00, 0xc017bc6500, 0xc013589700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc010545040, 0x7f264e115d00, 0xc017bc6500, 0xc013589700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc017fe3110, 0x7f264e115d00, 0xc017bc6500, 0xc013589700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x438a649, 0xe, 0xc017e170e0, 0xc017fe3110, 0x7f264e115d00, 0xc017bc6500, 0xc013589700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f264e115d00, 0xc017bc6500, 0xc013589700)
net/http.HandlerFunc.ServeHTTP(0xc014a4cac0, 0x7f264e115d00, 0xc017bc6500, 0xc013589700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f264e115d00, 0xc017bc6500, 0xc013589700)
net/http.HandlerFunc.ServeHTTP(0xc017ff0810, 0x7f264e115d00, 0xc017bc6500, 0xc013589700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f264e115d00, 0xc017bc6500, 0xc013589700)
net/http.HandlerFunc.ServeHTTP(0xc014a4cb00, 0x7f264e115d00, 0xc017bc6500, 0xc013589700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f264e115d00, 0xc017bc6500, 0xc013589600)
net/http.HandlerFunc.ServeHTTP(0xc017eb9860, 0x7f264e115d00, 0xc017bc6500, 0xc013589600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc0099a4a80, 0xc017fecc60, 0x73aefc0, 0xc017bc6500, 0xc013589600)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52404]
I0516 00:39:41.677018  108888 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.267713ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52406]
I0516 00:39:41.677721  108888 wrap.go:47] GET /api/v1/services: (942.537µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52404]
I0516 00:39:41.677992  108888 wrap.go:47] GET /api/v1/services: (1.350042ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52408]
I0516 00:39:41.679341  108888 wrap.go:47] POST /api/v1/namespaces: (1.440845ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52406]
I0516 00:39:41.680682  108888 wrap.go:47] GET /api/v1/namespaces/kube-public: (941.687µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52408]
I0516 00:39:41.682582  108888 wrap.go:47] POST /api/v1/namespaces: (1.124299ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52408]
I0516 00:39:41.683993  108888 wrap.go:47] GET /api/v1/namespaces/kube-node-lease: (954.827µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52408]
I0516 00:39:41.685782  108888 wrap.go:47] POST /api/v1/namespaces: (1.397262ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52408]
I0516 00:39:41.767048  108888 healthz.go:170] healthz check etcd failed: etcd client connection not yet established
I0516 00:39:41.767101  108888 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0516 00:39:41.767113  108888 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0516 00:39:41.767139  108888 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0516 00:39:41.767155  108888 healthz.go:184] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0516 00:39:41.767359  108888 wrap.go:47] GET /healthz: (461.029µs) 500
goroutine 36445 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00366e620, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00366e620, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc010d21120, 0x1f4)
net/http.Error(0x7f264e115d00, 0xc017bc6560, 0xc0038f1080, 0x175, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f264e115d00, 0xc017bc6560, 0xc0090d0300)
net/http.HandlerFunc.ServeHTTP(0xc0110b4d80, 0x7f264e115d00, 0xc017bc6560, 0xc0090d0300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc010545040, 0x7f264e115d00, 0xc017bc6560, 0xc0090d0300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc017fe3110, 0x7f264e115d00, 0xc017bc6560, 0xc0090d0300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x438a649, 0xe, 0xc017e170e0, 0xc017fe3110, 0x7f264e115d00, 0xc017bc6560, 0xc0090d0300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f264e115d00, 0xc017bc6560, 0xc0090d0300)
net/http.HandlerFunc.ServeHTTP(0xc014a4cac0, 0x7f264e115d00, 0xc017bc6560, 0xc0090d0300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f264e115d00, 0xc017bc6560, 0xc0090d0300)
net/http.HandlerFunc.ServeHTTP(0xc017ff0810, 0x7f264e115d00, 0xc017bc6560, 0xc0090d0300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f264e115d00, 0xc017bc6560, 0xc0090d0300)
net/http.HandlerFunc.ServeHTTP(0xc014a4cb00, 0x7f264e115d00, 0xc017bc6560, 0xc0090d0300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f264e115d00, 0xc017bc6560, 0xc013589f00)
net/http.HandlerFunc.ServeHTTP(0xc017eb9860, 0x7f264e115d00, 0xc017bc6560, 0xc013589f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc0099a5140, 0xc017fecc60, 0x73aefc0, 0xc017bc6560, 0xc013589f00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:52408]
I0516 00:39:41.777167  108888 healthz.go:170] healthz check etcd failed: etcd client connection not yet established
I0516 00:39:41.777199  108888 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0516 00:39:41.777210  108888 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0516 00:39:41.777216  108888 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0516 00:39:41.777221  108888 healthz.go:184] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0516 00:39:41.777472  108888 wrap.go:47] GET /healthz: (437.797µs) 500
goroutine 36515 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0038320e0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0038320e0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc010b8ef80, 0x1f4)
net/http.Error(0x7f264e115d00, 0xc00b96d4e8, 0xc000654600, 0x175, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f264e115d00, 0xc00b96d4e8, 0xc0083e8200)
net/http.HandlerFunc.ServeHTTP(0xc0110b4d80, 0x7f264e115d00, 0xc00b96d4e8, 0xc0083e8200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc010545040, 0x7f264e115d00, 0xc00b96d4e8, 0xc0083e8200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc017fe3110, 0x7f264e115d00, 0xc00b96d4e8, 0xc0083e8200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x438a649, 0xe, 0xc017e170e0, 0xc017fe3110, 0x7f264e115d00, 0xc00b96d4e8, 0xc0083e8200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f264e115d00, 0xc00b96d4e8, 0xc0083e8200)
net/http.HandlerFunc.ServeHTTP(0xc014a4cac0, 0x7f264e115d00, 0xc00b96d4e8, 0xc0083e8200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f264e115d00, 0xc00b96d4e8, 0xc0083e8200)
net/http.HandlerFunc.ServeHTTP(0xc017ff0810, 0x7f264e115d00, 0xc00b96d4e8, 0xc0083e8200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f264e115d00, 0xc00b96d4e8, 0xc0083e8200)
net/http.HandlerFunc.ServeHTTP(0xc014a4cb00, 0x7f264e115d00, 0xc00b96d4e8, 0xc0083e8200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f264e115d00, 0xc00b96d4e8, 0xc0083e8000)
net/http.HandlerFunc.ServeHTTP(0xc017eb9860, 0x7f264e115d00, 0xc00b96d4e8, 0xc0083e8000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc009636180, 0xc017fecc60, 0x73aefc0, 0xc00b96d4e8, 0xc0083e8000)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52408]
I0516 00:39:41.867009  108888 healthz.go:170] healthz check etcd failed: etcd client connection not yet established
I0516 00:39:41.867045  108888 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0516 00:39:41.867057  108888 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0516 00:39:41.867066  108888 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0516 00:39:41.867074  108888 healthz.go:184] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0516 00:39:41.867225  108888 wrap.go:47] GET /healthz: (373.593µs) 500
goroutine 36530 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc003662a80, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc003662a80, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc010b34540, 0x1f4)
net/http.Error(0x7f264e115d00, 0xc01620aa20, 0xc0008a0a80, 0x175, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f264e115d00, 0xc01620aa20, 0xc009455300)
net/http.HandlerFunc.ServeHTTP(0xc0110b4d80, 0x7f264e115d00, 0xc01620aa20, 0xc009455300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc010545040, 0x7f264e115d00, 0xc01620aa20, 0xc009455300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc017fe3110, 0x7f264e115d00, 0xc01620aa20, 0xc009455300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x438a649, 0xe, 0xc017e170e0, 0xc017fe3110, 0x7f264e115d00, 0xc01620aa20, 0xc009455300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f264e115d00, 0xc01620aa20, 0xc009455300)
net/http.HandlerFunc.ServeHTTP(0xc014a4cac0, 0x7f264e115d00, 0xc01620aa20, 0xc009455300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f264e115d00, 0xc01620aa20, 0xc009455300)
net/http.HandlerFunc.ServeHTTP(0xc017ff0810, 0x7f264e115d00, 0xc01620aa20, 0xc009455300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f264e115d00, 0xc01620aa20, 0xc009455300)
net/http.HandlerFunc.ServeHTTP(0xc014a4cb00, 0x7f264e115d00, 0xc01620aa20, 0xc009455300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f264e115d00, 0xc01620aa20, 0xc009455200)
net/http.HandlerFunc.ServeHTTP(0xc017eb9860, 0x7f264e115d00, 0xc01620aa20, 0xc009455200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc009856ea0, 0xc017fecc60, 0x73aefc0, 0xc01620aa20, 0xc009455200)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:52408]
I0516 00:39:41.877155  108888 healthz.go:170] healthz check etcd failed: etcd client connection not yet established
I0516 00:39:41.877202  108888 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0516 00:39:41.877214  108888 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0516 00:39:41.877222  108888 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0516 00:39:41.877228  108888 healthz.go:184] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0516 00:39:41.877382  108888 wrap.go:47] GET /healthz: (351.724µs) 500
goroutine 36473 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0135a93b0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0135a93b0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc012d45340, 0x1f4)
net/http.Error(0x7f264e115d00, 0xc015c24810, 0xc003058480, 0x175, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f264e115d00, 0xc015c24810, 0xc01354b700)
net/http.HandlerFunc.ServeHTTP(0xc0110b4d80, 0x7f264e115d00, 0xc015c24810, 0xc01354b700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc010545040, 0x7f264e115d00, 0xc015c24810, 0xc01354b700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc017fe3110, 0x7f264e115d00, 0xc015c24810, 0xc01354b700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x438a649, 0xe, 0xc017e170e0, 0xc017fe3110, 0x7f264e115d00, 0xc015c24810, 0xc01354b700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f264e115d00, 0xc015c24810, 0xc01354b700)
net/http.HandlerFunc.ServeHTTP(0xc014a4cac0, 0x7f264e115d00, 0xc015c24810, 0xc01354b700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f264e115d00, 0xc015c24810, 0xc01354b700)
net/http.HandlerFunc.ServeHTTP(0xc017ff0810, 0x7f264e115d00, 0xc015c24810, 0xc01354b700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f264e115d00, 0xc015c24810, 0xc01354b700)
net/http.HandlerFunc.ServeHTTP(0xc014a4cb00, 0x7f264e115d00, 0xc015c24810, 0xc01354b700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f264e115d00, 0xc015c24810, 0xc01354b600)
net/http.HandlerFunc.ServeHTTP(0xc017eb9860, 0x7f264e115d00, 0xc015c24810, 0xc01354b600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00b083080, 0xc017fecc60, 0x73aefc0, 0xc015c24810, 0xc01354b600)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52408]
I0516 00:39:41.967004  108888 healthz.go:170] healthz check etcd failed: etcd client connection not yet established
I0516 00:39:41.967042  108888 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0516 00:39:41.967054  108888 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0516 00:39:41.967064  108888 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0516 00:39:41.967072  108888 healthz.go:184] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0516 00:39:41.967234  108888 wrap.go:47] GET /healthz: (378.297µs) 500
goroutine 36532 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc003662c40, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc003662c40, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc010b34920, 0x1f4)
net/http.Error(0x7f264e115d00, 0xc01620aa48, 0xc0008a1080, 0x175, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f264e115d00, 0xc01620aa48, 0xc009455d00)
net/http.HandlerFunc.ServeHTTP(0xc0110b4d80, 0x7f264e115d00, 0xc01620aa48, 0xc009455d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc010545040, 0x7f264e115d00, 0xc01620aa48, 0xc009455d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc017fe3110, 0x7f264e115d00, 0xc01620aa48, 0xc009455d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x438a649, 0xe, 0xc017e170e0, 0xc017fe3110, 0x7f264e115d00, 0xc01620aa48, 0xc009455d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f264e115d00, 0xc01620aa48, 0xc009455d00)
net/http.HandlerFunc.ServeHTTP(0xc014a4cac0, 0x7f264e115d00, 0xc01620aa48, 0xc009455d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f264e115d00, 0xc01620aa48, 0xc009455d00)
net/http.HandlerFunc.ServeHTTP(0xc017ff0810, 0x7f264e115d00, 0xc01620aa48, 0xc009455d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f264e115d00, 0xc01620aa48, 0xc009455d00)
net/http.HandlerFunc.ServeHTTP(0xc014a4cb00, 0x7f264e115d00, 0xc01620aa48, 0xc009455d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f264e115d00, 0xc01620aa48, 0xc009455c00)
net/http.HandlerFunc.ServeHTTP(0xc017eb9860, 0x7f264e115d00, 0xc01620aa48, 0xc009455c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc009857020, 0xc017fecc60, 0x73aefc0, 0xc01620aa48, 0xc009455c00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:52408]
I0516 00:39:41.977174  108888 healthz.go:170] healthz check etcd failed: etcd client connection not yet established
I0516 00:39:41.977210  108888 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0516 00:39:41.977221  108888 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0516 00:39:41.977244  108888 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0516 00:39:41.977252  108888 healthz.go:184] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0516 00:39:41.977412  108888 wrap.go:47] GET /healthz: (376.624µs) 500
goroutine 36363 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc012cf01c0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc012cf01c0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc010e16f80, 0x1f4)
net/http.Error(0x7f264e115d00, 0xc0167dc478, 0xc00c060600, 0x175, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f264e115d00, 0xc0167dc478, 0xc009697400)
net/http.HandlerFunc.ServeHTTP(0xc0110b4d80, 0x7f264e115d00, 0xc0167dc478, 0xc009697400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc010545040, 0x7f264e115d00, 0xc0167dc478, 0xc009697400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc017fe3110, 0x7f264e115d00, 0xc0167dc478, 0xc009697400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x438a649, 0xe, 0xc017e170e0, 0xc017fe3110, 0x7f264e115d00, 0xc0167dc478, 0xc009697400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f264e115d00, 0xc0167dc478, 0xc009697400)
net/http.HandlerFunc.ServeHTTP(0xc014a4cac0, 0x7f264e115d00, 0xc0167dc478, 0xc009697400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f264e115d00, 0xc0167dc478, 0xc009697400)
net/http.HandlerFunc.ServeHTTP(0xc017ff0810, 0x7f264e115d00, 0xc0167dc478, 0xc009697400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f264e115d00, 0xc0167dc478, 0xc009697400)
net/http.HandlerFunc.ServeHTTP(0xc014a4cb00, 0x7f264e115d00, 0xc0167dc478, 0xc009697400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f264e115d00, 0xc0167dc478, 0xc009697100)
net/http.HandlerFunc.ServeHTTP(0xc017eb9860, 0x7f264e115d00, 0xc0167dc478, 0xc009697100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00992a3c0, 0xc017fecc60, 0x73aefc0, 0xc0167dc478, 0xc009697100)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52408]
I0516 00:39:42.067109  108888 healthz.go:170] healthz check etcd failed: etcd client connection not yet established
I0516 00:39:42.067151  108888 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0516 00:39:42.067165  108888 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0516 00:39:42.067175  108888 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0516 00:39:42.067183  108888 healthz.go:184] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0516 00:39:42.067325  108888 wrap.go:47] GET /healthz: (340.551µs) 500
goroutine 36517 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0038324d0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0038324d0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc01081c2e0, 0x1f4)
net/http.Error(0x7f264e115d00, 0xc00b96d540, 0xc000655080, 0x175, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f264e115d00, 0xc00b96d540, 0xc007c70300)
net/http.HandlerFunc.ServeHTTP(0xc0110b4d80, 0x7f264e115d00, 0xc00b96d540, 0xc007c70300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc010545040, 0x7f264e115d00, 0xc00b96d540, 0xc007c70300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc017fe3110, 0x7f264e115d00, 0xc00b96d540, 0xc007c70300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x438a649, 0xe, 0xc017e170e0, 0xc017fe3110, 0x7f264e115d00, 0xc00b96d540, 0xc007c70300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f264e115d00, 0xc00b96d540, 0xc007c70300)
net/http.HandlerFunc.ServeHTTP(0xc014a4cac0, 0x7f264e115d00, 0xc00b96d540, 0xc007c70300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f264e115d00, 0xc00b96d540, 0xc007c70300)
net/http.HandlerFunc.ServeHTTP(0xc017ff0810, 0x7f264e115d00, 0xc00b96d540, 0xc007c70300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f264e115d00, 0xc00b96d540, 0xc007c70300)
net/http.HandlerFunc.ServeHTTP(0xc014a4cb00, 0x7f264e115d00, 0xc00b96d540, 0xc007c70300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f264e115d00, 0xc00b96d540, 0xc007c70200)
net/http.HandlerFunc.ServeHTTP(0xc017eb9860, 0x7f264e115d00, 0xc00b96d540, 0xc007c70200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc009637ce0, 0xc017fecc60, 0x73aefc0, 0xc00b96d540, 0xc007c70200)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:52408]
I0516 00:39:42.077154  108888 healthz.go:170] healthz check etcd failed: etcd client connection not yet established
I0516 00:39:42.077190  108888 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0516 00:39:42.077205  108888 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0516 00:39:42.077214  108888 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0516 00:39:42.077222  108888 healthz.go:184] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0516 00:39:42.077395  108888 wrap.go:47] GET /healthz: (361.837µs) 500
goroutine 36546 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc003655570, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc003655570, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc010cb7840, 0x1f4)
net/http.Error(0x7f264e115d00, 0xc011398148, 0xc000045380, 0x175, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f264e115d00, 0xc011398148, 0xc00736c300)
net/http.HandlerFunc.ServeHTTP(0xc0110b4d80, 0x7f264e115d00, 0xc011398148, 0xc00736c300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc010545040, 0x7f264e115d00, 0xc011398148, 0xc00736c300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc017fe3110, 0x7f264e115d00, 0xc011398148, 0xc00736c300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x438a649, 0xe, 0xc017e170e0, 0xc017fe3110, 0x7f264e115d00, 0xc011398148, 0xc00736c300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f264e115d00, 0xc011398148, 0xc00736c300)
net/http.HandlerFunc.ServeHTTP(0xc014a4cac0, 0x7f264e115d00, 0xc011398148, 0xc00736c300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f264e115d00, 0xc011398148, 0xc00736c300)
net/http.HandlerFunc.ServeHTTP(0xc017ff0810, 0x7f264e115d00, 0xc011398148, 0xc00736c300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f264e115d00, 0xc011398148, 0xc00736c300)
net/http.HandlerFunc.ServeHTTP(0xc014a4cb00, 0x7f264e115d00, 0xc011398148, 0xc00736c300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f264e115d00, 0xc011398148, 0xc00736c200)
net/http.HandlerFunc.ServeHTTP(0xc017eb9860, 0x7f264e115d00, 0xc011398148, 0xc00736c200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc0099ab4a0, 0xc017fecc60, 0x73aefc0, 0xc011398148, 0xc00736c200)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52408]
I0516 00:39:42.166976  108888 healthz.go:170] healthz check etcd failed: etcd client connection not yet established
I0516 00:39:42.167010  108888 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0516 00:39:42.167019  108888 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0516 00:39:42.167026  108888 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0516 00:39:42.167032  108888 healthz.go:184] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0516 00:39:42.167149  108888 wrap.go:47] GET /healthz: (328.937µs) 500
goroutine 36548 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc003655880, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc003655880, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc010cb7b60, 0x1f4)
net/http.Error(0x7f264e115d00, 0xc011398150, 0xc000045e00, 0x175, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f264e115d00, 0xc011398150, 0xc00736cd00)
net/http.HandlerFunc.ServeHTTP(0xc0110b4d80, 0x7f264e115d00, 0xc011398150, 0xc00736cd00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc010545040, 0x7f264e115d00, 0xc011398150, 0xc00736cd00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc017fe3110, 0x7f264e115d00, 0xc011398150, 0xc00736cd00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x438a649, 0xe, 0xc017e170e0, 0xc017fe3110, 0x7f264e115d00, 0xc011398150, 0xc00736cd00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f264e115d00, 0xc011398150, 0xc00736cd00)
net/http.HandlerFunc.ServeHTTP(0xc014a4cac0, 0x7f264e115d00, 0xc011398150, 0xc00736cd00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f264e115d00, 0xc011398150, 0xc00736cd00)
net/http.HandlerFunc.ServeHTTP(0xc017ff0810, 0x7f264e115d00, 0xc011398150, 0xc00736cd00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f264e115d00, 0xc011398150, 0xc00736cd00)
net/http.HandlerFunc.ServeHTTP(0xc014a4cb00, 0x7f264e115d00, 0xc011398150, 0xc00736cd00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f264e115d00, 0xc011398150, 0xc00736c600)
net/http.HandlerFunc.ServeHTTP(0xc017eb9860, 0x7f264e115d00, 0xc011398150, 0xc00736c600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc0099ab5c0, 0xc017fecc60, 0x73aefc0, 0xc011398150, 0xc00736c600)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:52408]
I0516 00:39:42.177133  108888 healthz.go:170] healthz check etcd failed: etcd client connection not yet established
I0516 00:39:42.177166  108888 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0516 00:39:42.177176  108888 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0516 00:39:42.177186  108888 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0516 00:39:42.177194  108888 healthz.go:184] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0516 00:39:42.177340  108888 wrap.go:47] GET /healthz: (327.941µs) 500
goroutine 36365 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc012cf0380, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc012cf0380, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc010e17200, 0x1f4)
net/http.Error(0x7f264e115d00, 0xc0167dc488, 0xc00c060d80, 0x175, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f264e115d00, 0xc0167dc488, 0xc009697f00)
net/http.HandlerFunc.ServeHTTP(0xc0110b4d80, 0x7f264e115d00, 0xc0167dc488, 0xc009697f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc010545040, 0x7f264e115d00, 0xc0167dc488, 0xc009697f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc017fe3110, 0x7f264e115d00, 0xc0167dc488, 0xc009697f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x438a649, 0xe, 0xc017e170e0, 0xc017fe3110, 0x7f264e115d00, 0xc0167dc488, 0xc009697f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f264e115d00, 0xc0167dc488, 0xc009697f00)
net/http.HandlerFunc.ServeHTTP(0xc014a4cac0, 0x7f264e115d00, 0xc0167dc488, 0xc009697f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f264e115d00, 0xc0167dc488, 0xc009697f00)
net/http.HandlerFunc.ServeHTTP(0xc017ff0810, 0x7f264e115d00, 0xc0167dc488, 0xc009697f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f264e115d00, 0xc0167dc488, 0xc009697f00)
net/http.HandlerFunc.ServeHTTP(0xc014a4cb00, 0x7f264e115d00, 0xc0167dc488, 0xc009697f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f264e115d00, 0xc0167dc488, 0xc009697c00)
net/http.HandlerFunc.ServeHTTP(0xc017eb9860, 0x7f264e115d00, 0xc0167dc488, 0xc009697c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00992a4e0, 0xc017fecc60, 0x73aefc0, 0xc0167dc488, 0xc009697c00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52408]
I0516 00:39:42.267017  108888 healthz.go:170] healthz check etcd failed: etcd client connection not yet established
I0516 00:39:42.267049  108888 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0516 00:39:42.267062  108888 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0516 00:39:42.267071  108888 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0516 00:39:42.267078  108888 healthz.go:184] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0516 00:39:42.267225  108888 wrap.go:47] GET /healthz: (366.385µs) 500
goroutine 36519 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0038327e0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0038327e0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc01081c4e0, 0x1f4)
net/http.Error(0x7f264e115d00, 0xc00b96d568, 0xc000655680, 0x175, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f264e115d00, 0xc00b96d568, 0xc007c71200)
net/http.HandlerFunc.ServeHTTP(0xc0110b4d80, 0x7f264e115d00, 0xc00b96d568, 0xc007c71200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc010545040, 0x7f264e115d00, 0xc00b96d568, 0xc007c71200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc017fe3110, 0x7f264e115d00, 0xc00b96d568, 0xc007c71200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x438a649, 0xe, 0xc017e170e0, 0xc017fe3110, 0x7f264e115d00, 0xc00b96d568, 0xc007c71200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f264e115d00, 0xc00b96d568, 0xc007c71200)
net/http.HandlerFunc.ServeHTTP(0xc014a4cac0, 0x7f264e115d00, 0xc00b96d568, 0xc007c71200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f264e115d00, 0xc00b96d568, 0xc007c71200)
net/http.HandlerFunc.ServeHTTP(0xc017ff0810, 0x7f264e115d00, 0xc00b96d568, 0xc007c71200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f264e115d00, 0xc00b96d568, 0xc007c71200)
net/http.HandlerFunc.ServeHTTP(0xc014a4cb00, 0x7f264e115d00, 0xc00b96d568, 0xc007c71200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f264e115d00, 0xc00b96d568, 0xc007c70f00)
net/http.HandlerFunc.ServeHTTP(0xc017eb9860, 0x7f264e115d00, 0xc00b96d568, 0xc007c70f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc0095e2060, 0xc017fecc60, 0x73aefc0, 0xc00b96d568, 0xc007c70f00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:52408]
I0516 00:39:42.277144  108888 healthz.go:170] healthz check etcd failed: etcd client connection not yet established
I0516 00:39:42.277179  108888 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0516 00:39:42.277190  108888 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0516 00:39:42.277199  108888 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0516 00:39:42.277206  108888 healthz.go:184] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0516 00:39:42.277344  108888 wrap.go:47] GET /healthz: (337.564µs) 500
goroutine 36550 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc003655a40, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc003655a40, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc010cb7c00, 0x1f4)
net/http.Error(0x7f264e115d00, 0xc011398158, 0xc007e30480, 0x175, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f264e115d00, 0xc011398158, 0xc00736dc00)
net/http.HandlerFunc.ServeHTTP(0xc0110b4d80, 0x7f264e115d00, 0xc011398158, 0xc00736dc00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc010545040, 0x7f264e115d00, 0xc011398158, 0xc00736dc00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc017fe3110, 0x7f264e115d00, 0xc011398158, 0xc00736dc00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x438a649, 0xe, 0xc017e170e0, 0xc017fe3110, 0x7f264e115d00, 0xc011398158, 0xc00736dc00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f264e115d00, 0xc011398158, 0xc00736dc00)
net/http.HandlerFunc.ServeHTTP(0xc014a4cac0, 0x7f264e115d00, 0xc011398158, 0xc00736dc00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f264e115d00, 0xc011398158, 0xc00736dc00)
net/http.HandlerFunc.ServeHTTP(0xc017ff0810, 0x7f264e115d00, 0xc011398158, 0xc00736dc00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f264e115d00, 0xc011398158, 0xc00736dc00)
net/http.HandlerFunc.ServeHTTP(0xc014a4cb00, 0x7f264e115d00, 0xc011398158, 0xc00736dc00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f264e115d00, 0xc011398158, 0xc00736d600)
net/http.HandlerFunc.ServeHTTP(0xc017eb9860, 0x7f264e115d00, 0xc011398158, 0xc00736d600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc0099ab680, 0xc017fecc60, 0x73aefc0, 0xc011398158, 0xc00736d600)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52408]
I0516 00:39:42.367076  108888 healthz.go:170] healthz check etcd failed: etcd client connection not yet established
I0516 00:39:42.367119  108888 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0516 00:39:42.367130  108888 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0516 00:39:42.367139  108888 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0516 00:39:42.367147  108888 healthz.go:184] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0516 00:39:42.367303  108888 wrap.go:47] GET /healthz: (398.12µs) 500
goroutine 36475 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0135a9500, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0135a9500, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc012d45a80, 0x1f4)
net/http.Error(0x7f264e115d00, 0xc015c24858, 0xc003058d80, 0x175, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f264e115d00, 0xc015c24858, 0xc00af62100)
net/http.HandlerFunc.ServeHTTP(0xc0110b4d80, 0x7f264e115d00, 0xc015c24858, 0xc00af62100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc010545040, 0x7f264e115d00, 0xc015c24858, 0xc00af62100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc017fe3110, 0x7f264e115d00, 0xc015c24858, 0xc00af62100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x438a649, 0xe, 0xc017e170e0, 0xc017fe3110, 0x7f264e115d00, 0xc015c24858, 0xc00af62100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f264e115d00, 0xc015c24858, 0xc00af62100)
net/http.HandlerFunc.ServeHTTP(0xc014a4cac0, 0x7f264e115d00, 0xc015c24858, 0xc00af62100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f264e115d00, 0xc015c24858, 0xc00af62100)
net/http.HandlerFunc.ServeHTTP(0xc017ff0810, 0x7f264e115d00, 0xc015c24858, 0xc00af62100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f264e115d00, 0xc015c24858, 0xc00af62100)
net/http.HandlerFunc.ServeHTTP(0xc014a4cb00, 0x7f264e115d00, 0xc015c24858, 0xc00af62100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f264e115d00, 0xc015c24858, 0xc01354bf00)
net/http.HandlerFunc.ServeHTTP(0xc017eb9860, 0x7f264e115d00, 0xc015c24858, 0xc01354bf00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00b083380, 0xc017fecc60, 0x73aefc0, 0xc015c24858, 0xc01354bf00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:52408]
I0516 00:39:42.377173  108888 healthz.go:170] healthz check etcd failed: etcd client connection not yet established
I0516 00:39:42.377213  108888 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0516 00:39:42.377225  108888 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0516 00:39:42.377234  108888 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0516 00:39:42.377241  108888 healthz.go:184] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0516 00:39:42.377480  108888 wrap.go:47] GET /healthz: (442.206µs) 500
goroutine 36521 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc003832af0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc003832af0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc01081c900, 0x1f4)
net/http.Error(0x7f264e115d00, 0xc00b96d590, 0xc000655e00, 0x175, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f264e115d00, 0xc00b96d590, 0xc00afc8400)
net/http.HandlerFunc.ServeHTTP(0xc0110b4d80, 0x7f264e115d00, 0xc00b96d590, 0xc00afc8400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc010545040, 0x7f264e115d00, 0xc00b96d590, 0xc00afc8400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc017fe3110, 0x7f264e115d00, 0xc00b96d590, 0xc00afc8400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x438a649, 0xe, 0xc017e170e0, 0xc017fe3110, 0x7f264e115d00, 0xc00b96d590, 0xc00afc8400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f264e115d00, 0xc00b96d590, 0xc00afc8400)
net/http.HandlerFunc.ServeHTTP(0xc014a4cac0, 0x7f264e115d00, 0xc00b96d590, 0xc00afc8400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f264e115d00, 0xc00b96d590, 0xc00afc8400)
net/http.HandlerFunc.ServeHTTP(0xc017ff0810, 0x7f264e115d00, 0xc00b96d590, 0xc00afc8400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f264e115d00, 0xc00b96d590, 0xc00afc8400)
net/http.HandlerFunc.ServeHTTP(0xc014a4cb00, 0x7f264e115d00, 0xc00b96d590, 0xc00afc8400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f264e115d00, 0xc00b96d590, 0xc00afc8300)
net/http.HandlerFunc.ServeHTTP(0xc017eb9860, 0x7f264e115d00, 0xc00b96d590, 0xc00afc8300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc0095e23c0, 0xc017fecc60, 0x73aefc0, 0xc00b96d590, 0xc00afc8300)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52408]
I0516 00:39:42.467015  108888 healthz.go:170] healthz check etcd failed: etcd client connection not yet established
I0516 00:39:42.467053  108888 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0516 00:39:42.467065  108888 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0516 00:39:42.467074  108888 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0516 00:39:42.467082  108888 healthz.go:184] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0516 00:39:42.467239  108888 wrap.go:47] GET /healthz: (382.831µs) 500
goroutine 36523 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc003832ee0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc003832ee0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc01081cb20, 0x1f4)
net/http.Error(0x7f264e115d00, 0xc00b96d598, 0xc012f94480, 0x175, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f264e115d00, 0xc00b96d598, 0xc00afc8a00)
net/http.HandlerFunc.ServeHTTP(0xc0110b4d80, 0x7f264e115d00, 0xc00b96d598, 0xc00afc8a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc010545040, 0x7f264e115d00, 0xc00b96d598, 0xc00afc8a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc017fe3110, 0x7f264e115d00, 0xc00b96d598, 0xc00afc8a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x438a649, 0xe, 0xc017e170e0, 0xc017fe3110, 0x7f264e115d00, 0xc00b96d598, 0xc00afc8a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f264e115d00, 0xc00b96d598, 0xc00afc8a00)
net/http.HandlerFunc.ServeHTTP(0xc014a4cac0, 0x7f264e115d00, 0xc00b96d598, 0xc00afc8a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f264e115d00, 0xc00b96d598, 0xc00afc8a00)
net/http.HandlerFunc.ServeHTTP(0xc017ff0810, 0x7f264e115d00, 0xc00b96d598, 0xc00afc8a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f264e115d00, 0xc00b96d598, 0xc00afc8a00)
net/http.HandlerFunc.ServeHTTP(0xc014a4cb00, 0x7f264e115d00, 0xc00b96d598, 0xc00afc8a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f264e115d00, 0xc00b96d598, 0xc00afc8900)
net/http.HandlerFunc.ServeHTTP(0xc017eb9860, 0x7f264e115d00, 0xc00b96d598, 0xc00afc8900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc0095e24e0, 0xc017fecc60, 0x73aefc0, 0xc00b96d598, 0xc00afc8900)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:52408]
I0516 00:39:42.477305  108888 healthz.go:170] healthz check etcd failed: etcd client connection not yet established
I0516 00:39:42.477342  108888 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0516 00:39:42.477397  108888 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0516 00:39:42.477412  108888 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0516 00:39:42.477425  108888 healthz.go:184] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0516 00:39:42.477588  108888 wrap.go:47] GET /healthz: (428.765µs) 500
goroutine 36552 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc003655ce0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc003655ce0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc010cb7f80, 0x1f4)
net/http.Error(0x7f264e115d00, 0xc011398180, 0xc007e30c00, 0x175, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f264e115d00, 0xc011398180, 0xc00af94400)
net/http.HandlerFunc.ServeHTTP(0xc0110b4d80, 0x7f264e115d00, 0xc011398180, 0xc00af94400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc010545040, 0x7f264e115d00, 0xc011398180, 0xc00af94400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc017fe3110, 0x7f264e115d00, 0xc011398180, 0xc00af94400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x438a649, 0xe, 0xc017e170e0, 0xc017fe3110, 0x7f264e115d00, 0xc011398180, 0xc00af94400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f264e115d00, 0xc011398180, 0xc00af94400)
net/http.HandlerFunc.ServeHTTP(0xc014a4cac0, 0x7f264e115d00, 0xc011398180, 0xc00af94400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f264e115d00, 0xc011398180, 0xc00af94400)
net/http.HandlerFunc.ServeHTTP(0xc017ff0810, 0x7f264e115d00, 0xc011398180, 0xc00af94400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f264e115d00, 0xc011398180, 0xc00af94400)
net/http.HandlerFunc.ServeHTTP(0xc014a4cb00, 0x7f264e115d00, 0xc011398180, 0xc00af94400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f264e115d00, 0xc011398180, 0xc00af94300)
net/http.HandlerFunc.ServeHTTP(0xc017eb9860, 0x7f264e115d00, 0xc011398180, 0xc00af94300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc0099ab860, 0xc017fecc60, 0x73aefc0, 0xc011398180, 0xc00af94300)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52408]
I0516 00:39:42.479176  108888 client.go:354] parsed scheme: ""
I0516 00:39:42.479204  108888 client.go:354] scheme "" not registered, fallback to default scheme
I0516 00:39:42.479250  108888 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0516 00:39:42.479323  108888 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0516 00:39:42.479743  108888 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0516 00:39:42.479809  108888 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0516 00:39:42.568425  108888 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0516 00:39:42.568456  108888 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0516 00:39:42.568467  108888 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0516 00:39:42.568477  108888 healthz.go:184] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0516 00:39:42.568657  108888 wrap.go:47] GET /healthz: (1.731852ms) 500
goroutine 36367 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc012cf04d0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc012cf04d0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc010e17480, 0x1f4)
net/http.Error(0x7f264e115d00, 0xc0167dc4b0, 0xc0105182c0, 0x160, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f264e115d00, 0xc0167dc4b0, 0xc00affa700)
net/http.HandlerFunc.ServeHTTP(0xc0110b4d80, 0x7f264e115d00, 0xc0167dc4b0, 0xc00affa700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc010545040, 0x7f264e115d00, 0xc0167dc4b0, 0xc00affa700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc017fe3110, 0x7f264e115d00, 0xc0167dc4b0, 0xc00affa700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x438a649, 0xe, 0xc017e170e0, 0xc017fe3110, 0x7f264e115d00, 0xc0167dc4b0, 0xc00affa700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f264e115d00, 0xc0167dc4b0, 0xc00affa700)
net/http.HandlerFunc.ServeHTTP(0xc014a4cac0, 0x7f264e115d00, 0xc0167dc4b0, 0xc00affa700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f264e115d00, 0xc0167dc4b0, 0xc00affa700)
net/http.HandlerFunc.ServeHTTP(0xc017ff0810, 0x7f264e115d00, 0xc0167dc4b0, 0xc00affa700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f264e115d00, 0xc0167dc4b0, 0xc00affa700)
net/http.HandlerFunc.ServeHTTP(0xc014a4cb00, 0x7f264e115d00, 0xc0167dc4b0, 0xc00affa700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f264e115d00, 0xc0167dc4b0, 0xc00affa600)
net/http.HandlerFunc.ServeHTTP(0xc017eb9860, 0x7f264e115d00, 0xc0167dc4b0, 0xc00affa600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00992a6c0, 0xc017fecc60, 0x73aefc0, 0xc0167dc4b0, 0xc00affa600)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:52408]
I0516 00:39:42.583836  108888 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0516 00:39:42.583868  108888 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0516 00:39:42.583879  108888 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0516 00:39:42.583887  108888 healthz.go:184] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0516 00:39:42.584084  108888 wrap.go:47] GET /healthz: (6.934361ms) 500
goroutine 36525 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc003833030, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc003833030, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc01081cc80, 0x1f4)
net/http.Error(0x7f264e115d00, 0xc00b96d5c0, 0xc013216dc0, 0x160, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f264e115d00, 0xc00b96d5c0, 0xc00afc9200)
net/http.HandlerFunc.ServeHTTP(0xc0110b4d80, 0x7f264e115d00, 0xc00b96d5c0, 0xc00afc9200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc010545040, 0x7f264e115d00, 0xc00b96d5c0, 0xc00afc9200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc017fe3110, 0x7f264e115d00, 0xc00b96d5c0, 0xc00afc9200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x438a649, 0xe, 0xc017e170e0, 0xc017fe3110, 0x7f264e115d00, 0xc00b96d5c0, 0xc00afc9200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f264e115d00, 0xc00b96d5c0, 0xc00afc9200)
net/http.HandlerFunc.ServeHTTP(0xc014a4cac0, 0x7f264e115d00, 0xc00b96d5c0, 0xc00afc9200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f264e115d00, 0xc00b96d5c0, 0xc00afc9200)
net/http.HandlerFunc.ServeHTTP(0xc017ff0810, 0x7f264e115d00, 0xc00b96d5c0, 0xc00afc9200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f264e115d00, 0xc00b96d5c0, 0xc00afc9200)
net/http.HandlerFunc.ServeHTTP(0xc014a4cb00, 0x7f264e115d00, 0xc00b96d5c0, 0xc00afc9200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f264e115d00, 0xc00b96d5c0, 0xc00afc9100)
net/http.HandlerFunc.ServeHTTP(0xc017eb9860, 0x7f264e115d00, 0xc00b96d5c0, 0xc00afc9100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc0095e2900, 0xc017fecc60, 0x73aefc0, 0xc00b96d5c0, 0xc00afc9100)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52408]
I0516 00:39:42.668278  108888 wrap.go:47] GET /apis/scheduling.k8s.io/v1beta1/priorityclasses/system-node-critical: (2.165975ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52408]
I0516 00:39:42.668463  108888 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0516 00:39:42.668483  108888 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0516 00:39:42.668493  108888 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0516 00:39:42.668501  108888 healthz.go:184] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0516 00:39:42.668656  108888 wrap.go:47] GET /healthz: (1.734202ms) 500
goroutine 36448 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00366e770, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00366e770, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc010d21ee0, 0x1f4)
net/http.Error(0x7f264e115d00, 0xc017bc65b0, 0xc000110c60, 0x160, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f264e115d00, 0xc017bc65b0, 0xc007d5a200)
net/http.HandlerFunc.ServeHTTP(0xc0110b4d80, 0x7f264e115d00, 0xc017bc65b0, 0xc007d5a200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc010545040, 0x7f264e115d00, 0xc017bc65b0, 0xc007d5a200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc017fe3110, 0x7f264e115d00, 0xc017bc65b0, 0xc007d5a200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x438a649, 0xe, 0xc017e170e0, 0xc017fe3110, 0x7f264e115d00, 0xc017bc65b0, 0xc007d5a200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f264e115d00, 0xc017bc65b0, 0xc007d5a200)
net/http.HandlerFunc.ServeHTTP(0xc014a4cac0, 0x7f264e115d00, 0xc017bc65b0, 0xc007d5a200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f264e115d00, 0xc017bc65b0, 0xc007d5a200)
net/http.HandlerFunc.ServeHTTP(0xc017ff0810, 0x7f264e115d00, 0xc017bc65b0, 0xc007d5a200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f264e115d00, 0xc017bc65b0, 0xc007d5a200)
net/http.HandlerFunc.ServeHTTP(0xc014a4cb00, 0x7f264e115d00, 0xc017bc65b0, 0xc007d5a200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f264e115d00, 0xc017bc65b0, 0xc0090d1f00)
net/http.HandlerFunc.ServeHTTP(0xc017eb9860, 0x7f264e115d00, 0xc017bc65b0, 0xc0090d1f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc0099a5440, 0xc017fecc60, 0x73aefc0, 0xc017bc65b0, 0xc0090d1f00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:52584]
I0516 00:39:42.669186  108888 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.826534ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52582]
I0516 00:39:42.669483  108888 wrap.go:47] GET /api/v1/namespaces/kube-system: (3.381639ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52404]
I0516 00:39:42.671361  108888 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.482949ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52404]
I0516 00:39:42.673335  108888 wrap.go:47] POST /apis/scheduling.k8s.io/v1beta1/priorityclasses: (3.634415ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52408]
I0516 00:39:42.673547  108888 storage_scheduling.go:119] created PriorityClass system-node-critical with value 2000001000
I0516 00:39:42.673627  108888 wrap.go:47] GET /api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication: (3.717379ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52582]
I0516 00:39:42.673655  108888 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-admin: (1.87496ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52404]
I0516 00:39:42.679251  108888 wrap.go:47] GET /apis/scheduling.k8s.io/v1beta1/priorityclasses/system-cluster-critical: (5.525556ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52408]
I0516 00:39:42.687259  108888 wrap.go:47] POST /api/v1/namespaces/kube-system/configmaps: (13.246415ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52582]
I0516 00:39:42.687618  108888 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/admin: (13.608924ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52584]
I0516 00:39:42.687743  108888 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0516 00:39:42.687758  108888 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0516 00:39:42.687767  108888 healthz.go:184] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/ca-registration ok
healthz check failed
I0516 00:39:42.687782  108888 wrap.go:47] POST /apis/scheduling.k8s.io/v1beta1/priorityclasses: (7.83577ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52408]
I0516 00:39:42.687941  108888 wrap.go:47] GET /healthz: (7.061931ms) 500
goroutine 36540 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc003663180, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc003663180, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc010b352c0, 0x1f4)
net/http.Error(0x7f264e115d00, 0xc01620aac0, 0xc002822580, 0x14b, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f264e115d00, 0xc01620aac0, 0xc0095d3300)
net/http.HandlerFunc.ServeHTTP(0xc0110b4d80, 0x7f264e115d00, 0xc01620aac0, 0xc0095d3300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc010545040, 0x7f264e115d00, 0xc01620aac0, 0xc0095d3300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc017fe3110, 0x7f264e115d00, 0xc01620aac0, 0xc0095d3300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x438a649, 0xe, 0xc017e170e0, 0xc017fe3110, 0x7f264e115d00, 0xc01620aac0, 0xc0095d3300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f264e115d00, 0xc01620aac0, 0xc0095d3300)
net/http.HandlerFunc.ServeHTTP(0xc014a4cac0, 0x7f264e115d00, 0xc01620aac0, 0xc0095d3300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f264e115d00, 0xc01620aac0, 0xc0095d3300)
net/http.HandlerFunc.ServeHTTP(0xc017ff0810, 0x7f264e115d00, 0xc01620aac0, 0xc0095d3300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f264e115d00, 0xc01620aac0, 0xc0095d3300)
net/http.HandlerFunc.ServeHTTP(0xc014a4cb00, 0x7f264e115d00, 0xc01620aac0, 0xc0095d3300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f264e115d00, 0xc01620aac0, 0xc0095d3200)
net/http.HandlerFunc.ServeHTTP(0xc017eb9860, 0x7f264e115d00, 0xc01620aac0, 0xc0095d3200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc009857e60, 0xc017fecc60, 0x73aefc0, 0xc01620aac0, 0xc0095d3200)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52586]
I0516 00:39:42.688020  108888 storage_scheduling.go:119] created PriorityClass system-cluster-critical with value 2000000000
I0516 00:39:42.688040  108888 storage_scheduling.go:128] all system priority classes are created successfully or already exist.
I0516 00:39:42.690281  108888 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-edit: (1.899512ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52408]
I0516 00:39:42.707148  108888 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/edit: (16.301833ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52408]
I0516 00:39:42.709016  108888 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-view: (1.400934ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52408]
I0516 00:39:42.710525  108888 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/view: (1.137372ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52408]
I0516 00:39:42.711984  108888 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:discovery: (895.801µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52408]
I0516 00:39:42.713127  108888 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/cluster-admin: (869.529µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52408]
I0516 00:39:42.715010  108888 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.549405ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52408]
I0516 00:39:42.715216  108888 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/cluster-admin
I0516 00:39:42.716646  108888 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:discovery: (1.05645ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52408]
I0516 00:39:42.718466  108888 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.502597ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52408]
I0516 00:39:42.718654  108888 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:discovery
I0516 00:39:42.719516  108888 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:basic-user: (741.101µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52408]
I0516 00:39:42.721479  108888 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.576375ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52408]
I0516 00:39:42.721684  108888 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:basic-user
I0516 00:39:42.722831  108888 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:public-info-viewer: (942.231µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52408]
I0516 00:39:42.724473  108888 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.294344ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52408]
I0516 00:39:42.724675  108888 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:public-info-viewer
I0516 00:39:42.725548  108888 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/admin: (724.926µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52408]
I0516 00:39:42.727188  108888 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.370127ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52408]
I0516 00:39:42.727390  108888 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/admin
I0516 00:39:42.728217  108888 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/edit: (694.617µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52408]
I0516 00:39:42.730452  108888 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.919989ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52408]
I0516 00:39:42.730712  108888 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/edit
I0516 00:39:42.731909  108888 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/view: (1.029671ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52408]
I0516 00:39:42.734565  108888 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.26663ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52408]
I0516 00:39:42.734808  108888 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/view
I0516 00:39:42.735980  108888 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-admin: (953.899µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52408]
I0516 00:39:42.737988  108888 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.646587ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52408]
I0516 00:39:42.738296  108888 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-admin
I0516 00:39:42.739401  108888 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-edit: (927.305µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52408]
I0516 00:39:42.741702  108888 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.831242ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52408]
I0516 00:39:42.741972  108888 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-edit
I0516 00:39:42.743001  108888 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-view: (889.285µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52408]
I0516 00:39:42.745064  108888 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.692358ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52408]
I0516 00:39:42.745339  108888 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-view
I0516 00:39:42.746518  108888 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:heapster: (917.635µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52408]
I0516 00:39:42.748737  108888 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.832075ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52408]
I0516 00:39:42.748906  108888 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:heapster
I0516 00:39:42.749966  108888 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node: (902.7µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52408]
I0516 00:39:42.753013  108888 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.323316ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52408]
I0516 00:39:42.753311  108888 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:node
I0516 00:39:42.754521  108888 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-problem-detector: (965.796µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52408]
I0516 00:39:42.756521  108888 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.628879ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52408]
I0516 00:39:42.756702  108888 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:node-problem-detector
I0516 00:39:42.757904  108888 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-proxier: (1.011265ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52408]
I0516 00:39:42.759718  108888 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.414139ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52408]
I0516 00:39:42.759972  108888 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:node-proxier
I0516 00:39:42.761072  108888 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kubelet-api-admin: (872.464µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52408]
I0516 00:39:42.764720  108888 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (3.286963ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52408]
I0516 00:39:42.764995  108888 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:kubelet-api-admin
I0516 00:39:42.766787  108888 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-bootstrapper: (1.50956ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52408]
I0516 00:39:42.768960  108888 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0516 00:39:42.768990  108888 healthz.go:184] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0516 00:39:42.769138  108888 wrap.go:47] GET /healthz: (2.245321ms) 500
goroutine 36708 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc003d9ee00, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc003d9ee00, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0107cbae0, 0x1f4)
net/http.Error(0x7f264e115d00, 0xc00b96d9a8, 0xc00335aa00, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f264e115d00, 0xc00b96d9a8, 0xc00627b700)
net/http.HandlerFunc.ServeHTTP(0xc0110b4d80, 0x7f264e115d00, 0xc00b96d9a8, 0xc00627b700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc010545040, 0x7f264e115d00, 0xc00b96d9a8, 0xc00627b700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc017fe3110, 0x7f264e115d00, 0xc00b96d9a8, 0xc00627b700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x438a649, 0xe, 0xc017e170e0, 0xc017fe3110, 0x7f264e115d00, 0xc00b96d9a8, 0xc00627b700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f264e115d00, 0xc00b96d9a8, 0xc00627b700)
net/http.HandlerFunc.ServeHTTP(0xc014a4cac0, 0x7f264e115d00, 0xc00b96d9a8, 0xc00627b700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f264e115d00, 0xc00b96d9a8, 0xc00627b700)
net/http.HandlerFunc.ServeHTTP(0xc017ff0810, 0x7f264e115d00, 0xc00b96d9a8, 0xc00627b700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f264e115d00, 0xc00b96d9a8, 0xc00627b700)
net/http.HandlerFunc.ServeHTTP(0xc014a4cb00, 0x7f264e115d00, 0xc00b96d9a8, 0xc00627b700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f264e115d00, 0xc00b96d9a8, 0xc00627b100)
net/http.HandlerFunc.ServeHTTP(0xc017eb9860, 0x7f264e115d00, 0xc00b96d9a8, 0xc00627b100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc007104a80, 0xc017fecc60, 0x73aefc0, 0xc00b96d9a8, 0xc00627b100)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:52582]
I0516 00:39:42.769698  108888 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.224134ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52408]
I0516 00:39:42.769852  108888 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:node-bootstrapper
I0516 00:39:42.771157  108888 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:auth-delegator: (1.083958ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52408]
I0516 00:39:42.773245  108888 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.633938ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52408]
I0516 00:39:42.773491  108888 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:auth-delegator
I0516 00:39:42.774725  108888 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-aggregator: (1.044237ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52408]
I0516 00:39:42.776351  108888 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.267112ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52408]
I0516 00:39:42.776587  108888 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:kube-aggregator
I0516 00:39:42.784549  108888 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0516 00:39:42.784584  108888 healthz.go:184] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0516 00:39:42.784775  108888 wrap.go:47] GET /healthz: (3.165362ms) 500
goroutine 36722 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc003c7e540, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc003c7e540, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc01072fa00, 0x1f4)
net/http.Error(0x7f264e115d00, 0xc01620b010, 0xc002c9c8c0, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f264e115d00, 0xc01620b010, 0xc005427400)
net/http.HandlerFunc.ServeHTTP(0xc0110b4d80, 0x7f264e115d00, 0xc01620b010, 0xc005427400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc010545040, 0x7f264e115d00, 0xc01620b010, 0xc005427400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc017fe3110, 0x7f264e115d00, 0xc01620b010, 0xc005427400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x438a649, 0xe, 0xc017e170e0, 0xc017fe3110, 0x7f264e115d00, 0xc01620b010, 0xc005427400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f264e115d00, 0xc01620b010, 0xc005427400)
net/http.HandlerFunc.ServeHTTP(0xc014a4cac0, 0x7f264e115d00, 0xc01620b010, 0xc005427400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f264e115d00, 0xc01620b010, 0xc005427400)
net/http.HandlerFunc.ServeHTTP(0xc017ff0810, 0x7f264e115d00, 0xc01620b010, 0xc005427400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f264e115d00, 0xc01620b010, 0xc005427400)
net/http.HandlerFunc.ServeHTTP(0xc014a4cb00, 0x7f264e115d00, 0xc01620b010, 0xc005427400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f264e115d00, 0xc01620b010, 0xc005427300)
net/http.HandlerFunc.ServeHTTP(0xc017eb9860, 0x7f264e115d00, 0xc01620b010, 0xc005427300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc006a98cc0, 0xc017fecc60, 0x73aefc0, 0xc01620b010, 0xc005427300)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52582]
I0516 00:39:42.801228  108888 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-controller-manager: (8.15328ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52408]
I0516 00:39:42.803745  108888 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.932947ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52408]
I0516 00:39:42.804172  108888 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:kube-controller-manager
I0516 00:39:42.806672  108888 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-scheduler: (2.207075ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52408]
I0516 00:39:42.810807  108888 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (3.614229ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52408]
I0516 00:39:42.811107  108888 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:kube-scheduler
I0516 00:39:42.812296  108888 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-dns: (1.012023ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52408]
I0516 00:39:42.814144  108888 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.508958ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52408]
I0516 00:39:42.814380  108888 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:kube-dns
I0516 00:39:42.815437  108888 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:persistent-volume-provisioner: (854.758µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52408]
I0516 00:39:42.817159  108888 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.375723ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52408]
I0516 00:39:42.817446  108888 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:persistent-volume-provisioner
I0516 00:39:42.818465  108888 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:csi-external-attacher: (847.175µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52408]
I0516 00:39:42.820250  108888 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.372394ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52408]
I0516 00:39:42.820837  108888 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:csi-external-attacher
I0516 00:39:42.821848  108888 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:certificates.k8s.io:certificatesigningrequests:nodeclient: (751.666µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52408]
I0516 00:39:42.823740  108888 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.573581ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52408]
I0516 00:39:42.823937  108888 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:certificates.k8s.io:certificatesigningrequests:nodeclient
I0516 00:39:42.831976  108888 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:certificates.k8s.io:certificatesigningrequests:selfnodeclient: (7.754884ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52408]
I0516 00:39:42.835082  108888 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.06794ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52408]
I0516 00:39:42.835295  108888 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:certificates.k8s.io:certificatesigningrequests:selfnodeclient
I0516 00:39:42.837084  108888 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:volume-scheduler: (1.548236ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52408]
I0516 00:39:42.839597  108888 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.824775ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52408]
I0516 00:39:42.839840  108888 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:volume-scheduler
I0516 00:39:42.841339  108888 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:csi-external-provisioner: (1.283935ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52408]
I0516 00:39:42.843701  108888 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.845307ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52408]
I0516 00:39:42.843907  108888 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:csi-external-provisioner
I0516 00:39:42.847742  108888 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:attachdetach-controller: (3.539946ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52408]
I0516 00:39:42.850289  108888 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.047941ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52408]
I0516 00:39:42.850696  108888 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:attachdetach-controller
I0516 00:39:42.851992  108888 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:clusterrole-aggregation-controller: (1.123157ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52408]
I0516 00:39:42.854382  108888 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.974153ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52408]
I0516 00:39:42.854705  108888 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:clusterrole-aggregation-controller
I0516 00:39:42.856069  108888 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:cronjob-controller: (1.184102ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52408]
I0516 00:39:42.858255  108888 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.582156ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52408]
I0516 00:39:42.858554  108888 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:cronjob-controller
I0516 00:39:42.859622  108888 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:daemon-set-controller: (865.239µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52408]
I0516 00:39:42.861718  108888 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.673347ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52408]
I0516 00:39:42.861982  108888 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:daemon-set-controller
I0516 00:39:42.863028  108888 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:deployment-controller: (852.931µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52408]
I0516 00:39:42.864806  108888 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.429408ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52408]
I0516 00:39:42.865090  108888 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:deployment-controller
I0516 00:39:42.866229  108888 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:disruption-controller: (879.339µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52408]
I0516 00:39:42.868312  108888 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.583974ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52408]
I0516 00:39:42.868539  108888 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:disruption-controller
I0516 00:39:42.868706  108888 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0516 00:39:42.868729  108888 healthz.go:184] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0516 00:39:42.868881  108888 wrap.go:47] GET /healthz: (2.182162ms) 500
goroutine 36751 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc004ba5110, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc004ba5110, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0104ddca0, 0x1f4)
net/http.Error(0x7f264e115d00, 0xc00c45c128, 0xc0048da500, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f264e115d00, 0xc00c45c128, 0xc00425c600)
net/http.HandlerFunc.ServeHTTP(0xc0110b4d80, 0x7f264e115d00, 0xc00c45c128, 0xc00425c600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc010545040, 0x7f264e115d00, 0xc00c45c128, 0xc00425c600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc017fe3110, 0x7f264e115d00, 0xc00c45c128, 0xc00425c600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x438a649, 0xe, 0xc017e170e0, 0xc017fe3110, 0x7f264e115d00, 0xc00c45c128, 0xc00425c600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f264e115d00, 0xc00c45c128, 0xc00425c600)
net/http.HandlerFunc.ServeHTTP(0xc014a4cac0, 0x7f264e115d00, 0xc00c45c128, 0xc00425c600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f264e115d00, 0xc00c45c128, 0xc00425c600)
net/http.HandlerFunc.ServeHTTP(0xc017ff0810, 0x7f264e115d00, 0xc00c45c128, 0xc00425c600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f264e115d00, 0xc00c45c128, 0xc00425c600)
net/http.HandlerFunc.ServeHTTP(0xc014a4cb00, 0x7f264e115d00, 0xc00c45c128, 0xc00425c600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f264e115d00, 0xc00c45c128, 0xc00425c500)
net/http.HandlerFunc.ServeHTTP(0xc017eb9860, 0x7f264e115d00, 0xc00c45c128, 0xc00425c500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc005dca3c0, 0xc017fecc60, 0x73aefc0, 0xc00c45c128, 0xc00425c500)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:52582]
I0516 00:39:42.869948  108888 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:endpoint-controller: (862.306µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52408]
I0516 00:39:42.872091  108888 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.8193ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52408]
I0516 00:39:42.872364  108888 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:endpoint-controller
I0516 00:39:42.873986  108888 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:expand-controller: (1.356586ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52408]
I0516 00:39:42.877636  108888 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0516 00:39:42.877661  108888 healthz.go:184] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0516 00:39:42.877805  108888 wrap.go:47] GET /healthz: (938.815µs) 500
goroutine 36779 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc004d028c0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc004d028c0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc01041ef40, 0x1f4)
net/http.Error(0x7f264e115d00, 0xc015c257d8, 0xc0030f6500, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f264e115d00, 0xc015c257d8, 0xc00413b700)
net/http.HandlerFunc.ServeHTTP(0xc0110b4d80, 0x7f264e115d00, 0xc015c257d8, 0xc00413b700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc010545040, 0x7f264e115d00, 0xc015c257d8, 0xc00413b700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc017fe3110, 0x7f264e115d00, 0xc015c257d8, 0xc00413b700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x438a649, 0xe, 0xc017e170e0, 0xc017fe3110, 0x7f264e115d00, 0xc015c257d8, 0xc00413b700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f264e115d00, 0xc015c257d8, 0xc00413b700)
net/http.HandlerFunc.ServeHTTP(0xc014a4cac0, 0x7f264e115d00, 0xc015c257d8, 0xc00413b700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f264e115d00, 0xc015c257d8, 0xc00413b700)
net/http.HandlerFunc.ServeHTTP(0xc017ff0810, 0x7f264e115d00, 0xc015c257d8, 0xc00413b700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f264e115d00, 0xc015c257d8, 0xc00413b700)
net/http.HandlerFunc.ServeHTTP(0xc014a4cb00, 0x7f264e115d00, 0xc015c257d8, 0xc00413b700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f264e115d00, 0xc015c257d8, 0xc00413b600)
net/http.HandlerFunc.ServeHTTP(0xc017eb9860, 0x7f264e115d00, 0xc015c257d8, 0xc00413b600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc005416de0, 0xc017fecc60, 0x73aefc0, 0xc015c257d8, 0xc00413b600)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52582]
I0516 00:39:42.879334  108888 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (4.925592ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52408]
I0516 00:39:42.879641  108888 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:expand-controller
I0516 00:39:42.880832  108888 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:generic-garbage-collector: (877.507µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52408]
I0516 00:39:42.883191  108888 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.975214ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52408]
I0516 00:39:42.883435  108888 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:generic-garbage-collector
I0516 00:39:42.884611  108888 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:horizontal-pod-autoscaler: (983.968µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52408]
I0516 00:39:42.886944  108888 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.901296ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52408]
I0516 00:39:42.887158  108888 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:horizontal-pod-autoscaler
I0516 00:39:42.888219  108888 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:job-controller: (877.207µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52408]
I0516 00:39:42.889950  108888 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.365654ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52408]
I0516 00:39:42.890198  108888 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:job-controller
I0516 00:39:42.891654  108888 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:namespace-controller: (945.236µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52408]
I0516 00:39:42.893649  108888 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.592898ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52408]
I0516 00:39:42.893862  108888 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:namespace-controller
I0516 00:39:42.894970  108888 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:node-controller: (889.248µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52408]
I0516 00:39:42.896753  108888 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.349757ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52408]
I0516 00:39:42.897040  108888 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:node-controller
I0516 00:39:42.898125  108888 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:persistent-volume-binder: (873.324µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52408]
I0516 00:39:42.899906  108888 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.408121ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52408]
I0516 00:39:42.900125  108888 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:persistent-volume-binder
I0516 00:39:42.901120  108888 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pod-garbage-collector: (844.733µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52408]
I0516 00:39:42.903297  108888 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.70129ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52408]
I0516 00:39:42.903479  108888 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:pod-garbage-collector
I0516 00:39:42.904513  108888 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:replicaset-controller: (827.113µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52408]
I0516 00:39:42.906934  108888 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.875505ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52408]
I0516 00:39:42.907161  108888 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:replicaset-controller
I0516 00:39:42.908302  108888 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:replication-controller: (921.536µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52408]
I0516 00:39:42.910481  108888 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.776497ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52408]
I0516 00:39:42.910717  108888 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:replication-controller
I0516 00:39:42.911802  108888 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:resourcequota-controller: (902.31µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52408]
I0516 00:39:42.913627  108888 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.37218ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52408]
I0516 00:39:42.913815  108888 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:resourcequota-controller
I0516 00:39:42.914903  108888 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:route-controller: (872.63µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52408]
I0516 00:39:42.916950  108888 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.567984ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52408]
I0516 00:39:42.917149  108888 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:route-controller
I0516 00:39:42.918325  108888 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:service-account-controller: (992.336µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52408]
I0516 00:39:42.920271  108888 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.536935ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52408]
I0516 00:39:42.920443  108888 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:service-account-controller
I0516 00:39:42.921552  108888 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:service-controller: (909.753µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52408]
I0516 00:39:42.924269  108888 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.366493ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52408]
I0516 00:39:42.924784  108888 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:service-controller
I0516 00:39:42.926165  108888 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:statefulset-controller: (1.133321ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52408]
I0516 00:39:42.929125  108888 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.210823ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52408]
I0516 00:39:42.929391  108888 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:statefulset-controller
I0516 00:39:42.931019  108888 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:ttl-controller: (1.415816ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52408]
I0516 00:39:42.933768  108888 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.392176ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52408]
I0516 00:39:42.934466  108888 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:ttl-controller
I0516 00:39:42.938136  108888 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:certificate-controller: (3.406284ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52408]
I0516 00:39:42.940625  108888 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.819351ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52408]
I0516 00:39:42.940882  108888 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:certificate-controller
I0516 00:39:42.942074  108888 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pvc-protection-controller: (849.798µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52408]
I0516 00:39:42.944478  108888 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.029026ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52408]
I0516 00:39:42.944677  108888 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:pvc-protection-controller
I0516 00:39:42.947464  108888 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pv-protection-controller: (1.179937ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52408]
I0516 00:39:42.968436  108888 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.090005ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52408]
I0516 00:39:42.968739  108888 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:pv-protection-controller
I0516 00:39:42.968750  108888 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0516 00:39:42.968852  108888 healthz.go:184] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0516 00:39:42.969049  108888 wrap.go:47] GET /healthz: (2.261757ms) 500
goroutine 36816 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc001d05260, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc001d05260, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc01010e900, 0x1f4)
net/http.Error(0x7f264e115d00, 0xc0113986f0, 0xc0048dac80, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f264e115d00, 0xc0113986f0, 0xc001011200)
net/http.HandlerFunc.ServeHTTP(0xc0110b4d80, 0x7f264e115d00, 0xc0113986f0, 0xc001011200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc010545040, 0x7f264e115d00, 0xc0113986f0, 0xc001011200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc017fe3110, 0x7f264e115d00, 0xc0113986f0, 0xc001011200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x438a649, 0xe, 0xc017e170e0, 0xc017fe3110, 0x7f264e115d00, 0xc0113986f0, 0xc001011200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f264e115d00, 0xc0113986f0, 0xc001011200)
net/http.HandlerFunc.ServeHTTP(0xc014a4cac0, 0x7f264e115d00, 0xc0113986f0, 0xc001011200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f264e115d00, 0xc0113986f0, 0xc001011200)
net/http.HandlerFunc.ServeHTTP(0xc017ff0810, 0x7f264e115d00, 0xc0113986f0, 0xc001011200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f264e115d00, 0xc0113986f0, 0xc001011200)
net/http.HandlerFunc.ServeHTTP(0xc014a4cb00, 0x7f264e115d00, 0xc0113986f0, 0xc001011200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f264e115d00, 0xc0113986f0, 0xc001011100)
net/http.HandlerFunc.ServeHTTP(0xc017eb9860, 0x7f264e115d00, 0xc0113986f0, 0xc001011100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc002601620, 0xc017fecc60, 0x73aefc0, 0xc0113986f0, 0xc001011100)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:52582]
I0516 00:39:42.978301  108888 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0516 00:39:42.978334  108888 healthz.go:184] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0516 00:39:42.978494  108888 wrap.go:47] GET /healthz: (1.391469ms) 500
goroutine 36859 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc004d03e30, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc004d03e30, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0101a99e0, 0x1f4)
net/http.Error(0x7f264e115d00, 0xc015c25bc8, 0xc002c9d400, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f264e115d00, 0xc015c25bc8, 0xc001713600)
net/http.HandlerFunc.ServeHTTP(0xc0110b4d80, 0x7f264e115d00, 0xc015c25bc8, 0xc001713600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc010545040, 0x7f264e115d00, 0xc015c25bc8, 0xc001713600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc017fe3110, 0x7f264e115d00, 0xc015c25bc8, 0xc001713600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x438a649, 0xe, 0xc017e170e0, 0xc017fe3110, 0x7f264e115d00, 0xc015c25bc8, 0xc001713600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f264e115d00, 0xc015c25bc8, 0xc001713600)
net/http.HandlerFunc.ServeHTTP(0xc014a4cac0, 0x7f264e115d00, 0xc015c25bc8, 0xc001713600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f264e115d00, 0xc015c25bc8, 0xc001713600)
net/http.HandlerFunc.ServeHTTP(0xc017ff0810, 0x7f264e115d00, 0xc015c25bc8, 0xc001713600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f264e115d00, 0xc015c25bc8, 0xc001713600)
net/http.HandlerFunc.ServeHTTP(0xc014a4cb00, 0x7f264e115d00, 0xc015c25bc8, 0xc001713600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f264e115d00, 0xc015c25bc8, 0xc001713500)
net/http.HandlerFunc.ServeHTTP(0xc017eb9860, 0x7f264e115d00, 0xc015c25bc8, 0xc001713500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc007606420, 0xc017fecc60, 0x73aefc0, 0xc015c25bc8, 0xc001713500)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52582]
I0516 00:39:42.987601  108888 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/cluster-admin: (1.276621ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52582]
I0516 00:39:43.008727  108888 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.350011ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52582]
I0516 00:39:43.009837  108888 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/cluster-admin
I0516 00:39:43.028074  108888 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:discovery: (1.696613ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52582]
I0516 00:39:43.051100  108888 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (4.706125ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52582]
I0516 00:39:43.051607  108888 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:discovery
I0516 00:39:43.067428  108888 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0516 00:39:43.067510  108888 healthz.go:184] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0516 00:39:43.067702  108888 wrap.go:47] GET /healthz: (975.629µs) 500
goroutine 36898 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0027d0a80, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0027d0a80, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc01003f860, 0x1f4)
net/http.Error(0x7f264e115d00, 0xc00c45cfb0, 0xc004078dc0, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f264e115d00, 0xc00c45cfb0, 0xc002c8ab00)
net/http.HandlerFunc.ServeHTTP(0xc0110b4d80, 0x7f264e115d00, 0xc00c45cfb0, 0xc002c8ab00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc010545040, 0x7f264e115d00, 0xc00c45cfb0, 0xc002c8ab00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc017fe3110, 0x7f264e115d00, 0xc00c45cfb0, 0xc002c8ab00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x438a649, 0xe, 0xc017e170e0, 0xc017fe3110, 0x7f264e115d00, 0xc00c45cfb0, 0xc002c8ab00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f264e115d00, 0xc00c45cfb0, 0xc002c8ab00)
net/http.HandlerFunc.ServeHTTP(0xc014a4cac0, 0x7f264e115d00, 0xc00c45cfb0, 0xc002c8ab00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f264e115d00, 0xc00c45cfb0, 0xc002c8ab00)
net/http.HandlerFunc.ServeHTTP(0xc017ff0810, 0x7f264e115d00, 0xc00c45cfb0, 0xc002c8ab00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f264e115d00, 0xc00c45cfb0, 0xc002c8ab00)
net/http.HandlerFunc.ServeHTTP(0xc014a4cb00, 0x7f264e115d00, 0xc00c45cfb0, 0xc002c8ab00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f264e115d00, 0xc00c45cfb0, 0xc002c8aa00)
net/http.HandlerFunc.ServeHTTP(0xc017eb9860, 0x7f264e115d00, 0xc00c45cfb0, 0xc002c8aa00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc004910a80, 0xc017fecc60, 0x73aefc0, 0xc00c45cfb0, 0xc002c8aa00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:52408]
I0516 00:39:43.067710  108888 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:basic-user: (1.411542ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52582]
I0516 00:39:43.078351  108888 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0516 00:39:43.078385  108888 healthz.go:184] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0516 00:39:43.078603  108888 wrap.go:47] GET /healthz: (1.072079ms) 500
goroutine 36864 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00287c540, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00287c540, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc010016660, 0x1f4)
net/http.Error(0x7f264e115d00, 0xc015c25cf0, 0xc0040792c0, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f264e115d00, 0xc015c25cf0, 0xc002d8c500)
net/http.HandlerFunc.ServeHTTP(0xc0110b4d80, 0x7f264e115d00, 0xc015c25cf0, 0xc002d8c500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc010545040, 0x7f264e115d00, 0xc015c25cf0, 0xc002d8c500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc017fe3110, 0x7f264e115d00, 0xc015c25cf0, 0xc002d8c500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x438a649, 0xe, 0xc017e170e0, 0xc017fe3110, 0x7f264e115d00, 0xc015c25cf0, 0xc002d8c500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f264e115d00, 0xc015c25cf0, 0xc002d8c500)
net/http.HandlerFunc.ServeHTTP(0xc014a4cac0, 0x7f264e115d00, 0xc015c25cf0, 0xc002d8c500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f264e115d00, 0xc015c25cf0, 0xc002d8c500)
net/http.HandlerFunc.ServeHTTP(0xc017ff0810, 0x7f264e115d00, 0xc015c25cf0, 0xc002d8c500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f264e115d00, 0xc015c25cf0, 0xc002d8c500)
net/http.HandlerFunc.ServeHTTP(0xc014a4cb00, 0x7f264e115d00, 0xc015c25cf0, 0xc002d8c500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f264e115d00, 0xc015c25cf0, 0xc002d8c400)
net/http.HandlerFunc.ServeHTTP(0xc017eb9860, 0x7f264e115d00, 0xc015c25cf0, 0xc002d8c400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc008139020, 0xc017fecc60, 0x73aefc0, 0xc015c25cf0, 0xc002d8c400)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52582]
I0516 00:39:43.090789  108888 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.124802ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52582]
I0516 00:39:43.091093  108888 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:basic-user
I0516 00:39:43.112103  108888 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:public-info-viewer: (1.494084ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52582]
I0516 00:39:43.129313  108888 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.984062ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52582]
I0516 00:39:43.129634  108888 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:public-info-viewer
I0516 00:39:43.147751  108888 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:node-proxier: (1.399811ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52582]
I0516 00:39:43.168705  108888 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.320651ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52582]
I0516 00:39:43.168747  108888 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0516 00:39:43.168899  108888 healthz.go:184] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0516 00:39:43.169093  108888 wrap.go:47] GET /healthz: (2.242875ms) 500
goroutine 36930 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00287c770, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00287c770, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc010016b40, 0x1f4)
net/http.Error(0x7f264e115d00, 0xc015c25d78, 0xc0038d7400, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f264e115d00, 0xc015c25d78, 0xc002d8d100)
net/http.HandlerFunc.ServeHTTP(0xc0110b4d80, 0x7f264e115d00, 0xc015c25d78, 0xc002d8d100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc010545040, 0x7f264e115d00, 0xc015c25d78, 0xc002d8d100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc017fe3110, 0x7f264e115d00, 0xc015c25d78, 0xc002d8d100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x438a649, 0xe, 0xc017e170e0, 0xc017fe3110, 0x7f264e115d00, 0xc015c25d78, 0xc002d8d100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f264e115d00, 0xc015c25d78, 0xc002d8d100)
net/http.HandlerFunc.ServeHTTP(0xc014a4cac0, 0x7f264e115d00, 0xc015c25d78, 0xc002d8d100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f264e115d00, 0xc015c25d78, 0xc002d8d100)
net/http.HandlerFunc.ServeHTTP(0xc017ff0810, 0x7f264e115d00, 0xc015c25d78, 0xc002d8d100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f264e115d00, 0xc015c25d78, 0xc002d8d100)
net/http.HandlerFunc.ServeHTTP(0xc014a4cb00, 0x7f264e115d00, 0xc015c25d78, 0xc002d8d100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f264e115d00, 0xc015c25d78, 0xc002d8d000)
net/http.HandlerFunc.ServeHTTP(0xc017eb9860, 0x7f264e115d00, 0xc015c25d78, 0xc002d8d000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc0086f1740, 0xc017fecc60, 0x73aefc0, 0xc015c25d78, 0xc002d8d000)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:52408]
I0516 00:39:43.169132  108888 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:node-proxier
I0516 00:39:43.178545  108888 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0516 00:39:43.178676  108888 healthz.go:184] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0516 00:39:43.178870  108888 wrap.go:47] GET /healthz: (1.634301ms) 500
goroutine 36932 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00287caf0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00287caf0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc010016fa0, 0x1f4)
net/http.Error(0x7f264e115d00, 0xc015c25df0, 0xc0048db2c0, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f264e115d00, 0xc015c25df0, 0xc002d8d700)
net/http.HandlerFunc.ServeHTTP(0xc0110b4d80, 0x7f264e115d00, 0xc015c25df0, 0xc002d8d700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc010545040, 0x7f264e115d00, 0xc015c25df0, 0xc002d8d700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc017fe3110, 0x7f264e115d00, 0xc015c25df0, 0xc002d8d700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x438a649, 0xe, 0xc017e170e0, 0xc017fe3110, 0x7f264e115d00, 0xc015c25df0, 0xc002d8d700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f264e115d00, 0xc015c25df0, 0xc002d8d700)
net/http.HandlerFunc.ServeHTTP(0xc014a4cac0, 0x7f264e115d00, 0xc015c25df0, 0xc002d8d700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f264e115d00, 0xc015c25df0, 0xc002d8d700)
net/http.HandlerFunc.ServeHTTP(0xc017ff0810, 0x7f264e115d00, 0xc015c25df0, 0xc002d8d700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f264e115d00, 0xc015c25df0, 0xc002d8d700)
net/http.HandlerFunc.ServeHTTP(0xc014a4cb00, 0x7f264e115d00, 0xc015c25df0, 0xc002d8d700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f264e115d00, 0xc015c25df0, 0xc002d8d400)
net/http.HandlerFunc.ServeHTTP(0xc017eb9860, 0x7f264e115d00, 0xc015c25df0, 0xc002d8d400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc008a85860, 0xc017fecc60, 0x73aefc0, 0xc015c25df0, 0xc002d8d400)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52408]
I0516 00:39:43.188273  108888 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-controller-manager: (1.921585ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52408]
I0516 00:39:43.208296  108888 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.987648ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52408]
I0516 00:39:43.208581  108888 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-controller-manager
I0516 00:39:43.227910  108888 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-dns: (1.577235ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52408]
I0516 00:39:43.251552  108888 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.536541ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52408]
I0516 00:39:43.252510  108888 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-dns
I0516 00:39:43.267366  108888 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-scheduler: (1.050916ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52408]
I0516 00:39:43.274129  108888 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0516 00:39:43.274226  108888 healthz.go:184] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0516 00:39:43.274424  108888 wrap.go:47] GET /healthz: (7.606999ms) 500
goroutine 36938 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00287cfc0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00287cfc0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00feee340, 0x1f4)
net/http.Error(0x7f264e115d00, 0xc015c25eb8, 0xc002c9da40, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f264e115d00, 0xc015c25eb8, 0xc002f1c700)
net/http.HandlerFunc.ServeHTTP(0xc0110b4d80, 0x7f264e115d00, 0xc015c25eb8, 0xc002f1c700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc010545040, 0x7f264e115d00, 0xc015c25eb8, 0xc002f1c700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc017fe3110, 0x7f264e115d00, 0xc015c25eb8, 0xc002f1c700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x438a649, 0xe, 0xc017e170e0, 0xc017fe3110, 0x7f264e115d00, 0xc015c25eb8, 0xc002f1c700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f264e115d00, 0xc015c25eb8, 0xc002f1c700)
net/http.HandlerFunc.ServeHTTP(0xc014a4cac0, 0x7f264e115d00, 0xc015c25eb8, 0xc002f1c700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f264e115d00, 0xc015c25eb8, 0xc002f1c700)
net/http.HandlerFunc.ServeHTTP(0xc017ff0810, 0x7f264e115d00, 0xc015c25eb8, 0xc002f1c700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f264e115d00, 0xc015c25eb8, 0xc002f1c700)
net/http.HandlerFunc.ServeHTTP(0xc014a4cb00, 0x7f264e115d00, 0xc015c25eb8, 0xc002f1c700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f264e115d00, 0xc015c25eb8, 0xc002f1c600)
net/http.HandlerFunc.ServeHTTP(0xc017eb9860, 0x7f264e115d00, 0xc015c25eb8, 0xc002f1c600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc008ea7440, 0xc017fecc60, 0x73aefc0, 0xc015c25eb8, 0xc002f1c600)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:52582]
I0516 00:39:43.278036  108888 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0516 00:39:43.278142  108888 healthz.go:184] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0516 00:39:43.278353  108888 wrap.go:47] GET /healthz: (1.397437ms) 500
goroutine 36947 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc004a450a0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc004a450a0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00fefa6c0, 0x1f4)
net/http.Error(0x7f264e115d00, 0xc0167dd2c0, 0xc00396b040, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f264e115d00, 0xc0167dd2c0, 0xc0031c0100)
net/http.HandlerFunc.ServeHTTP(0xc0110b4d80, 0x7f264e115d00, 0xc0167dd2c0, 0xc0031c0100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc010545040, 0x7f264e115d00, 0xc0167dd2c0, 0xc0031c0100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc017fe3110, 0x7f264e115d00, 0xc0167dd2c0, 0xc0031c0100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x438a649, 0xe, 0xc017e170e0, 0xc017fe3110, 0x7f264e115d00, 0xc0167dd2c0, 0xc0031c0100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f264e115d00, 0xc0167dd2c0, 0xc0031c0100)
net/http.HandlerFunc.ServeHTTP(0xc014a4cac0, 0x7f264e115d00, 0xc0167dd2c0, 0xc0031c0100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f264e115d00, 0xc0167dd2c0, 0xc0031c0100)
net/http.HandlerFunc.ServeHTTP(0xc017ff0810, 0x7f264e115d00, 0xc0167dd2c0, 0xc0031c0100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f264e115d00, 0xc0167dd2c0, 0xc0031c0100)
net/http.HandlerFunc.ServeHTTP(0xc014a4cb00, 0x7f264e115d00, 0xc0167dd2c0, 0xc0031c0100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f264e115d00, 0xc0167dd2c0, 0xc0031c0000)
net/http.HandlerFunc.ServeHTTP(0xc017eb9860, 0x7f264e115d00, 0xc0167dd2c0, 0xc0031c0000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc0090147e0, 0xc017fecc60, 0x73aefc0, 0xc0167dd2c0, 0xc0031c0000)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52582]
I0516 00:39:43.290749  108888 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.919949ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52582]
I0516 00:39:43.291042  108888 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-scheduler
I0516 00:39:43.307782  108888 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:volume-scheduler: (1.341581ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52582]
I0516 00:39:43.329054  108888 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.643308ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52582]
I0516 00:39:43.332864  108888 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:volume-scheduler
I0516 00:39:43.358245  108888 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:node: (2.180849ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52582]
I0516 00:39:43.368321  108888 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.680401ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52582]
I0516 00:39:43.368627  108888 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:node
I0516 00:39:43.369374  108888 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0516 00:39:43.369406  108888 healthz.go:184] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0516 00:39:43.369557  108888 wrap.go:47] GET /healthz: (2.815711ms) 500
goroutine 36888 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00297ca10, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00297ca10, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00fecc200, 0x1f4)
net/http.Error(0x7f264e115d00, 0xc011398d30, 0xc004079900, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f264e115d00, 0xc011398d30, 0xc002c2fc00)
net/http.HandlerFunc.ServeHTTP(0xc0110b4d80, 0x7f264e115d00, 0xc011398d30, 0xc002c2fc00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc010545040, 0x7f264e115d00, 0xc011398d30, 0xc002c2fc00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc017fe3110, 0x7f264e115d00, 0xc011398d30, 0xc002c2fc00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x438a649, 0xe, 0xc017e170e0, 0xc017fe3110, 0x7f264e115d00, 0xc011398d30, 0xc002c2fc00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f264e115d00, 0xc011398d30, 0xc002c2fc00)
net/http.HandlerFunc.ServeHTTP(0xc014a4cac0, 0x7f264e115d00, 0xc011398d30, 0xc002c2fc00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f264e115d00, 0xc011398d30, 0xc002c2fc00)
net/http.HandlerFunc.ServeHTTP(0xc017ff0810, 0x7f264e115d00, 0xc011398d30, 0xc002c2fc00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f264e115d00, 0xc011398d30, 0xc002c2fc00)
net/http.HandlerFunc.ServeHTTP(0xc014a4cb00, 0x7f264e115d00, 0xc011398d30, 0xc002c2fc00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f264e115d00, 0xc011398d30, 0xc002c2fb00)
net/http.HandlerFunc.ServeHTTP(0xc017eb9860, 0x7f264e115d00, 0xc011398d30, 0xc002c2fb00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc009ba0120, 0xc017fecc60, 0x73aefc0, 0xc011398d30, 0xc002c2fb00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:52408]
I0516 00:39:43.378312  108888 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0516 00:39:43.378355  108888 healthz.go:184] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0516 00:39:43.378566  108888 wrap.go:47] GET /healthz: (1.515117ms) 500
goroutine 36604 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0028f52d0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0028f52d0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00feeb300, 0x1f4)
net/http.Error(0x7f264e115d00, 0xc017bc69d8, 0xc002da8280, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f264e115d00, 0xc017bc69d8, 0xc002e5b000)
net/http.HandlerFunc.ServeHTTP(0xc0110b4d80, 0x7f264e115d00, 0xc017bc69d8, 0xc002e5b000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc010545040, 0x7f264e115d00, 0xc017bc69d8, 0xc002e5b000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc017fe3110, 0x7f264e115d00, 0xc017bc69d8, 0xc002e5b000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x438a649, 0xe, 0xc017e170e0, 0xc017fe3110, 0x7f264e115d00, 0xc017bc69d8, 0xc002e5b000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f264e115d00, 0xc017bc69d8, 0xc002e5b000)
net/http.HandlerFunc.ServeHTTP(0xc014a4cac0, 0x7f264e115d00, 0xc017bc69d8, 0xc002e5b000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f264e115d00, 0xc017bc69d8, 0xc002e5b000)
net/http.HandlerFunc.ServeHTTP(0xc017ff0810, 0x7f264e115d00, 0xc017bc69d8, 0xc002e5b000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f264e115d00, 0xc017bc69d8, 0xc002e5b000)
net/http.HandlerFunc.ServeHTTP(0xc014a4cb00, 0x7f264e115d00, 0xc017bc69d8, 0xc002e5b000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f264e115d00, 0xc017bc69d8, 0xc002e5ae00)
net/http.HandlerFunc.ServeHTTP(0xc017eb9860, 0x7f264e115d00, 0xc017bc69d8, 0xc002e5ae00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc008f829c0, 0xc017fecc60, 0x73aefc0, 0xc017bc69d8, 0xc002e5ae00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52408]
I0516 00:39:43.387718  108888 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:attachdetach-controller: (1.41601ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52408]
I0516 00:39:43.412827  108888 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (5.088716ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52408]
I0516 00:39:43.413093  108888 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:attachdetach-controller
I0516 00:39:43.429318  108888 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:clusterrole-aggregation-controller: (1.916098ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52408]
I0516 00:39:43.449400  108888 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.073388ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52408]
I0516 00:39:43.449720  108888 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:clusterrole-aggregation-controller
I0516 00:39:43.468817  108888 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0516 00:39:43.468852  108888 healthz.go:184] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0516 00:39:43.469048  108888 wrap.go:47] GET /healthz: (1.027206ms) 500
goroutine 36965 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00276a690, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00276a690, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0100b6560, 0x1f4)
net/http.Error(0x7f264e115d00, 0xc017f30e18, 0xc0030f6b40, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f264e115d00, 0xc017f30e18, 0xc0033d8500)
net/http.HandlerFunc.ServeHTTP(0xc0110b4d80, 0x7f264e115d00, 0xc017f30e18, 0xc0033d8500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc010545040, 0x7f264e115d00, 0xc017f30e18, 0xc0033d8500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc017fe3110, 0x7f264e115d00, 0xc017f30e18, 0xc0033d8500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x438a649, 0xe, 0xc017e170e0, 0xc017fe3110, 0x7f264e115d00, 0xc017f30e18, 0xc0033d8500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f264e115d00, 0xc017f30e18, 0xc0033d8500)
net/http.HandlerFunc.ServeHTTP(0xc014a4cac0, 0x7f264e115d00, 0xc017f30e18, 0xc0033d8500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f264e115d00, 0xc017f30e18, 0xc0033d8500)
net/http.HandlerFunc.ServeHTTP(0xc017ff0810, 0x7f264e115d00, 0xc017f30e18, 0xc0033d8500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f264e115d00, 0xc017f30e18, 0xc0033d8500)
net/http.HandlerFunc.ServeHTTP(0xc014a4cb00, 0x7f264e115d00, 0xc017f30e18, 0xc0033d8500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f264e115d00, 0xc017f30e18, 0xc0033d8400)
net/http.HandlerFunc.ServeHTTP(0xc017eb9860, 0x7f264e115d00, 0xc017f30e18, 0xc0033d8400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc005a69c80, 0xc017fecc60, 0x73aefc0, 0xc017f30e18, 0xc0033d8400)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:52582]
I0516 00:39:43.469477  108888 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:cronjob-controller: (2.572265ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52408]
I0516 00:39:43.477950  108888 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0516 00:39:43.477980  108888 healthz.go:184] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0516 00:39:43.478203  108888 wrap.go:47] GET /healthz: (1.119654ms) 500
goroutine 36895 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00297d030, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00297d030, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00fecd3c0, 0x1f4)
net/http.Error(0x7f264e115d00, 0xc011398e40, 0xc003228a00, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f264e115d00, 0xc011398e40, 0xc00323b100)
net/http.HandlerFunc.ServeHTTP(0xc0110b4d80, 0x7f264e115d00, 0xc011398e40, 0xc00323b100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc010545040, 0x7f264e115d00, 0xc011398e40, 0xc00323b100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc017fe3110, 0x7f264e115d00, 0xc011398e40, 0xc00323b100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x438a649, 0xe, 0xc017e170e0, 0xc017fe3110, 0x7f264e115d00, 0xc011398e40, 0xc00323b100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f264e115d00, 0xc011398e40, 0xc00323b100)
net/http.HandlerFunc.ServeHTTP(0xc014a4cac0, 0x7f264e115d00, 0xc011398e40, 0xc00323b100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f264e115d00, 0xc011398e40, 0xc00323b100)
net/http.HandlerFunc.ServeHTTP(0xc017ff0810, 0x7f264e115d00, 0xc011398e40, 0xc00323b100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f264e115d00, 0xc011398e40, 0xc00323b100)
net/http.HandlerFunc.ServeHTTP(0xc014a4cb00, 0x7f264e115d00, 0xc011398e40, 0xc00323b100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f264e115d00, 0xc011398e40, 0xc00323b000)
net/http.HandlerFunc.ServeHTTP(0xc017eb9860, 0x7f264e115d00, 0xc011398e40, 0xc00323b000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc009ba11a0, 0xc017fecc60, 0x73aefc0, 0xc011398e40, 0xc00323b000)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52408]
I0516 00:39:43.488512  108888 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.187822ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52408]
I0516 00:39:43.488745  108888 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:cronjob-controller
I0516 00:39:43.516324  108888 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:daemon-set-controller: (6.218338ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52408]
I0516 00:39:43.528562  108888 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.270946ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52408]
I0516 00:39:43.528853  108888 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:daemon-set-controller
I0516 00:39:43.547828  108888 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:deployment-controller: (1.49195ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52408]
I0516 00:39:43.568282  108888 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0516 00:39:43.568318  108888 healthz.go:184] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0516 00:39:43.568482  108888 wrap.go:47] GET /healthz: (1.656526ms) 500
goroutine 36847 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0027277a0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0027277a0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc010188860, 0x1f4)
net/http.Error(0x7f264e115d00, 0xc016144b40, 0xc00335b180, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f264e115d00, 0xc016144b40, 0xc003516000)
net/http.HandlerFunc.ServeHTTP(0xc0110b4d80, 0x7f264e115d00, 0xc016144b40, 0xc003516000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc010545040, 0x7f264e115d00, 0xc016144b40, 0xc003516000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc017fe3110, 0x7f264e115d00, 0xc016144b40, 0xc003516000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x438a649, 0xe, 0xc017e170e0, 0xc017fe3110, 0x7f264e115d00, 0xc016144b40, 0xc003516000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f264e115d00, 0xc016144b40, 0xc003516000)
net/http.HandlerFunc.ServeHTTP(0xc014a4cac0, 0x7f264e115d00, 0xc016144b40, 0xc003516000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f264e115d00, 0xc016144b40, 0xc003516000)
net/http.HandlerFunc.ServeHTTP(0xc017ff0810, 0x7f264e115d00, 0xc016144b40, 0xc003516000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f264e115d00, 0xc016144b40, 0xc003516000)
net/http.HandlerFunc.ServeHTTP(0xc014a4cb00, 0x7f264e115d00, 0xc016144b40, 0xc003516000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f264e115d00, 0xc016144b40, 0xc003553f00)
net/http.HandlerFunc.ServeHTTP(0xc017eb9860, 0x7f264e115d00, 0xc016144b40, 0xc003553f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc0052e6b40, 0xc017fecc60, 0x73aefc0, 0xc016144b40, 0xc003553f00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:52582]
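The 500 above is the apiserver's verbose /healthz response: every registered check is listed as [+] ok or [-] failed, and the endpoint keeps returning 500 until the poststarthook/rbac/bootstrap-roles hook finishes. A hypothetical readiness poll in the same spirit is sketched below; the base URL, timeout, and polling interval are assumptions for illustration, not values taken from this log.

package main

import (
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls GET {baseURL}/healthz until it returns 200 or the
// timeout expires. On a non-200 response the body lists each check as
// [+]ok or [-]failed, as in the log block above.
func waitForHealthz(baseURL string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := http.Get(baseURL + "/healthz")
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // all checks passed
			}
			fmt.Printf("healthz not ready (%d):\n%s", resp.StatusCode, body)
		}
		time.Sleep(100 * time.Millisecond)
	}
	return fmt.Errorf("healthz did not report ok within %v", timeout)
}

func main() {
	// Placeholder address; any reachable apiserver address works here.
	if err := waitForHealthz("http://127.0.0.1:8080", 30*time.Second); err != nil {
		fmt.Println(err)
	}
}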
I0516 00:39:43.568680  108888 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.3786ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52408]
I0516 00:39:43.568874  108888 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:deployment-controller
I0516 00:39:43.581656  108888 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0516 00:39:43.581695  108888 healthz.go:184] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0516 00:39:43.582744  108888 wrap.go:47] GET /healthz: (2.493218ms) 500
goroutine 36849 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0027278f0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0027278f0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc010188f40, 0x1f4)
net/http.Error(0x7f264e115d00, 0xc016144b80, 0xc0048db900, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f264e115d00, 0xc016144b80, 0xc003516f00)
net/http.HandlerFunc.ServeHTTP(0xc0110b4d80, 0x7f264e115d00, 0xc016144b80, 0xc003516f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc010545040, 0x7f264e115d00, 0xc016144b80, 0xc003516f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc017fe3110, 0x7f264e115d00, 0xc016144b80, 0xc003516f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x438a649, 0xe, 0xc017e170e0, 0xc017fe3110, 0x7f264e115d00, 0xc016144b80, 0xc003516f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f264e115d00, 0xc016144b80, 0xc003516f00)
net/http.HandlerFunc.ServeHTTP(0xc014a4cac0, 0x7f264e115d00, 0xc016144b80, 0xc003516f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f264e115d00, 0xc016144b80, 0xc003516f00)
net/http.HandlerFunc.ServeHTTP(0xc017ff0810, 0x7f264e115d00, 0xc016144b80, 0xc003516f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f264e115d00, 0xc016144b80, 0xc003516f00)
net/http.HandlerFunc.ServeHTTP(0xc014a4cb00, 0x7f264e115d00, 0xc016144b80, 0xc003516f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f264e115d00, 0xc016144b80, 0xc003516b00)
net/http.HandlerFunc.ServeHTTP(0xc017eb9860, 0x7f264e115d00, 0xc016144b80, 0xc003516b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc0052e6e40, 0xc017fecc60, 0x73aefc0, 0xc016144b80, 0xc003516b00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52408]
I0516 00:39:43.587939  108888 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:disruption-controller: (1.623848ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52408]
I0516 00:39:43.609514  108888 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.189941ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52408]
I0516 00:39:43.609776  108888 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:disruption-controller
I0516 00:39:43.628059  108888 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:endpoint-controller: (1.633897ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52408]
I0516 00:39:43.648981  108888 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.6103ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52408]
I0516 00:39:43.649401  108888 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:endpoint-controller
E0516 00:39:43.660602  108888 event.go:249] Unable to write event: 'Patch http://127.0.0.1:39511/api/v1/namespaces/permit-pluginfbb36f84-3285-4ca3-b4b1-f830b43b5b8a/events/test-pod.159f02db84c7e23f: dial tcp 127.0.0.1:39511: connect: connection refused' (may retry after sleeping)
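The event write above fails with a connection refused error and is marked "(may retry after sleeping)"; the recorder retries such writes after a delay rather than giving up immediately. Below is a generic retry-after-sleep sketch using apimachinery's wait package, not the actual event recorder code; the backoff parameters and the retryWithSleep helper are invented for illustration.

package retryexample

import (
	"log"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

// retryWithSleep retries a flaky write (such as the event PATCH that failed
// above) a few times, sleeping with exponential backoff between attempts.
func retryWithSleep(do func() error) error {
	backoff := wait.Backoff{
		Duration: 100 * time.Millisecond, // illustrative values only
		Factor:   2.0,
		Steps:    5,
	}
	return wait.ExponentialBackoff(backoff, func() (bool, error) {
		if err := do(); err != nil {
			log.Printf("write failed, may retry after sleeping: %v", err)
			return false, nil // not done; sleep and try again
		}
		return true, nil // succeeded
	})
}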
I0516 00:39:43.668698  108888 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0516 00:39:43.668744  108888 healthz.go:184] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0516 00:39:43.668944  108888 wrap.go:47] GET /healthz: (2.046818ms) 500
goroutine 36983 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00297dc70, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00297dc70, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00fde75a0, 0x1f4)
net/http.Error(0x7f264e115d00, 0xc011399088, 0xc00335b7c0, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f264e115d00, 0xc011399088, 0xc003559c00)
net/http.HandlerFunc.ServeHTTP(0xc0110b4d80, 0x7f264e115d00, 0xc011399088, 0xc003559c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc010545040, 0x7f264e115d00, 0xc011399088, 0xc003559c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc017fe3110, 0x7f264e115d00, 0xc011399088, 0xc003559c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x438a649, 0xe, 0xc017e170e0, 0xc017fe3110, 0x7f264e115d00, 0xc011399088, 0xc003559c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f264e115d00, 0xc011399088, 0xc003559c00)
net/http.HandlerFunc.ServeHTTP(0xc014a4cac0, 0x7f264e115d00, 0xc011399088, 0xc003559c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f264e115d00, 0xc011399088, 0xc003559c00)
net/http.HandlerFunc.ServeHTTP(0xc017ff0810, 0x7f264e115d00, 0xc011399088, 0xc003559c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f264e115d00, 0xc011399088, 0xc003559c00)
net/http.HandlerFunc.ServeHTTP(0xc014a4cb00, 0x7f264e115d00, 0xc011399088, 0xc003559c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f264e115d00, 0xc011399088, 0xc003559b00)
net/http.HandlerFunc.ServeHTTP(0xc017eb9860, 0x7f264e115d00, 0xc011399088, 0xc003559b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc009025200, 0xc017fecc60, 0x73aefc0, 0xc011399088, 0xc003559b00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:52582]
I0516 00:39:43.669009  108888 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:expand-controller: (2.67612ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52408]
I0516 00:39:43.678316  108888 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0516 00:39:43.678356  108888 healthz.go:184] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0516 00:39:43.678525  108888 wrap.go:47] GET /healthz: (1.394041ms) 500
goroutine 37004 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc002abf340, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc002abf340, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00fd947e0, 0x1f4)
net/http.Error(0x7f264e115d00, 0xc00c45d628, 0xc00335bcc0, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f264e115d00, 0xc00c45d628, 0xc003fdb800)
net/http.HandlerFunc.ServeHTTP(0xc0110b4d80, 0x7f264e115d00, 0xc00c45d628, 0xc003fdb800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc010545040, 0x7f264e115d00, 0xc00c45d628, 0xc003fdb800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc017fe3110, 0x7f264e115d00, 0xc00c45d628, 0xc003fdb800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x438a649, 0xe, 0xc017e170e0, 0xc017fe3110, 0x7f264e115d00, 0xc00c45d628, 0xc003fdb800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f264e115d00, 0xc00c45d628, 0xc003fdb800)
net/http.HandlerFunc.ServeHTTP(0xc014a4cac0, 0x7f264e115d00, 0xc00c45d628, 0xc003fdb800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f264e115d00, 0xc00c45d628, 0xc003fdb800)
net/http.HandlerFunc.ServeHTTP(0xc017ff0810, 0x7f264e115d00, 0xc00c45d628, 0xc003fdb800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f264e115d00, 0xc00c45d628, 0xc003fdb800)
net/http.HandlerFunc.ServeHTTP(0xc014a4cb00, 0x7f264e115d00, 0xc00c45d628, 0xc003fdb800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f264e115d00, 0xc00c45d628, 0xc003fdb700)
net/http.HandlerFunc.ServeHTTP(0xc017eb9860, 0x7f264e115d00, 0xc00c45d628, 0xc003fdb700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00a2700c0, 0xc017fecc60, 0x73aefc0, 0xc00c45d628, 0xc003fdb700)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52408]
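Each wrap.go:47 entry above is produced by the apiserver's request-logging wrapper, which records method, path, latency, status, user agent, and client address, and also dumps the serving goroutine's stack when a handler writes a 500, which is why the stack traces appear inline. A small, hypothetical middleware in the same shape is sketched below; logRequests, statusRecorder, and the /healthz handler are invented names, and this is not the httplog implementation itself.

package main

import (
	"log"
	"net/http"
	"time"
)

// statusRecorder captures the status code written by the wrapped handler.
type statusRecorder struct {
	http.ResponseWriter
	status int
}

func (s *statusRecorder) WriteHeader(code int) {
	s.status = code
	s.ResponseWriter.WriteHeader(code)
}

// logRequests logs method, path, latency, status, user agent, and client
// address for every request, roughly the shape of the wrap.go:47 lines above.
func logRequests(next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		start := time.Now()
		rec := &statusRecorder{ResponseWriter: w, status: http.StatusOK}
		next.ServeHTTP(rec, r)
		log.Printf("%s %s: (%v) %d [%s %s]",
			r.Method, r.URL.Path, time.Since(start), rec.status, r.UserAgent(), r.RemoteAddr)
	})
}

func main() {
	mux := http.NewServeMux()
	mux.HandleFunc("/healthz", func(w http.ResponseWriter, r *http.Request) {
		w.Write([]byte("ok"))
	})
	log.Fatal(http.ListenAndServe("127.0.0.1:0", logRequests(mux)))
}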
I0516 00:39:43.689289  108888 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.727228ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52408]
I0516 00:39:43.689577  108888 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:expand-controller
I0516 00:39:43.708288  108888 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:generic-garbage-collector: (1.6665ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52408]
I0516 00:39:43.729149  108888 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.784352ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52408]
I0516 00:39:43.729409  108888 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:generic-garbage-collector
I0516 00:39:43.747934  108888 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:horizontal-pod-autoscaler: (1.489536ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52408]
I0516 00:39:43.769469  108888 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.206352ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52408]
I0516 00:39:43.769722  108888 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:horizontal-pod-autoscaler
I0516 00:39:43.770801  108888 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0516 00:39:43.770826  108888 healthz.go:184] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0516 00:39:43.771051  108888 wrap.go:47] GET /healthz: (1.315055ms) 500
goroutine 37009 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc002abf960, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc002abf960, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00fd95a20, 0x1f4)
net/http.Error(0x7f264e115d00, 0xc00c45d730, 0xc000078dc0, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f264e115d00, 0xc00c45d730, 0xc0055f6300)
net/http.HandlerFunc.ServeHTTP(0xc0110b4d80, 0x7f264e115d00, 0xc00c45d730, 0xc0055f6300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc010545040, 0x7f264e115d00, 0xc00c45d730, 0xc0055f6300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc017fe3110, 0x7f264e115d00, 0xc00c45d730, 0xc0055f6300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x438a649, 0xe, 0xc017e170e0, 0xc017fe3110, 0x7f264e115d00, 0xc00c45d730, 0xc0055f6300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f264e115d00, 0xc00c45d730, 0xc0055f6300)
net/http.HandlerFunc.ServeHTTP(0xc014a4cac0, 0x7f264e115d00, 0xc00c45d730, 0xc0055f6300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f264e115d00, 0xc00c45d730, 0xc0055f6300)
net/http.HandlerFunc.ServeHTTP(0xc017ff0810, 0x7f264e115d00, 0xc00c45d730, 0xc0055f6300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f264e115d00, 0xc00c45d730, 0xc0055f6300)
net/http.HandlerFunc.ServeHTTP(0xc014a4cb00, 0x7f264e115d00, 0xc00c45d730, 0xc0055f6300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f264e115d00, 0xc00c45d730, 0xc0055f6100)
net/http.HandlerFunc.ServeHTTP(0xc017eb9860, 0x7f264e115d00, 0xc00c45d730, 0xc0055f6100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00a2715c0, 0xc017fecc60, 0x73aefc0, 0xc00c45d730, 0xc0055f6100)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:52582]
I0516 00:39:43.778294  108888 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0516 00:39:43.778411  108888 healthz.go:184] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0516 00:39:43.778733  108888 wrap.go:47] GET /healthz: (1.646936ms) 500
goroutine 37034 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc002b8e1c0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc002b8e1c0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00fd688c0, 0x1f4)
net/http.Error(0x7f264e115d00, 0xc016144d50, 0xc003229400, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f264e115d00, 0xc016144d50, 0xc005600100)
net/http.HandlerFunc.ServeHTTP(0xc0110b4d80, 0x7f264e115d00, 0xc016144d50, 0xc005600100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc010545040, 0x7f264e115d00, 0xc016144d50, 0xc005600100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc017fe3110, 0x7f264e115d00, 0xc016144d50, 0xc005600100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x438a649, 0xe, 0xc017e170e0, 0xc017fe3110, 0x7f264e115d00, 0xc016144d50, 0xc005600100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f264e115d00, 0xc016144d50, 0xc005600100)
net/http.HandlerFunc.ServeHTTP(0xc014a4cac0, 0x7f264e115d00, 0xc016144d50, 0xc005600100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f264e115d00, 0xc016144d50, 0xc005600100)
net/http.HandlerFunc.ServeHTTP(0xc017ff0810, 0x7f264e115d00, 0xc016144d50, 0xc005600100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f264e115d00, 0xc016144d50, 0xc005600100)
net/http.HandlerFunc.ServeHTTP(0xc014a4cb00, 0x7f264e115d00, 0xc016144d50, 0xc005600100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f264e115d00, 0xc016144d50, 0xc005600000)
net/http.HandlerFunc.ServeHTTP(0xc017eb9860, 0x7f264e115d00, 0xc016144d50, 0xc005600000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00a1049c0, 0xc017fecc60, 0x73aefc0, 0xc016144d50, 0xc005600000)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52582]
I0516 00:39:43.788114  108888 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:job-controller: (1.749366ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52582]
I0516 00:39:43.808711  108888 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.3465ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52582]
I0516 00:39:43.809138  108888 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:job-controller
I0516 00:39:43.853429  108888 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:namespace-controller: (26.913658ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52582]
I0516 00:39:43.871961  108888 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0516 00:39:43.871996  108888 healthz.go:184] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0516 00:39:43.872180  108888 wrap.go:47] GET /healthz: (5.28387ms) 500
goroutine 37046 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc002abfe30, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc002abfe30, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00fd0ac00, 0x1f4)
net/http.Error(0x7f264e115d00, 0xc00c45d858, 0xc000079680, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f264e115d00, 0xc00c45d858, 0xc00562a300)
net/http.HandlerFunc.ServeHTTP(0xc0110b4d80, 0x7f264e115d00, 0xc00c45d858, 0xc00562a300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc010545040, 0x7f264e115d00, 0xc00c45d858, 0xc00562a300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc017fe3110, 0x7f264e115d00, 0xc00c45d858, 0xc00562a300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x438a649, 0xe, 0xc017e170e0, 0xc017fe3110, 0x7f264e115d00, 0xc00c45d858, 0xc00562a300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f264e115d00, 0xc00c45d858, 0xc00562a300)
net/http.HandlerFunc.ServeHTTP(0xc014a4cac0, 0x7f264e115d00, 0xc00c45d858, 0xc00562a300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f264e115d00, 0xc00c45d858, 0xc00562a300)
net/http.HandlerFunc.ServeHTTP(0xc017ff0810, 0x7f264e115d00, 0xc00c45d858, 0xc00562a300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f264e115d00, 0xc00c45d858, 0xc00562a300)
net/http.HandlerFunc.ServeHTTP(0xc014a4cb00, 0x7f264e115d00, 0xc00c45d858, 0xc00562a300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f264e115d00, 0xc00c45d858, 0xc00562a100)
net/http.HandlerFunc.ServeHTTP(0xc017eb9860, 0x7f264e115d00, 0xc00c45d858, 0xc00562a100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00b0cc540, 0xc017fecc60, 0x73aefc0, 0xc00c45d858, 0xc00562a100)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:52408]
I0516 00:39:43.873250  108888 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (17.349467ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52582]
I0516 00:39:43.873581  108888 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:namespace-controller
I0516 00:39:43.887353  108888 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:node-controller: (13.559761ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52582]
I0516 00:39:43.887841  108888 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0516 00:39:43.887871  108888 healthz.go:184] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0516 00:39:43.888233  108888 wrap.go:47] GET /healthz: (10.959928ms) 500
goroutine 36992 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc002cd2e00, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc002cd2e00, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00c6382e0, 0x1f4)
net/http.Error(0x7f264e115d00, 0xc011399380, 0xc002da8c80, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f264e115d00, 0xc011399380, 0xc00563c100)
net/http.HandlerFunc.ServeHTTP(0xc0110b4d80, 0x7f264e115d00, 0xc011399380, 0xc00563c100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc010545040, 0x7f264e115d00, 0xc011399380, 0xc00563c100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc017fe3110, 0x7f264e115d00, 0xc011399380, 0xc00563c100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x438a649, 0xe, 0xc017e170e0, 0xc017fe3110, 0x7f264e115d00, 0xc011399380, 0xc00563c100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f264e115d00, 0xc011399380, 0xc00563c100)
net/http.HandlerFunc.ServeHTTP(0xc014a4cac0, 0x7f264e115d00, 0xc011399380, 0xc00563c100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f264e115d00, 0xc011399380, 0xc00563c100)
net/http.HandlerFunc.ServeHTTP(0xc017ff0810, 0x7f264e115d00, 0xc011399380, 0xc00563c100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f264e115d00, 0xc011399380, 0xc00563c100)
net/http.HandlerFunc.ServeHTTP(0xc014a4cb00, 0x7f264e115d00, 0xc011399380, 0xc00563c100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f264e115d00, 0xc011399380, 0xc00563c000)
net/http.HandlerFunc.ServeHTTP(0xc017eb9860, 0x7f264e115d00, 0xc011399380, 0xc00563c000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00a02d6e0, 0xc017fecc60, 0x73aefc0, 0xc011399380, 0xc00563c000)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52408]
I0516 00:39:43.891101  108888 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.021915ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52582]
I0516 00:39:43.891299  108888 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:node-controller
I0516 00:39:43.911264  108888 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:persistent-volume-binder: (4.956934ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52582]
I0516 00:39:43.929294  108888 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.029675ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52582]
I0516 00:39:43.929839  108888 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:persistent-volume-binder
I0516 00:39:43.947704  108888 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pod-garbage-collector: (1.422196ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52582]
I0516 00:39:43.968347  108888 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0516 00:39:43.968381  108888 healthz.go:184] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0516 00:39:43.968559  108888 wrap.go:47] GET /healthz: (1.263003ms) 500
goroutine 36958 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc002f76380, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc002f76380, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00fd22d60, 0x1f4)
net/http.Error(0x7f264e115d00, 0xc0167dd518, 0xc00396b900, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f264e115d00, 0xc0167dd518, 0xc0061dc200)
net/http.HandlerFunc.ServeHTTP(0xc0110b4d80, 0x7f264e115d00, 0xc0167dd518, 0xc0061dc200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc010545040, 0x7f264e115d00, 0xc0167dd518, 0xc0061dc200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc017fe3110, 0x7f264e115d00, 0xc0167dd518, 0xc0061dc200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x438a649, 0xe, 0xc017e170e0, 0xc017fe3110, 0x7f264e115d00, 0xc0167dd518, 0xc0061dc200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f264e115d00, 0xc0167dd518, 0xc0061dc200)
net/http.HandlerFunc.ServeHTTP(0xc014a4cac0, 0x7f264e115d00, 0xc0167dd518, 0xc0061dc200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f264e115d00, 0xc0167dd518, 0xc0061dc200)
net/http.HandlerFunc.ServeHTTP(0xc017ff0810, 0x7f264e115d00, 0xc0167dd518, 0xc0061dc200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f264e115d00, 0xc0167dd518, 0xc0061dc200)
net/http.HandlerFunc.ServeHTTP(0xc014a4cb00, 0x7f264e115d00, 0xc0167dd518, 0xc0061dc200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f264e115d00, 0xc0167dd518, 0xc0061dc100)
net/http.HandlerFunc.ServeHTTP(0xc017eb9860, 0x7f264e115d00, 0xc0167dd518, 0xc0061dc100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00c2ac0c0, 0xc017fecc60, 0x73aefc0, 0xc0167dd518, 0xc0061dc100)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:52408]
I0516 00:39:43.969375  108888 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.958537ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52582]
I0516 00:39:43.969564  108888 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pod-garbage-collector
I0516 00:39:43.981447  108888 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0516 00:39:43.981480  108888 healthz.go:184] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0516 00:39:43.981649  108888 wrap.go:47] GET /healthz: (1.598417ms) 500
goroutine 37090 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc002f764d0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc002f764d0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00fd232a0, 0x1f4)
net/http.Error(0x7f264e115d00, 0xc0167dd540, 0xc003184c80, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f264e115d00, 0xc0167dd540, 0xc0061dc800)
net/http.HandlerFunc.ServeHTTP(0xc0110b4d80, 0x7f264e115d00, 0xc0167dd540, 0xc0061dc800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc010545040, 0x7f264e115d00, 0xc0167dd540, 0xc0061dc800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc017fe3110, 0x7f264e115d00, 0xc0167dd540, 0xc0061dc800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x438a649, 0xe, 0xc017e170e0, 0xc017fe3110, 0x7f264e115d00, 0xc0167dd540, 0xc0061dc800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f264e115d00, 0xc0167dd540, 0xc0061dc800)
net/http.HandlerFunc.ServeHTTP(0xc014a4cac0, 0x7f264e115d00, 0xc0167dd540, 0xc0061dc800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f264e115d00, 0xc0167dd540, 0xc0061dc800)
net/http.HandlerFunc.ServeHTTP(0xc017ff0810, 0x7f264e115d00, 0xc0167dd540, 0xc0061dc800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f264e115d00, 0xc0167dd540, 0xc0061dc800)
net/http.HandlerFunc.ServeHTTP(0xc014a4cb00, 0x7f264e115d00, 0xc0167dd540, 0xc0061dc800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f264e115d00, 0xc0167dd540, 0xc0061dc700)
net/http.HandlerFunc.ServeHTTP(0xc017eb9860, 0x7f264e115d00, 0xc0167dd540, 0xc0061dc700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00c2ac480, 0xc017fecc60, 0x73aefc0, 0xc0167dd540, 0xc0061dc700)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52582]
I0516 00:39:43.987496  108888 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:replicaset-controller: (1.285033ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52582]
I0516 00:39:44.008945  108888 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.583228ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52582]
I0516 00:39:44.009208  108888 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:replicaset-controller
I0516 00:39:44.027929  108888 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:replication-controller: (1.57429ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52582]
I0516 00:39:44.048644  108888 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.324073ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52582]
I0516 00:39:44.048908  108888 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:replication-controller
I0516 00:39:44.070736  108888 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0516 00:39:44.070769  108888 healthz.go:184] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0516 00:39:44.070788  108888 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:resourcequota-controller: (2.273571ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52582]
I0516 00:39:44.070969  108888 wrap.go:47] GET /healthz: (2.502227ms) 500
goroutine 37115 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0028f59d0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0028f59d0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00fe4c840, 0x1f4)
net/http.Error(0x7f264e115d00, 0xc017bc6d18, 0xc002756500, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f264e115d00, 0xc017bc6d18, 0xc002e5bc00)
net/http.HandlerFunc.ServeHTTP(0xc0110b4d80, 0x7f264e115d00, 0xc017bc6d18, 0xc002e5bc00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc010545040, 0x7f264e115d00, 0xc017bc6d18, 0xc002e5bc00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc017fe3110, 0x7f264e115d00, 0xc017bc6d18, 0xc002e5bc00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x438a649, 0xe, 0xc017e170e0, 0xc017fe3110, 0x7f264e115d00, 0xc017bc6d18, 0xc002e5bc00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f264e115d00, 0xc017bc6d18, 0xc002e5bc00)
net/http.HandlerFunc.ServeHTTP(0xc014a4cac0, 0x7f264e115d00, 0xc017bc6d18, 0xc002e5bc00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f264e115d00, 0xc017bc6d18, 0xc002e5bc00)
net/http.HandlerFunc.ServeHTTP(0xc017ff0810, 0x7f264e115d00, 0xc017bc6d18, 0xc002e5bc00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f264e115d00, 0xc017bc6d18, 0xc002e5bc00)
net/http.HandlerFunc.ServeHTTP(0xc014a4cb00, 0x7f264e115d00, 0xc017bc6d18, 0xc002e5bc00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f264e115d00, 0xc017bc6d18, 0xc002e5bb00)
net/http.HandlerFunc.ServeHTTP(0xc017eb9860, 0x7f264e115d00, 0xc017bc6d18, 0xc002e5bb00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc008f83aa0, 0xc017fecc60, 0x73aefc0, 0xc017bc6d18, 0xc002e5bb00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:52408]
I0516 00:39:44.078062  108888 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0516 00:39:44.078097  108888 healthz.go:184] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0516 00:39:44.078301  108888 wrap.go:47] GET /healthz: (1.2816ms) 500
goroutine 37117 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0028f5ab0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0028f5ab0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00fe4caa0, 0x1f4)
net/http.Error(0x7f264e115d00, 0xc017bc6d48, 0xc0101ccb40, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f264e115d00, 0xc017bc6d48, 0xc00782a300)
net/http.HandlerFunc.ServeHTTP(0xc0110b4d80, 0x7f264e115d00, 0xc017bc6d48, 0xc00782a300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc010545040, 0x7f264e115d00, 0xc017bc6d48, 0xc00782a300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc017fe3110, 0x7f264e115d00, 0xc017bc6d48, 0xc00782a300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x438a649, 0xe, 0xc017e170e0, 0xc017fe3110, 0x7f264e115d00, 0xc017bc6d48, 0xc00782a300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f264e115d00, 0xc017bc6d48, 0xc00782a300)
net/http.HandlerFunc.ServeHTTP(0xc014a4cac0, 0x7f264e115d00, 0xc017bc6d48, 0xc00782a300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f264e115d00, 0xc017bc6d48, 0xc00782a300)
net/http.HandlerFunc.ServeHTTP(0xc017ff0810, 0x7f264e115d00, 0xc017bc6d48, 0xc00782a300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f264e115d00, 0xc017bc6d48, 0xc00782a300)
net/http.HandlerFunc.ServeHTTP(0xc014a4cb00, 0x7f264e115d00, 0xc017bc6d48, 0xc00782a300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f264e115d00, 0xc017bc6d48, 0xc00782a200)
net/http.HandlerFunc.ServeHTTP(0xc017eb9860, 0x7f264e115d00, 0xc017bc6d48, 0xc00782a200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc008f83e00, 0xc017fecc60, 0x73aefc0, 0xc017bc6d48, 0xc00782a200)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52408]
I0516 00:39:44.089166  108888 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.835466ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52408]
I0516 00:39:44.089448  108888 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:resourcequota-controller
I0516 00:39:44.107827  108888 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:route-controller: (1.478224ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52408]
I0516 00:39:44.131455  108888 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (5.118182ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52408]
I0516 00:39:44.131751  108888 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:route-controller
I0516 00:39:44.147891  108888 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:service-account-controller: (1.583472ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52408]
I0516 00:39:44.168725  108888 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0516 00:39:44.168803  108888 healthz.go:184] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0516 00:39:44.169022  108888 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.691745ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52408]
I0516 00:39:44.169027  108888 wrap.go:47] GET /healthz: (2.224011ms) 500
goroutine 37139 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0031aa7e0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0031aa7e0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00c6d59e0, 0x1f4)
net/http.Error(0x7f264e115d00, 0xc01620b9a8, 0xc002756a00, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f264e115d00, 0xc01620b9a8, 0xc00755d300)
net/http.HandlerFunc.ServeHTTP(0xc0110b4d80, 0x7f264e115d00, 0xc01620b9a8, 0xc00755d300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc010545040, 0x7f264e115d00, 0xc01620b9a8, 0xc00755d300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc017fe3110, 0x7f264e115d00, 0xc01620b9a8, 0xc00755d300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x438a649, 0xe, 0xc017e170e0, 0xc017fe3110, 0x7f264e115d00, 0xc01620b9a8, 0xc00755d300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f264e115d00, 0xc01620b9a8, 0xc00755d300)
net/http.HandlerFunc.ServeHTTP(0xc014a4cac0, 0x7f264e115d00, 0xc01620b9a8, 0xc00755d300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f264e115d00, 0xc01620b9a8, 0xc00755d300)
net/http.HandlerFunc.ServeHTTP(0xc017ff0810, 0x7f264e115d00, 0xc01620b9a8, 0xc00755d300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f264e115d00, 0xc01620b9a8, 0xc00755d300)
net/http.HandlerFunc.ServeHTTP(0xc014a4cb00, 0x7f264e115d00, 0xc01620b9a8, 0xc00755d300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f264e115d00, 0xc01620b9a8, 0xc00755d200)
net/http.HandlerFunc.ServeHTTP(0xc017eb9860, 0x7f264e115d00, 0xc01620b9a8, 0xc00755d200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00b1a4480, 0xc017fecc60, 0x73aefc0, 0xc01620b9a8, 0xc00755d200)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:52582]
I0516 00:39:44.169365  108888 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:service-account-controller
I0516 00:39:44.178125  108888 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0516 00:39:44.178232  108888 healthz.go:184] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0516 00:39:44.178447  108888 wrap.go:47] GET /healthz: (1.423526ms) 500
goroutine 37121 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00358a540, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00358a540, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00c5903e0, 0x1f4)
net/http.Error(0x7f264e115d00, 0xc017bc6eb8, 0xc00423c780, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f264e115d00, 0xc017bc6eb8, 0xc00782bd00)
net/http.HandlerFunc.ServeHTTP(0xc0110b4d80, 0x7f264e115d00, 0xc017bc6eb8, 0xc00782bd00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc010545040, 0x7f264e115d00, 0xc017bc6eb8, 0xc00782bd00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc017fe3110, 0x7f264e115d00, 0xc017bc6eb8, 0xc00782bd00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x438a649, 0xe, 0xc017e170e0, 0xc017fe3110, 0x7f264e115d00, 0xc017bc6eb8, 0xc00782bd00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f264e115d00, 0xc017bc6eb8, 0xc00782bd00)
net/http.HandlerFunc.ServeHTTP(0xc014a4cac0, 0x7f264e115d00, 0xc017bc6eb8, 0xc00782bd00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f264e115d00, 0xc017bc6eb8, 0xc00782bd00)
net/http.HandlerFunc.ServeHTTP(0xc017ff0810, 0x7f264e115d00, 0xc017bc6eb8, 0xc00782bd00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f264e115d00, 0xc017bc6eb8, 0xc00782bd00)
net/http.HandlerFunc.ServeHTTP(0xc014a4cb00, 0x7f264e115d00, 0xc017bc6eb8, 0xc00782bd00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f264e115d00, 0xc017bc6eb8, 0xc00782b900)
net/http.HandlerFunc.ServeHTTP(0xc017eb9860, 0x7f264e115d00, 0xc017bc6eb8, 0xc00782b900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc009c74900, 0xc017fecc60, 0x73aefc0, 0xc017bc6eb8, 0xc00782b900)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52582]
I0516 00:39:44.189972  108888 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:service-controller: (1.512136ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52582]
I0516 00:39:44.208791  108888 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.335003ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52582]
I0516 00:39:44.210884  108888 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:service-controller
I0516 00:39:44.227962  108888 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:statefulset-controller: (1.617397ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52582]
I0516 00:39:44.248295  108888 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.027663ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52582]
I0516 00:39:44.248591  108888 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:statefulset-controller
I0516 00:39:44.268749  108888 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0516 00:39:44.268779  108888 healthz.go:184] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0516 00:39:44.268965  108888 wrap.go:47] GET /healthz: (1.338968ms) 500
goroutine 37141 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0031aaf50, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0031aaf50, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00c5208a0, 0x1f4)
net/http.Error(0x7f264e115d00, 0xc01620bad0, 0xc0038d7e00, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f264e115d00, 0xc01620bad0, 0xc002bde000)
net/http.HandlerFunc.ServeHTTP(0xc0110b4d80, 0x7f264e115d00, 0xc01620bad0, 0xc002bde000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc010545040, 0x7f264e115d00, 0xc01620bad0, 0xc002bde000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc017fe3110, 0x7f264e115d00, 0xc01620bad0, 0xc002bde000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x438a649, 0xe, 0xc017e170e0, 0xc017fe3110, 0x7f264e115d00, 0xc01620bad0, 0xc002bde000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f264e115d00, 0xc01620bad0, 0xc002bde000)
net/http.HandlerFunc.ServeHTTP(0xc014a4cac0, 0x7f264e115d00, 0xc01620bad0, 0xc002bde000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f264e115d00, 0xc01620bad0, 0xc002bde000)
net/http.HandlerFunc.ServeHTTP(0xc017ff0810, 0x7f264e115d00, 0xc01620bad0, 0xc002bde000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f264e115d00, 0xc01620bad0, 0xc002bde000)
net/http.HandlerFunc.ServeHTTP(0xc014a4cb00, 0x7f264e115d00, 0xc01620bad0, 0xc002bde000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f264e115d00, 0xc01620bad0, 0xc002921800)
net/http.HandlerFunc.ServeHTTP(0xc017eb9860, 0x7f264e115d00, 0xc01620bad0, 0xc002921800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00b1a4e40, 0xc017fecc60, 0x73aefc0, 0xc01620bad0, 0xc002921800)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:52408]
I0516 00:39:44.269403  108888 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:ttl-controller: (3.076053ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52582]
I0516 00:39:44.278026  108888 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0516 00:39:44.278056  108888 healthz.go:184] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0516 00:39:44.278228  108888 wrap.go:47] GET /healthz: (1.244796ms) 500
goroutine 37174 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0039aa230, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0039aa230, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00c56b180, 0x1f4)
net/http.Error(0x7f264e115d00, 0xc0161451b0, 0xc002757040, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f264e115d00, 0xc0161451b0, 0xc007924e00)
net/http.HandlerFunc.ServeHTTP(0xc0110b4d80, 0x7f264e115d00, 0xc0161451b0, 0xc007924e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc010545040, 0x7f264e115d00, 0xc0161451b0, 0xc007924e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc017fe3110, 0x7f264e115d00, 0xc0161451b0, 0xc007924e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x438a649, 0xe, 0xc017e170e0, 0xc017fe3110, 0x7f264e115d00, 0xc0161451b0, 0xc007924e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f264e115d00, 0xc0161451b0, 0xc007924e00)
net/http.HandlerFunc.ServeHTTP(0xc014a4cac0, 0x7f264e115d00, 0xc0161451b0, 0xc007924e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f264e115d00, 0xc0161451b0, 0xc007924e00)
net/http.HandlerFunc.ServeHTTP(0xc017ff0810, 0x7f264e115d00, 0xc0161451b0, 0xc007924e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f264e115d00, 0xc0161451b0, 0xc007924e00)
net/http.HandlerFunc.ServeHTTP(0xc014a4cb00, 0x7f264e115d00, 0xc0161451b0, 0xc007924e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f264e115d00, 0xc0161451b0, 0xc007924d00)
net/http.HandlerFunc.ServeHTTP(0xc017eb9860, 0x7f264e115d00, 0xc0161451b0, 0xc007924d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00b20ae40, 0xc017fecc60, 0x73aefc0, 0xc0161451b0, 0xc007924d00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52582]
I0516 00:39:44.288734  108888 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.479824ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52582]
I0516 00:39:44.289046  108888 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:ttl-controller
I0516 00:39:44.307700  108888 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:certificate-controller: (1.433838ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52582]
I0516 00:39:44.328418  108888 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.073748ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52582]
I0516 00:39:44.328683  108888 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:certificate-controller
I0516 00:39:44.347812  108888 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pvc-protection-controller: (1.514858ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52582]
I0516 00:39:44.368602  108888 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0516 00:39:44.368651  108888 healthz.go:184] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0516 00:39:44.368792  108888 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.416085ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52582]
I0516 00:39:44.368954  108888 wrap.go:47] GET /healthz: (1.373279ms) 500
goroutine 37130 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc003ab25b0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc003ab25b0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00c679e00, 0x1f4)
net/http.Error(0x7f264e115d00, 0xc017f31348, 0xc0030f7680, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f264e115d00, 0xc017f31348, 0xc00849c100)
net/http.HandlerFunc.ServeHTTP(0xc0110b4d80, 0x7f264e115d00, 0xc017f31348, 0xc00849c100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc010545040, 0x7f264e115d00, 0xc017f31348, 0xc00849c100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc017fe3110, 0x7f264e115d00, 0xc017f31348, 0xc00849c100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x438a649, 0xe, 0xc017e170e0, 0xc017fe3110, 0x7f264e115d00, 0xc017f31348, 0xc00849c100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f264e115d00, 0xc017f31348, 0xc00849c100)
net/http.HandlerFunc.ServeHTTP(0xc014a4cac0, 0x7f264e115d00, 0xc017f31348, 0xc00849c100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f264e115d00, 0xc017f31348, 0xc00849c100)
net/http.HandlerFunc.ServeHTTP(0xc017ff0810, 0x7f264e115d00, 0xc017f31348, 0xc00849c100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f264e115d00, 0xc017f31348, 0xc00849c100)
net/http.HandlerFunc.ServeHTTP(0xc014a4cb00, 0x7f264e115d00, 0xc017f31348, 0xc00849c100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f264e115d00, 0xc017f31348, 0xc00849c000)
net/http.HandlerFunc.ServeHTTP(0xc017eb9860, 0x7f264e115d00, 0xc017f31348, 0xc00849c000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00c66c360, 0xc017fecc60, 0x73aefc0, 0xc017f31348, 0xc00849c000)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:52408]
I0516 00:39:44.369452  108888 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pvc-protection-controller
I0516 00:39:44.380380  108888 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0516 00:39:44.380415  108888 healthz.go:184] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0516 00:39:44.380690  108888 wrap.go:47] GET /healthz: (1.450752ms) 500
goroutine 37176 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0039aa930, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0039aa930, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00c44a0a0, 0x1f4)
net/http.Error(0x7f264e115d00, 0xc0161452d8, 0xc00423cc80, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f264e115d00, 0xc0161452d8, 0xc007925e00)
net/http.HandlerFunc.ServeHTTP(0xc0110b4d80, 0x7f264e115d00, 0xc0161452d8, 0xc007925e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc010545040, 0x7f264e115d00, 0xc0161452d8, 0xc007925e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc017fe3110, 0x7f264e115d00, 0xc0161452d8, 0xc007925e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x438a649, 0xe, 0xc017e170e0, 0xc017fe3110, 0x7f264e115d00, 0xc0161452d8, 0xc007925e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f264e115d00, 0xc0161452d8, 0xc007925e00)
net/http.HandlerFunc.ServeHTTP(0xc014a4cac0, 0x7f264e115d00, 0xc0161452d8, 0xc007925e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f264e115d00, 0xc0161452d8, 0xc007925e00)
net/http.HandlerFunc.ServeHTTP(0xc017ff0810, 0x7f264e115d00, 0xc0161452d8, 0xc007925e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f264e115d00, 0xc0161452d8, 0xc007925e00)
net/http.HandlerFunc.ServeHTTP(0xc014a4cb00, 0x7f264e115d00, 0xc0161452d8, 0xc007925e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f264e115d00, 0xc0161452d8, 0xc007925d00)
net/http.HandlerFunc.ServeHTTP(0xc017eb9860, 0x7f264e115d00, 0xc0161452d8, 0xc007925d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00b20b380, 0xc017fecc60, 0x73aefc0, 0xc0161452d8, 0xc007925d00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52408]
I0516 00:39:44.387871  108888 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pv-protection-controller: (1.603694ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52408]
I0516 00:39:44.408492  108888 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.23469ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52408]
I0516 00:39:44.408903  108888 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pv-protection-controller
I0516 00:39:44.427802  108888 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/extension-apiserver-authentication-reader: (1.464141ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52408]
I0516 00:39:44.429830  108888 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.438476ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52408]
I0516 00:39:44.449357  108888 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (3.046958ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52408]
I0516 00:39:44.449657  108888 storage_rbac.go:254] created role.rbac.authorization.k8s.io/extension-apiserver-authentication-reader in kube-system
I0516 00:39:44.473210  108888 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0516 00:39:44.473239  108888 healthz.go:184] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0516 00:39:44.473430  108888 wrap.go:47] GET /healthz: (6.450931ms) 500
goroutine 37162 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00358ad90, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00358ad90, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00c4cc260, 0x1f4)
net/http.Error(0x7f264e115d00, 0xc017bc7058, 0xc00423d180, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f264e115d00, 0xc017bc7058, 0xc0083f1700)
net/http.HandlerFunc.ServeHTTP(0xc0110b4d80, 0x7f264e115d00, 0xc017bc7058, 0xc0083f1700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc010545040, 0x7f264e115d00, 0xc017bc7058, 0xc0083f1700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc017fe3110, 0x7f264e115d00, 0xc017bc7058, 0xc0083f1700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x438a649, 0xe, 0xc017e170e0, 0xc017fe3110, 0x7f264e115d00, 0xc017bc7058, 0xc0083f1700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f264e115d00, 0xc017bc7058, 0xc0083f1700)
net/http.HandlerFunc.ServeHTTP(0xc014a4cac0, 0x7f264e115d00, 0xc017bc7058, 0xc0083f1700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f264e115d00, 0xc017bc7058, 0xc0083f1700)
net/http.HandlerFunc.ServeHTTP(0xc017ff0810, 0x7f264e115d00, 0xc017bc7058, 0xc0083f1700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f264e115d00, 0xc017bc7058, 0xc0083f1700)
net/http.HandlerFunc.ServeHTTP(0xc014a4cb00, 0x7f264e115d00, 0xc017bc7058, 0xc0083f1700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f264e115d00, 0xc017bc7058, 0xc0083f1500)
net/http.HandlerFunc.ServeHTTP(0xc017eb9860, 0x7f264e115d00, 0xc017bc7058, 0xc0083f1500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00cc96060, 0xc017fecc60, 0x73aefc0, 0xc017bc7058, 0xc0083f1500)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:52408]
I0516 00:39:44.475314  108888 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:bootstrap-signer: (2.309655ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52582]
I0516 00:39:44.477105  108888 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.377469ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52582]
I0516 00:39:44.478284  108888 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0516 00:39:44.478335  108888 healthz.go:184] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0516 00:39:44.478577  108888 wrap.go:47] GET /healthz: (1.658248ms) 500
goroutine 37210 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc002cd3880, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc002cd3880, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00c3fc160, 0x1f4)
net/http.Error(0x7f264e115d00, 0xc011399578, 0xc0039c2780, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f264e115d00, 0xc011399578, 0xc009d5c600)
net/http.HandlerFunc.ServeHTTP(0xc0110b4d80, 0x7f264e115d00, 0xc011399578, 0xc009d5c600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc010545040, 0x7f264e115d00, 0xc011399578, 0xc009d5c600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc017fe3110, 0x7f264e115d00, 0xc011399578, 0xc009d5c600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x438a649, 0xe, 0xc017e170e0, 0xc017fe3110, 0x7f264e115d00, 0xc011399578, 0xc009d5c600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f264e115d00, 0xc011399578, 0xc009d5c600)
net/http.HandlerFunc.ServeHTTP(0xc014a4cac0, 0x7f264e115d00, 0xc011399578, 0xc009d5c600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f264e115d00, 0xc011399578, 0xc009d5c600)
net/http.HandlerFunc.ServeHTTP(0xc017ff0810, 0x7f264e115d00, 0xc011399578, 0xc009d5c600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f264e115d00, 0xc011399578, 0xc009d5c600)
net/http.HandlerFunc.ServeHTTP(0xc014a4cb00, 0x7f264e115d00, 0xc011399578, 0xc009d5c600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f264e115d00, 0xc011399578, 0xc009d5c500)
net/http.HandlerFunc.ServeHTTP(0xc017eb9860, 0x7f264e115d00, 0xc011399578, 0xc009d5c500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00cc89680, 0xc017fecc60, 0x73aefc0, 0xc011399578, 0xc009d5c500)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52408]
I0516 00:39:44.488418  108888 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.090822ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52408]
I0516 00:39:44.488643  108888 storage_rbac.go:254] created role.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-system
I0516 00:39:44.507264  108888 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:cloud-provider: (1.02786ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52408]
I0516 00:39:44.509200  108888 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.289599ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52408]
I0516 00:39:44.528259  108888 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (1.954118ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52408]
I0516 00:39:44.528507  108888 storage_rbac.go:254] created role.rbac.authorization.k8s.io/system:controller:cloud-provider in kube-system
I0516 00:39:44.547373  108888 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:token-cleaner: (1.004403ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52408]
I0516 00:39:44.549106  108888 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.155445ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52408]
I0516 00:39:44.569027  108888 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0516 00:39:44.569064  108888 healthz.go:184] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0516 00:39:44.569223  108888 wrap.go:47] GET /healthz: (1.302784ms) 500
goroutine 37164 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00358afc0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00358afc0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00c4cc560, 0x1f4)
net/http.Error(0x7f264e115d00, 0xc017bc7098, 0xc00423d7c0, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f264e115d00, 0xc017bc7098, 0xc0083f1c00)
net/http.HandlerFunc.ServeHTTP(0xc0110b4d80, 0x7f264e115d00, 0xc017bc7098, 0xc0083f1c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc010545040, 0x7f264e115d00, 0xc017bc7098, 0xc0083f1c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc017fe3110, 0x7f264e115d00, 0xc017bc7098, 0xc0083f1c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x438a649, 0xe, 0xc017e170e0, 0xc017fe3110, 0x7f264e115d00, 0xc017bc7098, 0xc0083f1c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f264e115d00, 0xc017bc7098, 0xc0083f1c00)
net/http.HandlerFunc.ServeHTTP(0xc014a4cac0, 0x7f264e115d00, 0xc017bc7098, 0xc0083f1c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f264e115d00, 0xc017bc7098, 0xc0083f1c00)
net/http.HandlerFunc.ServeHTTP(0xc017ff0810, 0x7f264e115d00, 0xc017bc7098, 0xc0083f1c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f264e115d00, 0xc017bc7098, 0xc0083f1c00)
net/http.HandlerFunc.ServeHTTP(0xc014a4cb00, 0x7f264e115d00, 0xc017bc7098, 0xc0083f1c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f264e115d00, 0xc017bc7098, 0xc0083f1a00)
net/http.HandlerFunc.ServeHTTP(0xc017eb9860, 0x7f264e115d00, 0xc017bc7098, 0xc0083f1a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00cc96480, 0xc017fecc60, 0x73aefc0, 0xc017bc7098, 0xc0083f1a00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:52582]
I0516 00:39:44.569559  108888 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (3.243544ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52408]
I0516 00:39:44.569737  108888 storage_rbac.go:254] created role.rbac.authorization.k8s.io/system:controller:token-cleaner in kube-system
I0516 00:39:44.577831  108888 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0516 00:39:44.577863  108888 healthz.go:184] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0516 00:39:44.578059  108888 wrap.go:47] GET /healthz: (1.048962ms) 500
goroutine 37195 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc003ee6460, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc003ee6460, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00c411680, 0x1f4)
net/http.Error(0x7f264e115d00, 0xc0167dd988, 0xc002757680, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f264e115d00, 0xc0167dd988, 0xc009e92900)
net/http.HandlerFunc.ServeHTTP(0xc0110b4d80, 0x7f264e115d00, 0xc0167dd988, 0xc009e92900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc010545040, 0x7f264e115d00, 0xc0167dd988, 0xc009e92900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc017fe3110, 0x7f264e115d00, 0xc0167dd988, 0xc009e92900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x438a649, 0xe, 0xc017e170e0, 0xc017fe3110, 0x7f264e115d00, 0xc0167dd988, 0xc009e92900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f264e115d00, 0xc0167dd988, 0xc009e92900)
net/http.HandlerFunc.ServeHTTP(0xc014a4cac0, 0x7f264e115d00, 0xc0167dd988, 0xc009e92900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f264e115d00, 0xc0167dd988, 0xc009e92900)
net/http.HandlerFunc.ServeHTTP(0xc017ff0810, 0x7f264e115d00, 0xc0167dd988, 0xc009e92900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f264e115d00, 0xc0167dd988, 0xc009e92900)
net/http.HandlerFunc.ServeHTTP(0xc014a4cb00, 0x7f264e115d00, 0xc0167dd988, 0xc009e92900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f264e115d00, 0xc0167dd988, 0xc009e92800)
net/http.HandlerFunc.ServeHTTP(0xc017eb9860, 0x7f264e115d00, 0xc0167dd988, 0xc009e92800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00ba377a0, 0xc017fecc60, 0x73aefc0, 0xc0167dd988, 0xc009e92800)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52408]
I0516 00:39:44.587904  108888 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system::leader-locking-kube-controller-manager: (1.535393ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52408]
I0516 00:39:44.590278  108888 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.560485ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52408]
I0516 00:39:44.608466  108888 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.193383ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52408]
I0516 00:39:44.608735  108888 storage_rbac.go:254] created role.rbac.authorization.k8s.io/system::leader-locking-kube-controller-manager in kube-system
I0516 00:39:44.627471  108888 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system::leader-locking-kube-scheduler: (1.221788ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52408]
I0516 00:39:44.629315  108888 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.332655ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52408]
I0516 00:39:44.648224  108888 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (1.908933ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52408]
I0516 00:39:44.648575  108888 storage_rbac.go:254] created role.rbac.authorization.k8s.io/system::leader-locking-kube-scheduler in kube-system
I0516 00:39:44.667686  108888 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/roles/system:controller:bootstrap-signer: (1.222938ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52408]
I0516 00:39:44.668135  108888 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0516 00:39:44.668160  108888 healthz.go:184] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0516 00:39:44.668300  108888 wrap.go:47] GET /healthz: (1.245044ms) 500
goroutine 37226 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc003ca51f0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc003ca51f0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00c331e60, 0x1f4)
net/http.Error(0x7f264e115d00, 0xc01620bee8, 0xc002757b80, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f264e115d00, 0xc01620bee8, 0xc00a128c00)
net/http.HandlerFunc.ServeHTTP(0xc0110b4d80, 0x7f264e115d00, 0xc01620bee8, 0xc00a128c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc010545040, 0x7f264e115d00, 0xc01620bee8, 0xc00a128c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc017fe3110, 0x7f264e115d00, 0xc01620bee8, 0xc00a128c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x438a649, 0xe, 0xc017e170e0, 0xc017fe3110, 0x7f264e115d00, 0xc01620bee8, 0xc00a128c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f264e115d00, 0xc01620bee8, 0xc00a128c00)
net/http.HandlerFunc.ServeHTTP(0xc014a4cac0, 0x7f264e115d00, 0xc01620bee8, 0xc00a128c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f264e115d00, 0xc01620bee8, 0xc00a128c00)
net/http.HandlerFunc.ServeHTTP(0xc017ff0810, 0x7f264e115d00, 0xc01620bee8, 0xc00a128c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f264e115d00, 0xc01620bee8, 0xc00a128c00)
net/http.HandlerFunc.ServeHTTP(0xc014a4cb00, 0x7f264e115d00, 0xc01620bee8, 0xc00a128c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f264e115d00, 0xc01620bee8, 0xc00a128b00)
net/http.HandlerFunc.ServeHTTP(0xc017eb9860, 0x7f264e115d00, 0xc01620bee8, 0xc00a128b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00cf353e0, 0xc017fecc60, 0x73aefc0, 0xc01620bee8, 0xc00a128b00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:52582]
I0516 00:39:44.672789  108888 wrap.go:47] GET /api/v1/namespaces/kube-public: (4.253098ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52408]
I0516 00:39:44.677821  108888 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0516 00:39:44.677852  108888 healthz.go:184] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0516 00:39:44.678027  108888 wrap.go:47] GET /healthz: (1.049548ms) 500
goroutine 37199 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc003ee6e70, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc003ee6e70, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00c348b40, 0x1f4)
net/http.Error(0x7f264e115d00, 0xc0167dda50, 0xc00423de00, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f264e115d00, 0xc0167dda50, 0xc008cb0700)
net/http.HandlerFunc.ServeHTTP(0xc0110b4d80, 0x7f264e115d00, 0xc0167dda50, 0xc008cb0700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc010545040, 0x7f264e115d00, 0xc0167dda50, 0xc008cb0700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc017fe3110, 0x7f264e115d00, 0xc0167dda50, 0xc008cb0700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x438a649, 0xe, 0xc017e170e0, 0xc017fe3110, 0x7f264e115d00, 0xc0167dda50, 0xc008cb0700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f264e115d00, 0xc0167dda50, 0xc008cb0700)
net/http.HandlerFunc.ServeHTTP(0xc014a4cac0, 0x7f264e115d00, 0xc0167dda50, 0xc008cb0700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f264e115d00, 0xc0167dda50, 0xc008cb0700)
net/http.HandlerFunc.ServeHTTP(0xc017ff0810, 0x7f264e115d00, 0xc0167dda50, 0xc008cb0700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f264e115d00, 0xc0167dda50, 0xc008cb0700)
net/http.HandlerFunc.ServeHTTP(0xc014a4cb00, 0x7f264e115d00, 0xc0167dda50, 0xc008cb0700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f264e115d00, 0xc0167dda50, 0xc008cb0600)
net/http.HandlerFunc.ServeHTTP(0xc017eb9860, 0x7f264e115d00, 0xc0167dda50, 0xc008cb0600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00ce4c8a0, 0xc017fecc60, 0x73aefc0, 0xc0167dda50, 0xc008cb0600)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52408]
I0516 00:39:44.688086  108888 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/roles: (1.811028ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52408]
I0516 00:39:44.688318  108888 storage_rbac.go:254] created role.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-public
I0516 00:39:44.707541  108888 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::extension-apiserver-authentication-reader: (1.245224ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52408]
I0516 00:39:44.709417  108888 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.312506ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52408]
I0516 00:39:44.728663  108888 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.176606ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52408]
I0516 00:39:44.728943  108888 storage_rbac.go:284] created rolebinding.rbac.authorization.k8s.io/system::extension-apiserver-authentication-reader in kube-system
I0516 00:39:44.754073  108888 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::leader-locking-kube-controller-manager: (7.706437ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52408]
I0516 00:39:44.756071  108888 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.4862ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52408]
I0516 00:39:44.769718  108888 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0516 00:39:44.769748  108888 healthz.go:184] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0516 00:39:44.769938  108888 wrap.go:47] GET /healthz: (2.051573ms) 500
goroutine 37237 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0040264d0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0040264d0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00c381e80, 0x1f4)
net/http.Error(0x7f264e115d00, 0xc016145630, 0xc0039c2dc0, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f264e115d00, 0xc016145630, 0xc00b8ca800)
net/http.HandlerFunc.ServeHTTP(0xc0110b4d80, 0x7f264e115d00, 0xc016145630, 0xc00b8ca800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc010545040, 0x7f264e115d00, 0xc016145630, 0xc00b8ca800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc017fe3110, 0x7f264e115d00, 0xc016145630, 0xc00b8ca800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x438a649, 0xe, 0xc017e170e0, 0xc017fe3110, 0x7f264e115d00, 0xc016145630, 0xc00b8ca800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f264e115d00, 0xc016145630, 0xc00b8ca800)
net/http.HandlerFunc.ServeHTTP(0xc014a4cac0, 0x7f264e115d00, 0xc016145630, 0xc00b8ca800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f264e115d00, 0xc016145630, 0xc00b8ca800)
net/http.HandlerFunc.ServeHTTP(0xc017ff0810, 0x7f264e115d00, 0xc016145630, 0xc00b8ca800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f264e115d00, 0xc016145630, 0xc00b8ca800)
net/http.HandlerFunc.ServeHTTP(0xc014a4cb00, 0x7f264e115d00, 0xc016145630, 0xc00b8ca800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f264e115d00, 0xc016145630, 0xc00b8ca700)
net/http.HandlerFunc.ServeHTTP(0xc017eb9860, 0x7f264e115d00, 0xc016145630, 0xc00b8ca700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00cf36d80, 0xc017fecc60, 0x73aefc0, 0xc016145630, 0xc00b8ca700)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:52408]
I0516 00:39:44.770266  108888 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.313435ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52582]
I0516 00:39:44.770726  108888 storage_rbac.go:284] created rolebinding.rbac.authorization.k8s.io/system::leader-locking-kube-controller-manager in kube-system
I0516 00:39:44.777850  108888 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0516 00:39:44.777882  108888 healthz.go:184] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0516 00:39:44.778116  108888 wrap.go:47] GET /healthz: (1.18089ms) 500
goroutine 37283 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0030e8cb0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0030e8cb0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00be70700, 0x1f4)
net/http.Error(0x7f264e115d00, 0xc00c45da40, 0xc0039c3400, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f264e115d00, 0xc00c45da40, 0xc00562b400)
net/http.HandlerFunc.ServeHTTP(0xc0110b4d80, 0x7f264e115d00, 0xc00c45da40, 0xc00562b400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc010545040, 0x7f264e115d00, 0xc00c45da40, 0xc00562b400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc017fe3110, 0x7f264e115d00, 0xc00c45da40, 0xc00562b400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x438a649, 0xe, 0xc017e170e0, 0xc017fe3110, 0x7f264e115d00, 0xc00c45da40, 0xc00562b400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f264e115d00, 0xc00c45da40, 0xc00562b400)
net/http.HandlerFunc.ServeHTTP(0xc014a4cac0, 0x7f264e115d00, 0xc00c45da40, 0xc00562b400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f264e115d00, 0xc00c45da40, 0xc00562b400)
net/http.HandlerFunc.ServeHTTP(0xc017ff0810, 0x7f264e115d00, 0xc00c45da40, 0xc00562b400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f264e115d00, 0xc00c45da40, 0xc00562b400)
net/http.HandlerFunc.ServeHTTP(0xc014a4cb00, 0x7f264e115d00, 0xc00c45da40, 0xc00562b400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f264e115d00, 0xc00c45da40, 0xc00562b300)
net/http.HandlerFunc.ServeHTTP(0xc017eb9860, 0x7f264e115d00, 0xc00c45da40, 0xc00562b300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00b0cd0e0, 0xc017fecc60, 0x73aefc0, 0xc00c45da40, 0xc00562b300)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52582]
I0516 00:39:44.787775  108888 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::leader-locking-kube-scheduler: (1.420217ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52582]
I0516 00:39:44.789784  108888 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.470881ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52582]
I0516 00:39:44.809138  108888 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.84475ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52582]
I0516 00:39:44.810263  108888 storage_rbac.go:284] created rolebinding.rbac.authorization.k8s.io/system::leader-locking-kube-scheduler in kube-system
I0516 00:39:44.828095  108888 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:bootstrap-signer: (1.762075ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52582]
I0516 00:39:44.830214  108888 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.589595ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52582]
I0516 00:39:44.848764  108888 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.457356ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52582]
I0516 00:39:44.849013  108888 storage_rbac.go:284] created rolebinding.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-system
I0516 00:39:44.869895  108888 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:cloud-provider: (2.844256ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52582]
I0516 00:39:44.872502  108888 wrap.go:47] GET /api/v1/namespaces/kube-system: (2.129429ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52582]
I0516 00:39:44.873316  108888 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0516 00:39:44.873339  108888 healthz.go:184] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0516 00:39:44.873546  108888 wrap.go:47] GET /healthz: (4.262081ms) 500
goroutine 37273 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc003e71880, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc003e71880, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00c230b20, 0x1f4)
net/http.Error(0x7f264e115d00, 0xc011399a28, 0xc0011b2500, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f264e115d00, 0xc011399a28, 0xc00bbc5200)
net/http.HandlerFunc.ServeHTTP(0xc0110b4d80, 0x7f264e115d00, 0xc011399a28, 0xc00bbc5200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc010545040, 0x7f264e115d00, 0xc011399a28, 0xc00bbc5200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc017fe3110, 0x7f264e115d00, 0xc011399a28, 0xc00bbc5200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x438a649, 0xe, 0xc017e170e0, 0xc017fe3110, 0x7f264e115d00, 0xc011399a28, 0xc00bbc5200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f264e115d00, 0xc011399a28, 0xc00bbc5200)
net/http.HandlerFunc.ServeHTTP(0xc014a4cac0, 0x7f264e115d00, 0xc011399a28, 0xc00bbc5200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f264e115d00, 0xc011399a28, 0xc00bbc5200)
net/http.HandlerFunc.ServeHTTP(0xc017ff0810, 0x7f264e115d00, 0xc011399a28, 0xc00bbc5200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f264e115d00, 0xc011399a28, 0xc00bbc5200)
net/http.HandlerFunc.ServeHTTP(0xc014a4cb00, 0x7f264e115d00, 0xc011399a28, 0xc00bbc5200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f264e115d00, 0xc011399a28, 0xc00bbc5100)
net/http.HandlerFunc.ServeHTTP(0xc017eb9860, 0x7f264e115d00, 0xc011399a28, 0xc00bbc5100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00d166fc0, 0xc017fecc60, 0x73aefc0, 0xc011399a28, 0xc00bbc5100)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:52408]
I0516 00:39:44.879572  108888 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0516 00:39:44.879607  108888 healthz.go:184] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0516 00:39:44.879796  108888 wrap.go:47] GET /healthz: (2.672579ms) 500
goroutine 37315 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc003ee7960, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc003ee7960, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00c12f1a0, 0x1f4)
net/http.Error(0x7f264e115d00, 0xc0167ddc68, 0xc0024ee780, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f264e115d00, 0xc0167ddc68, 0xc00ccbe500)
net/http.HandlerFunc.ServeHTTP(0xc0110b4d80, 0x7f264e115d00, 0xc0167ddc68, 0xc00ccbe500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc010545040, 0x7f264e115d00, 0xc0167ddc68, 0xc00ccbe500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc017fe3110, 0x7f264e115d00, 0xc0167ddc68, 0xc00ccbe500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x438a649, 0xe, 0xc017e170e0, 0xc017fe3110, 0x7f264e115d00, 0xc0167ddc68, 0xc00ccbe500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f264e115d00, 0xc0167ddc68, 0xc00ccbe500)
net/http.HandlerFunc.ServeHTTP(0xc014a4cac0, 0x7f264e115d00, 0xc0167ddc68, 0xc00ccbe500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f264e115d00, 0xc0167ddc68, 0xc00ccbe500)
net/http.HandlerFunc.ServeHTTP(0xc017ff0810, 0x7f264e115d00, 0xc0167ddc68, 0xc00ccbe500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f264e115d00, 0xc0167ddc68, 0xc00ccbe500)
net/http.HandlerFunc.ServeHTTP(0xc014a4cb00, 0x7f264e115d00, 0xc0167ddc68, 0xc00ccbe500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f264e115d00, 0xc0167ddc68, 0xc00ccbe400)
net/http.HandlerFunc.ServeHTTP(0xc017eb9860, 0x7f264e115d00, 0xc0167ddc68, 0xc00ccbe400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00d0a50e0, 0xc017fecc60, 0x73aefc0, 0xc0167ddc68, 0xc00ccbe400)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52408]
I0516 00:39:44.888675  108888 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.308077ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52408]
I0516 00:39:44.889032  108888 storage_rbac.go:284] created rolebinding.rbac.authorization.k8s.io/system:controller:cloud-provider in kube-system
I0516 00:39:44.907260  108888 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:token-cleaner: (971.952µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52408]
I0516 00:39:44.909223  108888 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.344556ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52408]
I0516 00:39:44.928639  108888 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.301006ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52408]
I0516 00:39:44.928991  108888 storage_rbac.go:284] created rolebinding.rbac.authorization.k8s.io/system:controller:token-cleaner in kube-system
I0516 00:39:44.947825  108888 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/rolebindings/system:controller:bootstrap-signer: (1.54677ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52408]
I0516 00:39:44.950054  108888 wrap.go:47] GET /api/v1/namespaces/kube-public: (1.717428ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52408]
I0516 00:39:44.969821  108888 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/rolebindings: (3.538936ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52408]
I0516 00:39:44.970058  108888 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0516 00:39:44.970082  108888 healthz.go:184] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0516 00:39:44.970090  108888 storage_rbac.go:284] created rolebinding.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-public
I0516 00:39:44.970245  108888 wrap.go:47] GET /healthz: (3.011282ms) 500
goroutine 37280 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0051fa4d0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0051fa4d0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00c066760, 0x1f4)
net/http.Error(0x7f264e115d00, 0xc011399c00, 0xc0101cd540, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f264e115d00, 0xc011399c00, 0xc00ccd5d00)
net/http.HandlerFunc.ServeHTTP(0xc0110b4d80, 0x7f264e115d00, 0xc011399c00, 0xc00ccd5d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc010545040, 0x7f264e115d00, 0xc011399c00, 0xc00ccd5d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc017fe3110, 0x7f264e115d00, 0xc011399c00, 0xc00ccd5d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x438a649, 0xe, 0xc017e170e0, 0xc017fe3110, 0x7f264e115d00, 0xc011399c00, 0xc00ccd5d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f264e115d00, 0xc011399c00, 0xc00ccd5d00)
net/http.HandlerFunc.ServeHTTP(0xc014a4cac0, 0x7f264e115d00, 0xc011399c00, 0xc00ccd5d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f264e115d00, 0xc011399c00, 0xc00ccd5d00)
net/http.HandlerFunc.ServeHTTP(0xc017ff0810, 0x7f264e115d00, 0xc011399c00, 0xc00ccd5d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f264e115d00, 0xc011399c00, 0xc00ccd5d00)
net/http.HandlerFunc.ServeHTTP(0xc014a4cb00, 0x7f264e115d00, 0xc011399c00, 0xc00ccd5d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f264e115d00, 0xc011399c00, 0xc00ccd5c00)
net/http.HandlerFunc.ServeHTTP(0xc017eb9860, 0x7f264e115d00, 0xc011399c00, 0xc00ccd5c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00d6591a0, 0xc017fecc60, 0x73aefc0, 0xc011399c00, 0xc00ccd5c00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:52582]
I0516 00:39:44.980715  108888 wrap.go:47] GET /healthz: (1.245361ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52582]
I0516 00:39:44.982495  108888 wrap.go:47] GET /api/v1/namespaces/default: (1.283378ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52582]
I0516 00:39:44.985004  108888 wrap.go:47] POST /api/v1/namespaces: (2.054907ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52582]
I0516 00:39:44.986745  108888 wrap.go:47] GET /api/v1/namespaces/default/services/kubernetes: (1.412402ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52582]
I0516 00:39:44.991651  108888 wrap.go:47] POST /api/v1/namespaces/default/services: (4.397374ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52582]
I0516 00:39:44.993559  108888 wrap.go:47] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.079792ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52582]
I0516 00:39:44.995775  108888 wrap.go:47] POST /api/v1/namespaces/default/endpoints: (1.851921ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52582]
I0516 00:39:45.068867  108888 wrap.go:47] GET /healthz: (1.940754ms) 200 [Go-http-client/1.1 127.0.0.1:52582]
W0516 00:39:45.069654  108888 mutation_detector.go:48] Mutation detector is enabled, this will result in memory leakage.
W0516 00:39:45.069700  108888 mutation_detector.go:48] Mutation detector is enabled, this will result in memory leakage.
W0516 00:39:45.069722  108888 mutation_detector.go:48] Mutation detector is enabled, this will result in memory leakage.
W0516 00:39:45.069733  108888 mutation_detector.go:48] Mutation detector is enabled, this will result in memory leakage.
W0516 00:39:45.069743  108888 mutation_detector.go:48] Mutation detector is enabled, this will result in memory leakage.
W0516 00:39:45.069753  108888 mutation_detector.go:48] Mutation detector is enabled, this will result in memory leakage.
W0516 00:39:45.069764  108888 mutation_detector.go:48] Mutation detector is enabled, this will result in memory leakage.
W0516 00:39:45.069775  108888 mutation_detector.go:48] Mutation detector is enabled, this will result in memory leakage.
W0516 00:39:45.069795  108888 mutation_detector.go:48] Mutation detector is enabled, this will result in memory leakage.
W0516 00:39:45.069806  108888 mutation_detector.go:48] Mutation detector is enabled, this will result in memory leakage.
I0516 00:39:45.069868  108888 factory.go:337] Creating scheduler from algorithm provider 'DefaultProvider'
I0516 00:39:45.069878  108888 factory.go:418] Creating scheduler with fit predicates 'map[CheckNodeCondition:{} CheckNodeDiskPressure:{} CheckNodeMemoryPressure:{} CheckNodePIDPressure:{} CheckVolumeBinding:{} GeneralPredicates:{} MatchInterPodAffinity:{} MaxAzureDiskVolumeCount:{} MaxCSIVolumeCountPred:{} MaxEBSVolumeCount:{} MaxGCEPDVolumeCount:{} NoDiskConflict:{} NoVolumeZoneConflict:{} PodToleratesNodeTaints:{}]' and priority functions 'map[BalancedResourceAllocation:{} ImageLocalityPriority:{} InterPodAffinityPriority:{} LeastRequestedPriority:{} NodeAffinityPriority:{} NodePreferAvoidPodsPriority:{} SelectorSpreadPriority:{} TaintTolerationPriority:{}]'
I0516 00:39:45.070085  108888 controller_utils.go:1029] Waiting for caches to sync for scheduler controller
I0516 00:39:45.070337  108888 reflector.go:122] Starting reflector *v1.Pod (12h0m0s) from k8s.io/kubernetes/test/integration/scheduler/util.go:209
I0516 00:39:45.070358  108888 reflector.go:160] Listing and watching *v1.Pod from k8s.io/kubernetes/test/integration/scheduler/util.go:209
I0516 00:39:45.071313  108888 wrap.go:47] GET /api/v1/pods?fieldSelector=status.phase%21%3DFailed%2Cstatus.phase%21%3DSucceeded&limit=500&resourceVersion=0: (683.082µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52582]
I0516 00:39:45.072259  108888 get.go:250] Starting watch for /api/v1/pods, rv=23998 labels= fields=status.phase!=Failed,status.phase!=Succeeded timeout=6m38s
I0516 00:39:45.170285  108888 shared_informer.go:176] caches populated
I0516 00:39:45.170369  108888 controller_utils.go:1036] Caches are synced for scheduler controller
I0516 00:39:45.170783  108888 reflector.go:122] Starting reflector *v1.Node (1s) from k8s.io/client-go/informers/factory.go:133
I0516 00:39:45.170811  108888 reflector.go:160] Listing and watching *v1.Node from k8s.io/client-go/informers/factory.go:133
I0516 00:39:45.170817  108888 reflector.go:122] Starting reflector *v1.StorageClass (1s) from k8s.io/client-go/informers/factory.go:133
I0516 00:39:45.170836  108888 reflector.go:160] Listing and watching *v1.StorageClass from k8s.io/client-go/informers/factory.go:133
I0516 00:39:45.170842  108888 reflector.go:122] Starting reflector *v1.ReplicaSet (1s) from k8s.io/client-go/informers/factory.go:133
I0516 00:39:45.170866  108888 reflector.go:160] Listing and watching *v1.ReplicaSet from k8s.io/client-go/informers/factory.go:133
I0516 00:39:45.171197  108888 reflector.go:122] Starting reflector *v1.PersistentVolumeClaim (1s) from k8s.io/client-go/informers/factory.go:133
I0516 00:39:45.171211  108888 reflector.go:160] Listing and watching *v1.PersistentVolumeClaim from k8s.io/client-go/informers/factory.go:133
I0516 00:39:45.171215  108888 reflector.go:122] Starting reflector *v1.StatefulSet (1s) from k8s.io/client-go/informers/factory.go:133
I0516 00:39:45.171227  108888 reflector.go:160] Listing and watching *v1.StatefulSet from k8s.io/client-go/informers/factory.go:133
I0516 00:39:45.171618  108888 reflector.go:122] Starting reflector *v1beta1.PodDisruptionBudget (1s) from k8s.io/client-go/informers/factory.go:133
I0516 00:39:45.171632  108888 reflector.go:160] Listing and watching *v1beta1.PodDisruptionBudget from k8s.io/client-go/informers/factory.go:133
I0516 00:39:45.171665  108888 reflector.go:122] Starting reflector *v1.Service (1s) from k8s.io/client-go/informers/factory.go:133
I0516 00:39:45.171679  108888 reflector.go:160] Listing and watching *v1.Service from k8s.io/client-go/informers/factory.go:133
I0516 00:39:45.171712  108888 reflector.go:122] Starting reflector *v1.PersistentVolume (1s) from k8s.io/client-go/informers/factory.go:133
I0516 00:39:45.171742  108888 reflector.go:160] Listing and watching *v1.PersistentVolume from k8s.io/client-go/informers/factory.go:133
I0516 00:39:45.171941  108888 reflector.go:122] Starting reflector *v1.ReplicationController (1s) from k8s.io/client-go/informers/factory.go:133
I0516 00:39:45.171953  108888 reflector.go:160] Listing and watching *v1.ReplicationController from k8s.io/client-go/informers/factory.go:133
I0516 00:39:45.173019  108888 wrap.go:47] GET /apis/apps/v1/statefulsets?limit=500&resourceVersion=0: (444.399µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52940]
I0516 00:39:45.173022  108888 wrap.go:47] GET /apis/apps/v1/replicasets?limit=500&resourceVersion=0: (573.162µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52408]
I0516 00:39:45.173129  108888 wrap.go:47] GET /api/v1/replicationcontrollers?limit=500&resourceVersion=0: (563.43µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52952]
I0516 00:39:45.173737  108888 wrap.go:47] GET /api/v1/services?limit=500&resourceVersion=0: (505.567µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52946]
I0516 00:39:45.173743  108888 wrap.go:47] GET /apis/policy/v1beta1/poddisruptionbudgets?limit=500&resourceVersion=0: (319.863µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52408]
I0516 00:39:45.173803  108888 wrap.go:47] GET /api/v1/persistentvolumeclaims?limit=500&resourceVersion=0: (710.277µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52938]
I0516 00:39:45.174039  108888 get.go:250] Starting watch for /apis/apps/v1/replicasets, rv=24004 labels= fields= timeout=5m16s
I0516 00:39:45.174215  108888 wrap.go:47] GET /api/v1/persistentvolumes?limit=500&resourceVersion=0: (385.301µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52948]
I0516 00:39:45.174692  108888 wrap.go:47] GET /api/v1/nodes?limit=500&resourceVersion=0: (376.929µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52950]
I0516 00:39:45.174995  108888 get.go:250] Starting watch for /api/v1/replicationcontrollers, rv=23998 labels= fields= timeout=5m45s
I0516 00:39:45.175081  108888 get.go:250] Starting watch for /api/v1/services, rv=24211 labels= fields= timeout=6m35s
I0516 00:39:45.175416  108888 get.go:250] Starting watch for /api/v1/persistentvolumeclaims, rv=23998 labels= fields= timeout=7m59s
I0516 00:39:45.175746  108888 get.go:250] Starting watch for /api/v1/persistentvolumes, rv=23998 labels= fields= timeout=8m56s
I0516 00:39:45.175863  108888 get.go:250] Starting watch for /api/v1/nodes, rv=23998 labels= fields= timeout=8m37s
I0516 00:39:45.175863  108888 get.go:250] Starting watch for /apis/apps/v1/statefulsets, rv=24004 labels= fields= timeout=9m54s
I0516 00:39:45.176112  108888 get.go:250] Starting watch for /apis/policy/v1beta1/poddisruptionbudgets, rv=24003 labels= fields= timeout=6m35s
I0516 00:39:45.176582  108888 wrap.go:47] GET /apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0: (1.780964ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52954]
I0516 00:39:45.177267  108888 get.go:250] Starting watch for /apis/storage.k8s.io/v1/storageclasses, rv=24004 labels= fields= timeout=9m27s
I0516 00:39:45.270748  108888 shared_informer.go:176] caches populated
I0516 00:39:45.370908  108888 shared_informer.go:176] caches populated
I0516 00:39:45.471137  108888 shared_informer.go:176] caches populated
I0516 00:39:45.571401  108888 shared_informer.go:176] caches populated
I0516 00:39:45.671656  108888 shared_informer.go:176] caches populated
I0516 00:39:45.771863  108888 shared_informer.go:176] caches populated
I0516 00:39:45.872129  108888 shared_informer.go:176] caches populated
I0516 00:39:45.972338  108888 shared_informer.go:176] caches populated
I0516 00:39:46.072605  108888 shared_informer.go:176] caches populated
I0516 00:39:46.172815  108888 shared_informer.go:176] caches populated
I0516 00:39:46.174852  108888 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0516 00:39:46.174970  108888 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0516 00:39:46.175737  108888 wrap.go:47] POST /api/v1/nodes: (2.420583ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52962]
I0516 00:39:46.176149  108888 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0516 00:39:46.176238  108888 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0516 00:39:46.177113  108888 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0516 00:39:46.179171  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods: (2.785135ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52962]
I0516 00:39:46.179613  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/rpod-0
I0516 00:39:46.179625  108888 scheduler.go:452] Attempting to schedule pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/rpod-0
I0516 00:39:46.179756  108888 scheduler_binder.go:256] AssumePodVolumes for pod "preemption-race6285e400-528e-4217-8497-a7dfe793aee9/rpod-0", node "node1"
I0516 00:39:46.179769  108888 scheduler_binder.go:266] AssumePodVolumes for pod "preemption-race6285e400-528e-4217-8497-a7dfe793aee9/rpod-0", node "node1": all PVCs bound and nothing to do
I0516 00:39:46.179817  108888 factory.go:711] Attempting to bind rpod-0 to node1
I0516 00:39:46.181378  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods: (1.682028ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52962]
I0516 00:39:46.181733  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/rpod-1
I0516 00:39:46.181748  108888 scheduler.go:452] Attempting to schedule pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/rpod-1
I0516 00:39:46.181858  108888 scheduler_binder.go:256] AssumePodVolumes for pod "preemption-race6285e400-528e-4217-8497-a7dfe793aee9/rpod-1", node "node1"
I0516 00:39:46.181878  108888 scheduler_binder.go:266] AssumePodVolumes for pod "preemption-race6285e400-528e-4217-8497-a7dfe793aee9/rpod-1", node "node1": all PVCs bound and nothing to do
I0516 00:39:46.181949  108888 factory.go:711] Attempting to bind rpod-1 to node1
I0516 00:39:46.182121  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/rpod-0/binding: (1.791181ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53110]
I0516 00:39:46.182337  108888 scheduler.go:589] pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/rpod-0 is bound successfully on node node1, 1 nodes evaluated, 1 nodes were found feasible
I0516 00:39:46.183648  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/rpod-1/binding: (1.384845ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52962]
I0516 00:39:46.183838  108888 scheduler.go:589] pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/rpod-1 is bound successfully on node node1, 1 nodes evaluated, 1 nodes were found feasible
I0516 00:39:46.184177  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events: (1.539529ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53110]
I0516 00:39:46.185992  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events: (1.395549ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53110]
I0516 00:39:46.284090  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/rpod-0: (1.970683ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53110]
I0516 00:39:46.386950  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/rpod-1: (1.802993ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53110]
I0516 00:39:46.387321  108888 preemption_test.go:561] Creating the preemptor pod...
I0516 00:39:46.391118  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods: (3.493227ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53110]
I0516 00:39:46.391381  108888 preemption_test.go:567] Creating additional pods...
I0516 00:39:46.391628  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/preemptor-pod
I0516 00:39:46.391651  108888 scheduler.go:452] Attempting to schedule pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/preemptor-pod
I0516 00:39:46.391883  108888 factory.go:649] Unable to schedule preemption-race6285e400-528e-4217-8497-a7dfe793aee9/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0516 00:39:46.391989  108888 factory.go:720] Updating pod condition for preemption-race6285e400-528e-4217-8497-a7dfe793aee9/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0516 00:39:46.397284  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods: (5.458556ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53110]
I0516 00:39:46.399580  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/preemptor-pod: (2.831325ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53114]
I0516 00:39:46.400235  108888 wrap.go:47] PUT /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/preemptor-pod/status: (3.575194ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52962]
I0516 00:39:46.400566  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events: (2.663379ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53116]
I0516 00:39:46.401114  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods: (3.309505ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53110]
I0516 00:39:46.401951  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/preemptor-pod: (1.319872ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52962]
I0516 00:39:46.402216  108888 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
E0516 00:39:46.402314  108888 utils.go:79] pod.Status.StartTime is nil for pod rpod-1. Should not reach here.
E0516 00:39:46.402324  108888 utils.go:79] pod.Status.StartTime is nil for pod rpod-0. Should not reach here.
I0516 00:39:46.403225  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods: (1.716054ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53116]
I0516 00:39:46.405652  108888 wrap.go:47] PUT /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/preemptor-pod/status: (2.61752ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52962]
I0516 00:39:46.405798  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods: (2.139991ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53116]
I0516 00:39:46.407692  108888 cacher.go:739] cacher (*core.Pod): 1 objects queued in incoming channel.
I0516 00:39:46.408725  108888 cacher.go:739] cacher (*core.Pod): 2 objects queued in incoming channel.
I0516 00:39:46.412191  108888 wrap.go:47] DELETE /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/rpod-0: (6.1444ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52962]
I0516 00:39:46.412471  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-0
I0516 00:39:46.412487  108888 scheduler.go:452] Attempting to schedule pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-0
I0516 00:39:46.412624  108888 factory.go:649] Unable to schedule preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-0: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0516 00:39:46.412662  108888 factory.go:720] Updating pod condition for preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-0 to (PodScheduled==False, Reason=Unschedulable)
I0516 00:39:46.414561  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events: (1.923383ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52962]
I0516 00:39:46.416320  108888 wrap.go:47] PUT /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-0/status: (2.611068ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53118]
I0516 00:39:46.416651  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-0: (3.291477ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53120]
I0516 00:39:46.418372  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-0: (1.069455ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53118]
I0516 00:39:46.418621  108888 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0516 00:39:46.418802  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-1
I0516 00:39:46.418813  108888 scheduler.go:452] Attempting to schedule pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-1
I0516 00:39:46.418898  108888 factory.go:649] Unable to schedule preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-1: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0516 00:39:46.418956  108888 factory.go:720] Updating pod condition for preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-1 to (PodScheduled==False, Reason=Unschedulable)
I0516 00:39:46.421686  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods: (15.512111ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53114]
I0516 00:39:46.431322  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods: (8.937858ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53114]
I0516 00:39:46.432771  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events: (17.781737ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52962]
I0516 00:39:46.436685  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-1: (17.411343ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53120]
I0516 00:39:46.438999  108888 wrap.go:47] PUT /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-1/status: (19.307278ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53118]
I0516 00:39:46.447304  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events: (12.804604ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52962]
I0516 00:39:46.448389  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-1: (8.766031ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53118]
I0516 00:39:46.448716  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods: (15.140851ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53114]
I0516 00:39:46.449071  108888 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0516 00:39:46.451199  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-2
I0516 00:39:46.451278  108888 scheduler.go:452] Attempting to schedule pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-2
I0516 00:39:46.451467  108888 factory.go:649] Unable to schedule preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-2: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0516 00:39:46.451550  108888 factory.go:720] Updating pod condition for preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-2 to (PodScheduled==False, Reason=Unschedulable)
I0516 00:39:46.453639  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods: (2.488207ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52962]
I0516 00:39:46.454961  108888 wrap.go:47] PUT /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-2/status: (3.135484ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53120]
I0516 00:39:46.457226  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods: (2.648382ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53122]
I0516 00:39:46.457630  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-2: (3.325648ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52962]
I0516 00:39:46.457832  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events: (2.159788ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53126]
I0516 00:39:46.460443  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods: (1.884187ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53126]
I0516 00:39:46.462782  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-2: (2.786683ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53120]
I0516 00:39:46.463042  108888 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0516 00:39:46.463212  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-3
I0516 00:39:46.463241  108888 scheduler.go:452] Attempting to schedule pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-3
I0516 00:39:46.463326  108888 factory.go:649] Unable to schedule preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-3: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0516 00:39:46.463374  108888 factory.go:720] Updating pod condition for preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-3 to (PodScheduled==False, Reason=Unschedulable)
I0516 00:39:46.467602  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods: (5.835999ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53126]
I0516 00:39:46.468228  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-3: (4.283511ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53124]
I0516 00:39:46.468935  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events: (3.874968ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53128]
I0516 00:39:46.469470  108888 wrap.go:47] PUT /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-3/status: (5.512603ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53120]
I0516 00:39:46.471361  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-3: (1.452094ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53128]
I0516 00:39:46.471616  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods: (2.950793ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53126]
I0516 00:39:46.472124  108888 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0516 00:39:46.474202  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods: (1.855686ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53126]
I0516 00:39:46.474435  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-4
I0516 00:39:46.474466  108888 scheduler.go:452] Attempting to schedule pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-4
I0516 00:39:46.474605  108888 factory.go:649] Unable to schedule preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-4: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0516 00:39:46.474660  108888 factory.go:720] Updating pod condition for preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-4 to (PodScheduled==False, Reason=Unschedulable)
I0516 00:39:46.482721  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods: (8.045628ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53128]
I0516 00:39:46.484480  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events: (1.403986ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53132]
I0516 00:39:46.484602  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-4: (2.104712ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53130]
I0516 00:39:46.484995  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods: (1.478349ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53128]
I0516 00:39:46.485196  108888 wrap.go:47] PUT /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-4/status: (2.852727ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53124]
I0516 00:39:46.487408  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-4: (1.211138ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53124]
I0516 00:39:46.487600  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods: (1.643851ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53130]
I0516 00:39:46.487650  108888 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0516 00:39:46.487827  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-5
I0516 00:39:46.487843  108888 scheduler.go:452] Attempting to schedule pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-5
I0516 00:39:46.488022  108888 factory.go:649] Unable to schedule preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-5: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0516 00:39:46.488077  108888 factory.go:720] Updating pod condition for preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-5 to (PodScheduled==False, Reason=Unschedulable)
I0516 00:39:46.489701  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-5: (1.148493ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53134]
I0516 00:39:46.490231  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods: (2.0084ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53124]
I0516 00:39:46.492135  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods: (1.532934ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53124]
I0516 00:39:46.492424  108888 wrap.go:47] PUT /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-5/status: (3.953354ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53132]
I0516 00:39:46.492685  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events: (4.066068ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53136]
I0516 00:39:46.494203  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-5: (1.224304ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53132]
I0516 00:39:46.494418  108888 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0516 00:39:46.494597  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-1
I0516 00:39:46.494618  108888 scheduler.go:452] Attempting to schedule pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-1
I0516 00:39:46.494720  108888 factory.go:649] Unable to schedule preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-1: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0516 00:39:46.494768  108888 factory.go:720] Updating pod condition for preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-1 to (PodScheduled==False, Reason=Unschedulable)
I0516 00:39:46.495154  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods: (2.376356ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53124]
I0516 00:39:46.496749  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-1: (1.780065ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53132]
I0516 00:39:46.496845  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-1: (1.702399ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53134]
I0516 00:39:46.497438  108888 wrap.go:47] PATCH /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events/ppod-1.159f02e6ce3742c5: (1.837238ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53124]
I0516 00:39:46.497494  108888 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0516 00:39:46.497758  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-6
I0516 00:39:46.497777  108888 scheduler.go:452] Attempting to schedule pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-6
I0516 00:39:46.497869  108888 factory.go:649] Unable to schedule preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-6: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0516 00:39:46.497960  108888 factory.go:720] Updating pod condition for preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-6 to (PodScheduled==False, Reason=Unschedulable)
I0516 00:39:46.500080  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events: (1.512168ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53140]
I0516 00:39:46.500712  108888 wrap.go:47] PUT /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-6/status: (2.242383ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53132]
I0516 00:39:46.501118  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-6: (2.918988ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53134]
I0516 00:39:46.501767  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods: (4.819301ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53138]
I0516 00:39:46.502740  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-6: (1.127245ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53132]
I0516 00:39:46.503078  108888 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0516 00:39:46.503269  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-7
I0516 00:39:46.503289  108888 scheduler.go:452] Attempting to schedule pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-7
I0516 00:39:46.503423  108888 factory.go:649] Unable to schedule preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-7: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0516 00:39:46.503585  108888 factory.go:720] Updating pod condition for preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-7 to (PodScheduled==False, Reason=Unschedulable)
I0516 00:39:46.505428  108888 wrap.go:47] PUT /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-7/status: (1.555798ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53140]
I0516 00:39:46.505545  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-7: (1.744748ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53132]
I0516 00:39:46.505751  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods: (3.352185ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53138]
I0516 00:39:46.506209  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events: (1.951926ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53142]
I0516 00:39:46.507002  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-7: (903.06µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53140]
I0516 00:39:46.507227  108888 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0516 00:39:46.507420  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-8
I0516 00:39:46.507439  108888 scheduler.go:452] Attempting to schedule pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-8
I0516 00:39:46.507528  108888 factory.go:649] Unable to schedule preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-8: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0516 00:39:46.507766  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods: (1.286748ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53132]
I0516 00:39:46.508127  108888 factory.go:720] Updating pod condition for preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-8 to (PodScheduled==False, Reason=Unschedulable)
I0516 00:39:46.508774  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-8: (935.959µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53140]
I0516 00:39:46.510277  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events: (1.317657ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53144]
I0516 00:39:46.510664  108888 wrap.go:47] PUT /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-8/status: (1.700529ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53142]
I0516 00:39:46.510820  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods: (2.569678ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53132]
I0516 00:39:46.512018  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-8: (986.528µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53144]
I0516 00:39:46.512322  108888 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0516 00:39:46.512560  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-9
I0516 00:39:46.512579  108888 scheduler.go:452] Attempting to schedule pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-9
I0516 00:39:46.512806  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods: (1.43131ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53132]
I0516 00:39:46.513030  108888 factory.go:649] Unable to schedule preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-9: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0516 00:39:46.513076  108888 factory.go:720] Updating pod condition for preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-9 to (PodScheduled==False, Reason=Unschedulable)
I0516 00:39:46.514234  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-9: (908.507µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53140]
I0516 00:39:46.514707  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events: (1.432933ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53144]
I0516 00:39:46.515365  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods: (1.722078ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53146]
I0516 00:39:46.515726  108888 wrap.go:47] PUT /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-9/status: (2.041181ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53148]
I0516 00:39:46.517612  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-9: (1.461651ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53148]
I0516 00:39:46.517676  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods: (1.783496ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53144]
I0516 00:39:46.517862  108888 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0516 00:39:46.518068  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-10
I0516 00:39:46.518099  108888 scheduler.go:452] Attempting to schedule pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-10
I0516 00:39:46.518180  108888 factory.go:649] Unable to schedule preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-10: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0516 00:39:46.518274  108888 factory.go:720] Updating pod condition for preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-10 to (PodScheduled==False, Reason=Unschedulable)
I0516 00:39:46.520073  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-10: (1.311742ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53140]
I0516 00:39:46.521116  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods: (2.922915ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53148]
I0516 00:39:46.521162  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events: (2.156231ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53150]
I0516 00:39:46.521380  108888 wrap.go:47] PUT /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-10/status: (1.569643ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53152]
I0516 00:39:46.522856  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods: (1.392955ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53148]
I0516 00:39:46.524322  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-10: (2.255876ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53152]
I0516 00:39:46.524619  108888 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0516 00:39:46.524825  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-11
I0516 00:39:46.524850  108888 scheduler.go:452] Attempting to schedule pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-11
I0516 00:39:46.524997  108888 factory.go:649] Unable to schedule preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-11: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0516 00:39:46.525064  108888 factory.go:720] Updating pod condition for preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-11 to (PodScheduled==False, Reason=Unschedulable)
I0516 00:39:46.525424  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods: (1.776775ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53148]
I0516 00:39:46.527027  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events: (1.540553ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53140]
I0516 00:39:46.527783  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods: (1.806428ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53148]
I0516 00:39:46.527786  108888 wrap.go:47] PUT /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-11/status: (2.277616ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53152]
I0516 00:39:46.529820  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods: (1.638065ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53148]
I0516 00:39:46.530885  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-11: (2.275228ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53140]
E0516 00:39:46.531131  108888 factory.go:686] pod is already present in the activeQ
I0516 00:39:46.548637  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods: (18.454937ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53148]
I0516 00:39:46.550354  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-11: (22.084192ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53154]
I0516 00:39:46.551504  108888 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0516 00:39:46.551755  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-12
I0516 00:39:46.551798  108888 scheduler.go:452] Attempting to schedule pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-12
I0516 00:39:46.551952  108888 factory.go:649] Unable to schedule preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-12: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0516 00:39:46.552187  108888 factory.go:720] Updating pod condition for preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-12 to (PodScheduled==False, Reason=Unschedulable)
I0516 00:39:46.554465  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-12: (1.457257ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53140]
I0516 00:39:46.556291  108888 wrap.go:47] PUT /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-12/status: (3.728103ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53148]
I0516 00:39:46.556734  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events: (1.746137ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53140]
I0516 00:39:46.559120  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-12: (2.329344ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53148]
I0516 00:39:46.559491  108888 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0516 00:39:46.559679  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-13
I0516 00:39:46.559729  108888 scheduler.go:452] Attempting to schedule pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-13
I0516 00:39:46.559937  108888 factory.go:649] Unable to schedule preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-13: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0516 00:39:46.559984  108888 factory.go:720] Updating pod condition for preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-13 to (PodScheduled==False, Reason=Unschedulable)
I0516 00:39:46.561519  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-13: (1.096325ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53182]
I0516 00:39:46.562521  108888 wrap.go:47] PUT /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-13/status: (2.301121ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53148]
I0516 00:39:46.562812  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events: (2.036386ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53184]
I0516 00:39:46.564606  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods: (5.920589ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53140]
I0516 00:39:46.565351  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-13: (2.073336ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53148]
I0516 00:39:46.565681  108888 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0516 00:39:46.566892  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-14
I0516 00:39:46.566953  108888 scheduler.go:452] Attempting to schedule pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-14
I0516 00:39:46.567116  108888 factory.go:649] Unable to schedule preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-14: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0516 00:39:46.567242  108888 factory.go:720] Updating pod condition for preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-14 to (PodScheduled==False, Reason=Unschedulable)
I0516 00:39:46.577834  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods: (5.586218ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53188]
I0516 00:39:46.578183  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-14: (5.978081ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53182]
I0516 00:39:46.578631  108888 wrap.go:47] PUT /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-14/status: (6.060111ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53148]
I0516 00:39:46.583872  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events: (8.739813ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53140]
I0516 00:39:46.584037  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods: (4.697925ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53182]
I0516 00:39:46.584252  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-14: (4.769174ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53194]
I0516 00:39:46.584604  108888 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0516 00:39:46.584791  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-15
I0516 00:39:46.584813  108888 scheduler.go:452] Attempting to schedule pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-15
I0516 00:39:46.584985  108888 factory.go:649] Unable to schedule preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-15: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0516 00:39:46.585028  108888 factory.go:720] Updating pod condition for preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-15 to (PodScheduled==False, Reason=Unschedulable)
I0516 00:39:46.628943  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events: (6.063154ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53196]
I0516 00:39:46.631301  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-15: (8.973761ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53198]
I0516 00:39:46.631805  108888 wrap.go:47] PUT /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-15/status: (46.481324ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53140]
I0516 00:39:46.632935  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods: (48.270964ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53182]
I0516 00:39:46.637384  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods: (3.915049ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53182]
I0516 00:39:46.651775  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods: (13.744474ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53182]
E0516 00:39:46.652786  108888 factory.go:686] pod is already present in the activeQ
I0516 00:39:46.654968  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-15: (1.350264ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53198]
I0516 00:39:46.655218  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods: (2.373644ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53182]
I0516 00:39:46.661447  108888 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0516 00:39:46.662451  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-16
I0516 00:39:46.662476  108888 scheduler.go:452] Attempting to schedule pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-16
I0516 00:39:46.662650  108888 factory.go:649] Unable to schedule preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-16: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0516 00:39:46.662704  108888 factory.go:720] Updating pod condition for preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-16 to (PodScheduled==False, Reason=Unschedulable)
I0516 00:39:46.685384  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events: (21.452829ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53202]
I0516 00:39:46.686046  108888 wrap.go:47] PUT /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-16/status: (22.514694ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53196]
I0516 00:39:46.686348  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-16: (22.741661ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53200]
I0516 00:39:46.686617  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods: (25.16691ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53182]
E0516 00:39:46.687170  108888 factory.go:686] pod is already present in the activeQ
I0516 00:39:46.690688  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-16: (3.156333ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53200]
I0516 00:39:46.693373  108888 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0516 00:39:46.694108  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods: (6.336352ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53182]
I0516 00:39:46.695484  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-17
I0516 00:39:46.695503  108888 scheduler.go:452] Attempting to schedule pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-17
I0516 00:39:46.695632  108888 factory.go:649] Unable to schedule preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-17: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0516 00:39:46.695673  108888 factory.go:720] Updating pod condition for preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-17 to (PodScheduled==False, Reason=Unschedulable)
I0516 00:39:46.708114  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events: (11.616994ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53210]
I0516 00:39:46.708790  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-17: (11.568837ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53208]
I0516 00:39:46.709323  108888 wrap.go:47] PUT /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-17/status: (12.829981ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53202]
I0516 00:39:46.715488  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-17: (5.705267ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53210]
I0516 00:39:46.715955  108888 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0516 00:39:46.716011  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods: (21.38765ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53200]
I0516 00:39:46.716519  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-18
I0516 00:39:46.716548  108888 scheduler.go:452] Attempting to schedule pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-18
I0516 00:39:46.716658  108888 factory.go:649] Unable to schedule preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-18: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0516 00:39:46.716697  108888 factory.go:720] Updating pod condition for preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-18 to (PodScheduled==False, Reason=Unschedulable)
I0516 00:39:46.725147  108888 wrap.go:47] PUT /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-18/status: (3.420972ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53208]
I0516 00:39:46.725342  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events: (5.100574ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53214]
I0516 00:39:46.725639  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-18: (4.354054ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53212]
I0516 00:39:46.729212  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-18: (3.239658ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53208]
I0516 00:39:46.729614  108888 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0516 00:39:46.729838  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-19
I0516 00:39:46.729850  108888 scheduler.go:452] Attempting to schedule pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-19
I0516 00:39:46.729958  108888 factory.go:649] Unable to schedule preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-19: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0516 00:39:46.730001  108888 factory.go:720] Updating pod condition for preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-19 to (PodScheduled==False, Reason=Unschedulable)
I0516 00:39:46.736878  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods: (12.84903ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53210]
I0516 00:39:46.737807  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-19: (6.321224ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53214]
I0516 00:39:46.738795  108888 wrap.go:47] PUT /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-19/status: (7.224203ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53212]
I0516 00:39:46.743870  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events: (13.012267ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53216]
I0516 00:39:46.746632  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-19: (2.60591ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53212]
I0516 00:39:46.746964  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods: (3.335695ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53214]
I0516 00:39:46.747186  108888 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0516 00:39:46.747539  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-20
I0516 00:39:46.747591  108888 scheduler.go:452] Attempting to schedule pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-20
I0516 00:39:46.747758  108888 factory.go:649] Unable to schedule preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-20: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0516 00:39:46.747829  108888 factory.go:720] Updating pod condition for preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-20 to (PodScheduled==False, Reason=Unschedulable)
I0516 00:39:46.751817  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-20: (3.035345ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53218]
I0516 00:39:46.752399  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods: (4.979772ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53212]
I0516 00:39:46.752713  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events: (4.310177ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53216]
I0516 00:39:46.754958  108888 wrap.go:47] PUT /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-20/status: (4.174652ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53220]
I0516 00:39:46.757876  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-20: (1.773647ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53218]
I0516 00:39:46.758154  108888 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0516 00:39:46.758366  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-21
I0516 00:39:46.758380  108888 scheduler.go:452] Attempting to schedule pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-21
I0516 00:39:46.758484  108888 factory.go:649] Unable to schedule preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-21: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0516 00:39:46.758540  108888 factory.go:720] Updating pod condition for preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-21 to (PodScheduled==False, Reason=Unschedulable)
I0516 00:39:46.760807  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods: (7.068284ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53212]
I0516 00:39:46.765007  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods: (3.72249ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53212]
I0516 00:39:46.765036  108888 wrap.go:47] PUT /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-21/status: (6.244166ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53218]
I0516 00:39:46.765337  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-21: (5.73121ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53216]
I0516 00:39:46.765848  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events: (6.538262ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53222]
E0516 00:39:46.766572  108888 factory.go:686] pod is already present in the activeQ
I0516 00:39:46.783327  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-21: (16.863477ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53212]
I0516 00:39:46.783699  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods: (16.874753ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53218]
I0516 00:39:46.784367  108888 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0516 00:39:46.785828  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-22
I0516 00:39:46.785857  108888 scheduler.go:452] Attempting to schedule pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-22
I0516 00:39:46.786018  108888 factory.go:649] Unable to schedule preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-22: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0516 00:39:46.786066  108888 factory.go:720] Updating pod condition for preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-22 to (PodScheduled==False, Reason=Unschedulable)
I0516 00:39:46.792888  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods: (8.303046ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53222]
I0516 00:39:46.793292  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-22: (6.338961ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53216]
I0516 00:39:46.793569  108888 wrap.go:47] PUT /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-22/status: (6.186359ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53212]
I0516 00:39:46.793956  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events: (7.041835ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53234]
E0516 00:39:46.795436  108888 factory.go:686] pod is already present in the activeQ
I0516 00:39:46.797662  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-22: (1.584614ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53212]
I0516 00:39:46.798357  108888 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0516 00:39:46.798760  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-23
I0516 00:39:46.798781  108888 scheduler.go:452] Attempting to schedule pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-23
I0516 00:39:46.798950  108888 factory.go:649] Unable to schedule preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-23: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0516 00:39:46.799000  108888 factory.go:720] Updating pod condition for preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-23 to (PodScheduled==False, Reason=Unschedulable)
I0516 00:39:46.802083  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-23: (1.688004ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53234]
I0516 00:39:46.802626  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods: (6.833195ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53216]
I0516 00:39:46.802747  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events: (2.334456ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53236]
I0516 00:39:46.804786  108888 wrap.go:47] PUT /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-23/status: (4.016448ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53212]
I0516 00:39:46.807094  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-23: (1.635217ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53216]
I0516 00:39:46.807573  108888 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0516 00:39:46.807760  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-24
I0516 00:39:46.807785  108888 scheduler.go:452] Attempting to schedule pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-24
I0516 00:39:46.807891  108888 factory.go:649] Unable to schedule preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-24: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0516 00:39:46.807984  108888 factory.go:720] Updating pod condition for preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-24 to (PodScheduled==False, Reason=Unschedulable)
I0516 00:39:46.813722  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-24: (5.020403ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53234]
I0516 00:39:46.813782  108888 wrap.go:47] PUT /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-24/status: (5.051484ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53216]
I0516 00:39:46.816088  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events: (1.921445ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53238]
I0516 00:39:46.816495  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-24: (1.166833ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53216]
I0516 00:39:46.816781  108888 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0516 00:39:46.817000  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-25
I0516 00:39:46.817058  108888 scheduler.go:452] Attempting to schedule pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-25
I0516 00:39:46.817224  108888 factory.go:649] Unable to schedule preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-25: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0516 00:39:46.817309  108888 factory.go:720] Updating pod condition for preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-25 to (PodScheduled==False, Reason=Unschedulable)
I0516 00:39:46.820476  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-25: (1.558971ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53238]
I0516 00:39:46.821110  108888 wrap.go:47] PUT /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-25/status: (3.329042ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53234]
I0516 00:39:46.821238  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events: (3.179671ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53240]
I0516 00:39:46.822675  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-25: (959.89µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53234]
I0516 00:39:46.823140  108888 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0516 00:39:46.823338  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-26
I0516 00:39:46.823356  108888 scheduler.go:452] Attempting to schedule pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-26
I0516 00:39:46.823478  108888 factory.go:649] Unable to schedule preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-26: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0516 00:39:46.823525  108888 factory.go:720] Updating pod condition for preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-26 to (PodScheduled==False, Reason=Unschedulable)
I0516 00:39:46.825985  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-26: (2.188362ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53240]
I0516 00:39:46.826185  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events: (1.351144ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53242]
I0516 00:39:46.826259  108888 wrap.go:47] PUT /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-26/status: (2.501406ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53234]
E0516 00:39:46.826349  108888 factory.go:686] pod is already present in the activeQ
I0516 00:39:46.828639  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-26: (2.003969ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53242]
I0516 00:39:46.828886  108888 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0516 00:39:46.829098  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-27
I0516 00:39:46.829153  108888 scheduler.go:452] Attempting to schedule pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-27
I0516 00:39:46.829301  108888 factory.go:649] Unable to schedule preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-27: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0516 00:39:46.829364  108888 factory.go:720] Updating pod condition for preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-27 to (PodScheduled==False, Reason=Unschedulable)
I0516 00:39:46.832161  108888 wrap.go:47] PUT /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-27/status: (2.447196ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53242]
I0516 00:39:46.833513  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events: (1.813842ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53244]
I0516 00:39:46.833548  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-27: (954.954µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53242]
I0516 00:39:46.833811  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-27: (2.175545ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53240]
I0516 00:39:46.834369  108888 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
E0516 00:39:46.834634  108888 factory.go:686] pod is already present in the activeQ
I0516 00:39:46.834648  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-28
I0516 00:39:46.834665  108888 scheduler.go:452] Attempting to schedule pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-28
I0516 00:39:46.834764  108888 factory.go:649] Unable to schedule preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-28: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0516 00:39:46.834803  108888 factory.go:720] Updating pod condition for preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-28 to (PodScheduled==False, Reason=Unschedulable)
I0516 00:39:46.836399  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-28: (1.025508ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53242]
I0516 00:39:46.836964  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events: (1.456849ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53246]
I0516 00:39:46.837134  108888 wrap.go:47] PUT /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-28/status: (1.653402ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53244]
I0516 00:39:46.838994  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-28: (1.421431ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53244]
I0516 00:39:46.839389  108888 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0516 00:39:46.839665  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-29
I0516 00:39:46.839684  108888 scheduler.go:452] Attempting to schedule pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-29
I0516 00:39:46.839781  108888 factory.go:649] Unable to schedule preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-29: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0516 00:39:46.839829  108888 factory.go:720] Updating pod condition for preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-29 to (PodScheduled==False, Reason=Unschedulable)
I0516 00:39:46.841385  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-29: (1.208388ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53242]
I0516 00:39:46.842118  108888 wrap.go:47] PUT /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-29/status: (1.994219ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53246]
I0516 00:39:46.844334  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events: (2.715277ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53248]
I0516 00:39:46.844722  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-29: (2.134354ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53246]
I0516 00:39:46.845086  108888 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0516 00:39:46.845304  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-11
I0516 00:39:46.845327  108888 scheduler.go:452] Attempting to schedule pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-11
I0516 00:39:46.845456  108888 factory.go:649] Unable to schedule preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-11: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0516 00:39:46.845565  108888 factory.go:720] Updating pod condition for preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-11 to (PodScheduled==False, Reason=Unschedulable)
I0516 00:39:46.847657  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-11: (1.158674ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53242]
I0516 00:39:46.847972  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-11: (1.473272ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53248]
I0516 00:39:46.848290  108888 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0516 00:39:46.848472  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-30
I0516 00:39:46.848490  108888 scheduler.go:452] Attempting to schedule pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-30
I0516 00:39:46.848605  108888 factory.go:649] Unable to schedule preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-30: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0516 00:39:46.848695  108888 factory.go:720] Updating pod condition for preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-30 to (PodScheduled==False, Reason=Unschedulable)
I0516 00:39:46.849423  108888 wrap.go:47] PATCH /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events/ppod-11.159f02e6d48a25bb: (2.345908ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53250]
I0516 00:39:46.851130  108888 wrap.go:47] PUT /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-30/status: (2.116705ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53248]
I0516 00:39:46.856119  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-30: (4.593228ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53248]
I0516 00:39:46.856886  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-30: (4.860639ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53242]
E0516 00:39:46.857155  108888 factory.go:686] pod is already present in the activeQ
I0516 00:39:46.857191  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events: (7.330387ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53250]
I0516 00:39:46.857768  108888 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0516 00:39:46.858144  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-31
I0516 00:39:46.858191  108888 scheduler.go:452] Attempting to schedule pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-31
I0516 00:39:46.858325  108888 factory.go:649] Unable to schedule preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-31: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0516 00:39:46.858404  108888 factory.go:720] Updating pod condition for preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-31 to (PodScheduled==False, Reason=Unschedulable)
I0516 00:39:46.860895  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-31: (1.66571ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53242]
I0516 00:39:46.860895  108888 wrap.go:47] PUT /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-31/status: (2.155259ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53250]
E0516 00:39:46.861302  108888 factory.go:686] pod is already present in the activeQ
I0516 00:39:46.862030  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events: (1.647822ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53252]
I0516 00:39:46.863968  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-31: (2.632001ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53242]
I0516 00:39:46.865132  108888 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0516 00:39:46.865334  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-32
I0516 00:39:46.865357  108888 scheduler.go:452] Attempting to schedule pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-32
I0516 00:39:46.865468  108888 factory.go:649] Unable to schedule preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-32: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0516 00:39:46.865516  108888 factory.go:720] Updating pod condition for preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-32 to (PodScheduled==False, Reason=Unschedulable)
I0516 00:39:46.867967  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-32: (1.756362ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53250]
I0516 00:39:46.868562  108888 wrap.go:47] PUT /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-32/status: (2.321606ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53252]
I0516 00:39:46.868567  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events: (1.927282ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53254]
I0516 00:39:46.870031  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-32: (1.108755ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53252]
I0516 00:39:46.870264  108888 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0516 00:39:46.870416  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-33
I0516 00:39:46.870448  108888 scheduler.go:452] Attempting to schedule pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-33
I0516 00:39:46.870546  108888 factory.go:649] Unable to schedule preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-33: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0516 00:39:46.870608  108888 factory.go:720] Updating pod condition for preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-33 to (PodScheduled==False, Reason=Unschedulable)
I0516 00:39:46.872269  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-33: (1.276499ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53250]
I0516 00:39:46.872987  108888 wrap.go:47] PUT /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-33/status: (2.153443ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53252]
I0516 00:39:46.873038  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events: (1.713414ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53256]
I0516 00:39:46.875751  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-33: (2.436138ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53252]
I0516 00:39:46.876073  108888 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0516 00:39:46.876341  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-34
I0516 00:39:46.876391  108888 scheduler.go:452] Attempting to schedule pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-34
I0516 00:39:46.876576  108888 factory.go:649] Unable to schedule preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-34: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0516 00:39:46.876626  108888 factory.go:720] Updating pod condition for preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-34 to (PodScheduled==False, Reason=Unschedulable)
I0516 00:39:46.880819  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events: (2.633036ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53258]
I0516 00:39:46.881502  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-34: (3.35393ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53250]
I0516 00:39:46.884382  108888 wrap.go:47] PUT /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-34/status: (7.151882ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53252]
I0516 00:39:46.886133  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-34: (1.137773ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53250]
I0516 00:39:46.886393  108888 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0516 00:39:46.886598  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-35
I0516 00:39:46.886614  108888 scheduler.go:452] Attempting to schedule pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-35
I0516 00:39:46.886744  108888 factory.go:649] Unable to schedule preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-35: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0516 00:39:46.886791  108888 factory.go:720] Updating pod condition for preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-35 to (PodScheduled==False, Reason=Unschedulable)
I0516 00:39:46.888615  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-35: (1.083671ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53258]
I0516 00:39:46.890933  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events: (1.962562ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53258]
I0516 00:39:46.891101  108888 wrap.go:47] PUT /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-35/status: (3.096085ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53250]
I0516 00:39:46.897664  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-35: (2.581936ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53258]
I0516 00:39:46.898014  108888 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0516 00:39:46.898223  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-15
I0516 00:39:46.898282  108888 scheduler.go:452] Attempting to schedule pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-15
I0516 00:39:46.898416  108888 factory.go:649] Unable to schedule preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-15: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0516 00:39:46.898489  108888 factory.go:720] Updating pod condition for preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-15 to (PodScheduled==False, Reason=Unschedulable)
I0516 00:39:46.900745  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-15: (1.689151ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53258]
I0516 00:39:46.900845  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-15: (1.801645ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53260]
I0516 00:39:46.901002  108888 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0516 00:39:46.901076  108888 wrap.go:47] PATCH /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events/ppod-15.159f02e6d81d624a: (1.832462ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53262]
I0516 00:39:46.901159  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-36
I0516 00:39:46.901171  108888 scheduler.go:452] Attempting to schedule pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-36
I0516 00:39:46.901246  108888 factory.go:649] Unable to schedule preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-36: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0516 00:39:46.901279  108888 factory.go:720] Updating pod condition for preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-36 to (PodScheduled==False, Reason=Unschedulable)
I0516 00:39:46.902882  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-36: (1.414919ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53258]
I0516 00:39:46.903153  108888 wrap.go:47] PUT /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-36/status: (1.684925ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53260]
I0516 00:39:46.903607  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events: (1.847156ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53264]
I0516 00:39:46.905256  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-36: (1.528383ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53266]
I0516 00:39:46.905636  108888 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0516 00:39:46.905506  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/preemptor-pod: (1.970448ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53258]
I0516 00:39:46.905863  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-37
I0516 00:39:46.905907  108888 scheduler.go:452] Attempting to schedule pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-37
I0516 00:39:46.906037  108888 factory.go:649] Unable to schedule preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-37: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0516 00:39:46.906081  108888 factory.go:720] Updating pod condition for preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-37 to (PodScheduled==False, Reason=Unschedulable)
I0516 00:39:46.908263  108888 wrap.go:47] PUT /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-37/status: (1.899506ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53264]
I0516 00:39:46.908602  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-37: (2.290411ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53260]
E0516 00:39:46.908796  108888 factory.go:686] pod is already present in the activeQ
I0516 00:39:46.909302  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events: (2.606265ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53268]
I0516 00:39:46.909575  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-37: (906.729µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53264]
I0516 00:39:46.909869  108888 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0516 00:39:46.910045  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-38
I0516 00:39:46.910082  108888 scheduler.go:452] Attempting to schedule pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-38
I0516 00:39:46.910212  108888 factory.go:649] Unable to schedule preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-38: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0516 00:39:46.910271  108888 factory.go:720] Updating pod condition for preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-38 to (PodScheduled==False, Reason=Unschedulable)
I0516 00:39:46.913245  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events: (2.409961ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53270]
I0516 00:39:46.913314  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-38: (2.719764ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53260]
I0516 00:39:46.913602  108888 wrap.go:47] PUT /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-38/status: (3.090413ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53268]
I0516 00:39:46.919097  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-38: (4.270326ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53260]
I0516 00:39:46.919457  108888 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0516 00:39:46.919713  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-39
I0516 00:39:46.919753  108888 scheduler.go:452] Attempting to schedule pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-39
I0516 00:39:46.919948  108888 factory.go:649] Unable to schedule preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-39: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0516 00:39:46.920017  108888 factory.go:720] Updating pod condition for preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-39 to (PodScheduled==False, Reason=Unschedulable)
I0516 00:39:46.923268  108888 wrap.go:47] PUT /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-39/status: (2.955964ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53260]
I0516 00:39:46.923291  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events: (2.608185ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53272]
I0516 00:39:46.923325  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-39: (2.983657ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53270]
I0516 00:39:46.925293  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-39: (1.523822ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53260]
I0516 00:39:46.934398  108888 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0516 00:39:46.934634  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-16
I0516 00:39:46.934672  108888 scheduler.go:452] Attempting to schedule pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-16
I0516 00:39:46.934819  108888 factory.go:649] Unable to schedule preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-16: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0516 00:39:46.934895  108888 factory.go:720] Updating pod condition for preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-16 to (PodScheduled==False, Reason=Unschedulable)
I0516 00:39:46.937071  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-16: (1.661625ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53270]
I0516 00:39:46.937474  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-16: (2.300851ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53260]
I0516 00:39:46.937842  108888 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0516 00:39:46.937950  108888 wrap.go:47] PATCH /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events/ppod-16.159f02e6dcbe8124: (2.139917ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53274]
I0516 00:39:46.938004  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-40
I0516 00:39:46.938018  108888 scheduler.go:452] Attempting to schedule pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-40
I0516 00:39:46.938119  108888 factory.go:649] Unable to schedule preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-40: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0516 00:39:46.938162  108888 factory.go:720] Updating pod condition for preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-40 to (PodScheduled==False, Reason=Unschedulable)
I0516 00:39:46.940280  108888 wrap.go:47] PUT /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-40/status: (1.823347ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53260]
I0516 00:39:46.940312  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events: (1.552741ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53276]
I0516 00:39:46.940873  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-40: (2.481226ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53270]
E0516 00:39:46.941084  108888 factory.go:686] pod is already present in the activeQ
I0516 00:39:46.941783  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-40: (1.122319ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53276]
I0516 00:39:46.942073  108888 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0516 00:39:46.942326  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-41
I0516 00:39:46.942355  108888 scheduler.go:452] Attempting to schedule pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-41
I0516 00:39:46.942502  108888 factory.go:649] Unable to schedule preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-41: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0516 00:39:46.942563  108888 factory.go:720] Updating pod condition for preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-41 to (PodScheduled==False, Reason=Unschedulable)
I0516 00:39:46.944718  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events: (1.722715ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53260]
I0516 00:39:46.945677  108888 wrap.go:47] PUT /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-41/status: (2.827588ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53270]
I0516 00:39:46.948297  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-41: (2.116083ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53270]
I0516 00:39:46.948582  108888 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0516 00:39:46.948794  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-42
I0516 00:39:46.948812  108888 scheduler.go:452] Attempting to schedule pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-42
I0516 00:39:46.948907  108888 factory.go:649] Unable to schedule preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-42: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0516 00:39:46.948961  108888 factory.go:720] Updating pod condition for preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-42 to (PodScheduled==False, Reason=Unschedulable)
I0516 00:39:46.951502  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events: (1.70062ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53280]
I0516 00:39:46.952259  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-42: (2.451658ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53278]
I0516 00:39:46.952384  108888 wrap.go:47] PUT /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-42/status: (3.180636ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53270]
E0516 00:39:46.952486  108888 factory.go:686] pod is already present in the activeQ
I0516 00:39:46.953806  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-42: (919.84µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53278]
I0516 00:39:46.954071  108888 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0516 00:39:46.954297  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-43
I0516 00:39:46.954335  108888 scheduler.go:452] Attempting to schedule pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-43
I0516 00:39:46.954457  108888 factory.go:649] Unable to schedule preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-43: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0516 00:39:46.954544  108888 factory.go:720] Updating pod condition for preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-43 to (PodScheduled==False, Reason=Unschedulable)
I0516 00:39:46.955943  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-41: (2.47175ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53260]
E0516 00:39:46.956267  108888 factory.go:686] pod is already present in the activeQ
I0516 00:39:46.956357  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-43: (1.059659ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53278]
I0516 00:39:46.956830  108888 wrap.go:47] PUT /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-43/status: (1.542922ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53280]
I0516 00:39:46.958630  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-43: (1.342815ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53278]
I0516 00:39:46.958718  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events: (2.554989ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53282]
I0516 00:39:46.958868  108888 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0516 00:39:46.959051  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-44
I0516 00:39:46.959074  108888 scheduler.go:452] Attempting to schedule pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-44
I0516 00:39:46.959192  108888 factory.go:649] Unable to schedule preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-44: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0516 00:39:46.959243  108888 factory.go:720] Updating pod condition for preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-44 to (PodScheduled==False, Reason=Unschedulable)
I0516 00:39:46.961504  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events: (1.640907ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53284]
I0516 00:39:46.961524  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-44: (1.749466ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53260]
I0516 00:39:46.961702  108888 wrap.go:47] PUT /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-44/status: (1.873991ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53278]
I0516 00:39:46.963383  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-44: (1.272292ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53260]
I0516 00:39:46.963643  108888 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0516 00:39:46.964002  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-45
I0516 00:39:46.964025  108888 scheduler.go:452] Attempting to schedule pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-45
I0516 00:39:46.964177  108888 factory.go:649] Unable to schedule preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-45: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0516 00:39:46.964237  108888 factory.go:720] Updating pod condition for preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-45 to (PodScheduled==False, Reason=Unschedulable)
I0516 00:39:46.967663  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-45: (3.153657ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53284]
I0516 00:39:46.967781  108888 wrap.go:47] PUT /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-45/status: (3.30729ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53260]
I0516 00:39:46.968172  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events: (3.259291ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53286]
I0516 00:39:46.969494  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-45: (1.180185ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53260]
I0516 00:39:46.969791  108888 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0516 00:39:46.970039  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-46
I0516 00:39:46.970059  108888 scheduler.go:452] Attempting to schedule pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-46
I0516 00:39:46.970149  108888 factory.go:649] Unable to schedule preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-46: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0516 00:39:46.970196  108888 factory.go:720] Updating pod condition for preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-46 to (PodScheduled==False, Reason=Unschedulable)
I0516 00:39:46.971946  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-46: (1.251556ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53284]
I0516 00:39:46.972426  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events: (1.622694ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53288]
I0516 00:39:46.972499  108888 wrap.go:47] PUT /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-46/status: (1.730928ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53286]
I0516 00:39:46.973968  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-46: (995.389µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53288]
I0516 00:39:46.974319  108888 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0516 00:39:46.974556  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-21
I0516 00:39:46.974575  108888 scheduler.go:452] Attempting to schedule pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-21
I0516 00:39:46.974681  108888 factory.go:649] Unable to schedule preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-21: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0516 00:39:46.974725  108888 factory.go:720] Updating pod condition for preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-21 to (PodScheduled==False, Reason=Unschedulable)
I0516 00:39:46.976075  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-21: (1.089557ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53284]
I0516 00:39:46.976105  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-21: (1.211711ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53288]
I0516 00:39:46.976334  108888 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0516 00:39:46.976541  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-47
I0516 00:39:46.976570  108888 scheduler.go:452] Attempting to schedule pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-47
I0516 00:39:46.976709  108888 factory.go:649] Unable to schedule preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-47: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0516 00:39:46.976754  108888 factory.go:720] Updating pod condition for preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-47 to (PodScheduled==False, Reason=Unschedulable)
I0516 00:39:46.977241  108888 wrap.go:47] PATCH /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events/ppod-21.159f02e6e274bbdd: (1.76203ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53290]
I0516 00:39:46.978106  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-47: (1.110771ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53288]
I0516 00:39:46.981892  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events: (4.072933ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53290]
I0516 00:39:46.982151  108888 wrap.go:47] PUT /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-47/status: (5.104746ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53284]
I0516 00:39:46.984251  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-47: (1.554062ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53290]
I0516 00:39:46.984564  108888 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0516 00:39:46.984744  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-22
I0516 00:39:46.984789  108888 scheduler.go:452] Attempting to schedule pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-22
I0516 00:39:46.984943  108888 factory.go:649] Unable to schedule preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-22: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0516 00:39:46.984997  108888 factory.go:720] Updating pod condition for preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-22 to (PodScheduled==False, Reason=Unschedulable)
I0516 00:39:46.986523  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-22: (1.148414ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53290]
I0516 00:39:46.986703  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-22: (1.403802ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53288]
I0516 00:39:46.987146  108888 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0516 00:39:46.987303  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-48
I0516 00:39:46.987319  108888 scheduler.go:452] Attempting to schedule pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-48
I0516 00:39:46.987401  108888 factory.go:649] Unable to schedule preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-48: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0516 00:39:46.987442  108888 factory.go:720] Updating pod condition for preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-48 to (PodScheduled==False, Reason=Unschedulable)
I0516 00:39:46.987457  108888 wrap.go:47] PATCH /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events/ppod-22.159f02e6e418dec0: (1.745857ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53292]
I0516 00:39:46.989240  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events: (1.383321ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53288]
I0516 00:39:46.989244  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-48: (1.509736ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53292]
I0516 00:39:46.990484  108888 wrap.go:47] PUT /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-48/status: (2.001585ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53290]
I0516 00:39:46.992152  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-48: (1.071549ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53292]
I0516 00:39:46.992417  108888 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0516 00:39:46.992595  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-49
I0516 00:39:46.992614  108888 scheduler.go:452] Attempting to schedule pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-49
I0516 00:39:46.992699  108888 factory.go:649] Unable to schedule preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-49: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0516 00:39:46.992740  108888 factory.go:720] Updating pod condition for preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-49 to (PodScheduled==False, Reason=Unschedulable)
I0516 00:39:46.995214  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events: (1.999558ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53288]
I0516 00:39:46.995275  108888 wrap.go:47] PUT /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-49/status: (2.276428ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53292]
I0516 00:39:46.995283  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-49: (1.445322ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53294]
I0516 00:39:46.996619  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-49: (921.582µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53294]
I0516 00:39:46.996836  108888 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0516 00:39:46.997002  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-26
I0516 00:39:46.997019  108888 scheduler.go:452] Attempting to schedule pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-26
I0516 00:39:46.997098  108888 factory.go:649] Unable to schedule preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-26: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0516 00:39:46.997143  108888 factory.go:720] Updating pod condition for preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-26 to (PodScheduled==False, Reason=Unschedulable)
I0516 00:39:46.998398  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-26: (1.048248ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53294]
I0516 00:39:46.998474  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-26: (915.788µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53288]
I0516 00:39:46.998843  108888 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0516 00:39:46.999027  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-27
I0516 00:39:46.999045  108888 scheduler.go:452] Attempting to schedule pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-27
I0516 00:39:46.999149  108888 factory.go:649] Unable to schedule preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-27: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0516 00:39:46.999194  108888 factory.go:720] Updating pod condition for preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-27 to (PodScheduled==False, Reason=Unschedulable)
I0516 00:39:47.000156  108888 wrap.go:47] PATCH /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events/ppod-26.159f02e6e65481e1: (2.001094ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53296]
I0516 00:39:47.000568  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-27: (1.17944ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53294]
I0516 00:39:47.000645  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-27: (1.195054ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53288]
I0516 00:39:47.000785  108888 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0516 00:39:47.000946  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-30
I0516 00:39:47.000965  108888 scheduler.go:452] Attempting to schedule pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-30
I0516 00:39:47.001055  108888 factory.go:649] Unable to schedule preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-30: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0516 00:39:47.001100  108888 factory.go:720] Updating pod condition for preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-30 to (PodScheduled==False, Reason=Unschedulable)
I0516 00:39:47.002180  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-30: (922.081µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53288]
I0516 00:39:47.002430  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-30: (1.154191ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53294]
I0516 00:39:47.002480  108888 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0516 00:39:47.002482  108888 wrap.go:47] PATCH /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events/ppod-27.159f02e6e6ad8dd2: (1.762712ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53296]
I0516 00:39:47.002764  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-31
I0516 00:39:47.002786  108888 scheduler.go:452] Attempting to schedule pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-31
I0516 00:39:47.002901  108888 factory.go:649] Unable to schedule preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-31: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0516 00:39:47.002967  108888 factory.go:720] Updating pod condition for preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-31 to (PodScheduled==False, Reason=Unschedulable)
I0516 00:39:47.004201  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-31: (1.076921ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53288]
I0516 00:39:47.004391  108888 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0516 00:39:47.004559  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-37
I0516 00:39:47.004603  108888 scheduler.go:452] Attempting to schedule pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-37
I0516 00:39:47.004686  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-31: (1.342061ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53298]
I0516 00:39:47.004771  108888 factory.go:649] Unable to schedule preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-37: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0516 00:39:47.004834  108888 wrap.go:47] PATCH /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events/ppod-30.159f02e6e7d480c4: (1.781354ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53296]
I0516 00:39:47.004845  108888 factory.go:720] Updating pod condition for preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-37 to (PodScheduled==False, Reason=Unschedulable)
I0516 00:39:47.006147  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-37: (1.070886ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53294]
I0516 00:39:47.006258  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-37: (1.185134ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53288]
I0516 00:39:47.006376  108888 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0516 00:39:47.006619  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-40
I0516 00:39:47.006642  108888 scheduler.go:452] Attempting to schedule pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-40
I0516 00:39:47.006735  108888 factory.go:649] Unable to schedule preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-40: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0516 00:39:47.006778  108888 factory.go:720] Updating pod condition for preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-40 to (PodScheduled==False, Reason=Unschedulable)
I0516 00:39:47.007454  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/preemptor-pod: (1.022746ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53294]
I0516 00:39:47.007942  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-40: (975.325µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53302]
I0516 00:39:47.008135  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-40: (1.113824ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53288]
I0516 00:39:47.008179  108888 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0516 00:39:47.008310  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-41
I0516 00:39:47.008324  108888 scheduler.go:452] Attempting to schedule pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-41
I0516 00:39:47.008436  108888 factory.go:649] Unable to schedule preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-41: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0516 00:39:47.008477  108888 factory.go:720] Updating pod condition for preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-41 to (PodScheduled==False, Reason=Unschedulable)
I0516 00:39:47.011841  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-41: (3.012914ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53294]
I0516 00:39:47.011875  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-41: (3.224918ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53302]
I0516 00:39:47.011963  108888 wrap.go:47] PATCH /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events/ppod-31.159f02e6e868b4c5: (6.402898ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53300]
I0516 00:39:47.012108  108888 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0516 00:39:47.012653  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-42
I0516 00:39:47.012669  108888 scheduler.go:452] Attempting to schedule pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-42
I0516 00:39:47.012771  108888 factory.go:649] Unable to schedule preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-42: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0516 00:39:47.012810  108888 factory.go:720] Updating pod condition for preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-42 to (PodScheduled==False, Reason=Unschedulable)
I0516 00:39:47.014418  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-42: (1.318986ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53294]
I0516 00:39:47.014490  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-42: (1.372631ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53302]
I0516 00:39:47.014719  108888 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0516 00:39:47.014829  108888 wrap.go:47] PATCH /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events/ppod-37.159f02e6eb402d6d: (1.479081ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53304]
I0516 00:39:47.016846  108888 wrap.go:47] PATCH /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events/ppod-40.159f02e6ed29ae51: (1.507289ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53302]
I0516 00:39:47.019071  108888 wrap.go:47] PATCH /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events/ppod-41.159f02e6ed6ce44f: (1.520451ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53302]
I0516 00:39:47.021301  108888 wrap.go:47] PATCH /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events/ppod-42.159f02e6edce85d4: (1.68032ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53302]
I0516 00:39:47.108317  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/preemptor-pod: (1.794879ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53302]
I0516 00:39:47.175053  108888 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0516 00:39:47.175062  108888 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0516 00:39:47.176300  108888 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0516 00:39:47.176397  108888 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0516 00:39:47.177309  108888 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0516 00:39:47.208589  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/preemptor-pod: (2.014916ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53302]
I0516 00:39:47.308605  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/preemptor-pod: (1.949574ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53302]
I0516 00:39:47.410080  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/preemptor-pod: (1.674066ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53302]
I0516 00:39:47.512387  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/preemptor-pod: (3.760346ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53302]
I0516 00:39:47.608590  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/preemptor-pod: (2.014958ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53302]
I0516 00:39:47.708299  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/preemptor-pod: (1.768799ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53302]
I0516 00:39:47.814004  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/preemptor-pod: (1.954728ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53302]
I0516 00:39:47.908507  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/preemptor-pod: (1.94425ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53302]
I0516 00:39:48.008544  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/preemptor-pod: (1.973603ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53302]
I0516 00:39:48.071467  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/preemptor-pod
I0516 00:39:48.071506  108888 scheduler.go:452] Attempting to schedule pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/preemptor-pod
I0516 00:39:48.071715  108888 scheduler_binder.go:256] AssumePodVolumes for pod "preemption-race6285e400-528e-4217-8497-a7dfe793aee9/preemptor-pod", node "node1"
I0516 00:39:48.071735  108888 scheduler_binder.go:266] AssumePodVolumes for pod "preemption-race6285e400-528e-4217-8497-a7dfe793aee9/preemptor-pod", node "node1": all PVCs bound and nothing to do
I0516 00:39:48.071790  108888 factory.go:711] Attempting to bind preemptor-pod to node1
I0516 00:39:48.071836  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-0
I0516 00:39:48.071866  108888 scheduler.go:452] Attempting to schedule pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-0
I0516 00:39:48.072039  108888 factory.go:649] Unable to schedule preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-0: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0516 00:39:48.072097  108888 factory.go:720] Updating pod condition for preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-0 to (PodScheduled==False, Reason=Unschedulable)
I0516 00:39:48.073939  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-0: (1.432965ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53294]
I0516 00:39:48.073941  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-0: (1.328857ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53340]
I0516 00:39:48.074182  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/preemptor-pod/binding: (2.044725ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53302]
I0516 00:39:48.074331  108888 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0516 00:39:48.074399  108888 scheduler.go:589] pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/preemptor-pod is bound successfully on node node1, 1 nodes evaluated, 1 nodes were found feasible
I0516 00:39:48.074475  108888 wrap.go:47] PATCH /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events/ppod-0.159f02e6cdd73f08: (1.64921ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53342]
I0516 00:39:48.074516  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-2
I0516 00:39:48.074541  108888 scheduler.go:452] Attempting to schedule pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-2
I0516 00:39:48.074640  108888 factory.go:649] Unable to schedule preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-2: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0516 00:39:48.074672  108888 factory.go:720] Updating pod condition for preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-2 to (PodScheduled==False, Reason=Unschedulable)
I0516 00:39:48.076328  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-2: (1.456486ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53340]
I0516 00:39:48.076583  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-2: (1.757181ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53294]
I0516 00:39:48.076763  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events: (1.581123ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53344]
I0516 00:39:48.076772  108888 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0516 00:39:48.076903  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-3
I0516 00:39:48.076932  108888 scheduler.go:452] Attempting to schedule pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-3
I0516 00:39:48.077000  108888 factory.go:649] Unable to schedule preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-3: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0516 00:39:48.077037  108888 factory.go:720] Updating pod condition for preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-3 to (PodScheduled==False, Reason=Unschedulable)
I0516 00:39:48.078136  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-3: (955.628µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53294]
I0516 00:39:48.078145  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-3: (914.209µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53340]
I0516 00:39:48.078515  108888 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0516 00:39:48.078670  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-4
I0516 00:39:48.078686  108888 scheduler.go:452] Attempting to schedule pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-4
I0516 00:39:48.078794  108888 factory.go:649] Unable to schedule preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-4: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0516 00:39:48.078830  108888 factory.go:720] Updating pod condition for preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-4 to (PodScheduled==False, Reason=Unschedulable)
I0516 00:39:48.079038  108888 wrap.go:47] PATCH /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events/ppod-2.159f02e6d028670d: (1.509646ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53346]
I0516 00:39:48.080179  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-4: (1.037544ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53340]
I0516 00:39:48.080179  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-4: (1.136865ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53294]
I0516 00:39:48.080413  108888 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0516 00:39:48.080574  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-5
I0516 00:39:48.080593  108888 scheduler.go:452] Attempting to schedule pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-5
I0516 00:39:48.080693  108888 factory.go:649] Unable to schedule preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-5: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0516 00:39:48.080735  108888 factory.go:720] Updating pod condition for preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-5 to (PodScheduled==False, Reason=Unschedulable)
I0516 00:39:48.081875  108888 wrap.go:47] PATCH /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events/ppod-3.159f02e6d0dd0ba8: (1.903343ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53346]
I0516 00:39:48.082526  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-5: (1.137563ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53294]
I0516 00:39:48.082568  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-5: (1.192457ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53340]
I0516 00:39:48.082736  108888 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0516 00:39:48.082943  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-1
I0516 00:39:48.082962  108888 scheduler.go:452] Attempting to schedule pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-1
I0516 00:39:48.083047  108888 factory.go:649] Unable to schedule preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-1: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0516 00:39:48.083099  108888 factory.go:720] Updating pod condition for preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-1 to (PodScheduled==False, Reason=Unschedulable)
I0516 00:39:48.084168  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-1: (891.61µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53340]
I0516 00:39:48.084381  108888 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0516 00:39:48.084450  108888 wrap.go:47] PATCH /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events/ppod-4.159f02e6d1893aff: (1.452927ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53294]
I0516 00:39:48.084500  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-6
I0516 00:39:48.084506  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-1: (1.180871ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53346]
I0516 00:39:48.084514  108888 scheduler.go:452] Attempting to schedule pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-6
I0516 00:39:48.084601  108888 factory.go:649] Unable to schedule preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-6: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0516 00:39:48.084659  108888 factory.go:720] Updating pod condition for preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-6 to (PodScheduled==False, Reason=Unschedulable)
I0516 00:39:48.085909  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-6: (1.002073ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53294]
I0516 00:39:48.086146  108888 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0516 00:39:48.086167  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-6: (1.250081ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53340]
I0516 00:39:48.086506  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-7
I0516 00:39:48.086520  108888 scheduler.go:452] Attempting to schedule pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-7
I0516 00:39:48.086630  108888 factory.go:649] Unable to schedule preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-7: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0516 00:39:48.086674  108888 factory.go:720] Updating pod condition for preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-7 to (PodScheduled==False, Reason=Unschedulable)
I0516 00:39:48.086765  108888 wrap.go:47] PATCH /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events/ppod-5.159f02e6d255e13a: (1.554924ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53348]
I0516 00:39:48.088042  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-7: (1.167484ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53340]
I0516 00:39:48.088303  108888 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0516 00:39:48.088601  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-7: (1.14236ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53348]
I0516 00:39:48.088722  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-8
I0516 00:39:48.088740  108888 scheduler.go:452] Attempting to schedule pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-8
I0516 00:39:48.088837  108888 factory.go:649] Unable to schedule preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-8: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0516 00:39:48.088901  108888 factory.go:720] Updating pod condition for preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-8 to (PodScheduled==False, Reason=Unschedulable)
I0516 00:39:48.088957  108888 wrap.go:47] PATCH /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events/ppod-1.159f02e6ce3742c5: (1.527204ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53294]
I0516 00:39:48.090131  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-8: (936.003µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53294]
I0516 00:39:48.090133  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-8: (1.020611ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53348]
I0516 00:39:48.090581  108888 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0516 00:39:48.090763  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-9
I0516 00:39:48.090782  108888 scheduler.go:452] Attempting to schedule pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-9
I0516 00:39:48.090849  108888 factory.go:649] Unable to schedule preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-9: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0516 00:39:48.090889  108888 factory.go:720] Updating pod condition for preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-9 to (PodScheduled==False, Reason=Unschedulable)
I0516 00:39:48.091384  108888 wrap.go:47] PATCH /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events/ppod-6.159f02e6d2ec9830: (1.750148ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53340]
I0516 00:39:48.093003  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-9: (1.414559ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53294]
I0516 00:39:48.093241  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-9: (2.088497ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53348]
I0516 00:39:48.093262  108888 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0516 00:39:48.093403  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-10
I0516 00:39:48.093421  108888 scheduler.go:452] Attempting to schedule pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-10
I0516 00:39:48.093500  108888 factory.go:649] Unable to schedule preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-10: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0516 00:39:48.093549  108888 factory.go:720] Updating pod condition for preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-10 to (PodScheduled==False, Reason=Unschedulable)
I0516 00:39:48.094633  108888 wrap.go:47] PATCH /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events/ppod-7.159f02e6d341e11d: (1.8778ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53340]
I0516 00:39:48.094699  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-10: (970.38µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53294]
I0516 00:39:48.094943  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-10: (1.249611ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53348]
I0516 00:39:48.096123  108888 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0516 00:39:48.096309  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-12
I0516 00:39:48.096325  108888 scheduler.go:452] Attempting to schedule pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-12
I0516 00:39:48.096408  108888 factory.go:649] Unable to schedule preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-12: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0516 00:39:48.096448  108888 factory.go:720] Updating pod condition for preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-12 to (PodScheduled==False, Reason=Unschedulable)
I0516 00:39:48.097666  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-12: (980.696µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53340]
I0516 00:39:48.097910  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-12: (1.311619ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53294]
I0516 00:39:48.098762  108888 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0516 00:39:48.099078  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-13
I0516 00:39:48.099134  108888 scheduler.go:452] Attempting to schedule pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-13
I0516 00:39:48.099236  108888 factory.go:649] Unable to schedule preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-13: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0516 00:39:48.099279  108888 factory.go:720] Updating pod condition for preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-13 to (PodScheduled==False, Reason=Unschedulable)
I0516 00:39:48.100156  108888 wrap.go:47] PATCH /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events/ppod-8.159f02e6d380c6ee: (2.08333ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53350]
I0516 00:39:48.100965  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-13: (1.523289ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53340]
I0516 00:39:48.100978  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-13: (1.468264ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53294]
I0516 00:39:48.101183  108888 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0516 00:39:48.101324  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-14
I0516 00:39:48.101348  108888 scheduler.go:452] Attempting to schedule pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-14
I0516 00:39:48.101432  108888 factory.go:649] Unable to schedule preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-14: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0516 00:39:48.101469  108888 factory.go:720] Updating pod condition for preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-14 to (PodScheduled==False, Reason=Unschedulable)
I0516 00:39:48.102548  108888 wrap.go:47] PATCH /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events/ppod-9.159f02e6d3d3701a: (1.792531ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53350]
I0516 00:39:48.102568  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-14: (901.833µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53294]
I0516 00:39:48.102707  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-14: (1.082473ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53340]
I0516 00:39:48.102938  108888 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0516 00:39:48.103118  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-17
I0516 00:39:48.103136  108888 scheduler.go:452] Attempting to schedule pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-17
I0516 00:39:48.103223  108888 factory.go:649] Unable to schedule preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-17: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0516 00:39:48.103261  108888 factory.go:720] Updating pod condition for preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-17 to (PodScheduled==False, Reason=Unschedulable)
I0516 00:39:48.104702  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-17: (1.070961ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53350]
I0516 00:39:48.104988  108888 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0516 00:39:48.105102  108888 wrap.go:47] PATCH /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events/ppod-10.159f02e6d422a496: (1.458899ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53340]
I0516 00:39:48.105151  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-18
I0516 00:39:48.105174  108888 scheduler.go:452] Attempting to schedule pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-18
I0516 00:39:48.105111  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-17: (1.404608ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53352]
I0516 00:39:48.105327  108888 factory.go:649] Unable to schedule preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-18: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0516 00:39:48.105403  108888 factory.go:720] Updating pod condition for preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-18 to (PodScheduled==False, Reason=Unschedulable)
I0516 00:39:48.106737  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-18: (1.107292ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53350]
I0516 00:39:48.107784  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-18: (2.198565ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53340]
I0516 00:39:48.108691  108888 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0516 00:39:48.108757  108888 wrap.go:47] PATCH /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events/ppod-12.159f02e6d625ed78: (2.727259ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53354]
I0516 00:39:48.108902  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-19
I0516 00:39:48.109442  108888 scheduler.go:452] Attempting to schedule pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-19
I0516 00:39:48.109569  108888 factory.go:649] Unable to schedule preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-19: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0516 00:39:48.109622  108888 factory.go:720] Updating pod condition for preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-19 to (PodScheduled==False, Reason=Unschedulable)
I0516 00:39:48.110213  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/preemptor-pod: (1.654401ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53356]
I0516 00:39:48.110800  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-19: (960.229µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53340]
I0516 00:39:48.110928  108888 preemption_test.go:583] Check unschedulable pods still exists and were never scheduled...
I0516 00:39:48.111158  108888 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0516 00:39:48.111333  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-20
I0516 00:39:48.111380  108888 scheduler.go:452] Attempting to schedule pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-20
I0516 00:39:48.111432  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-19: (1.219181ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53358]
I0516 00:39:48.111493  108888 factory.go:649] Unable to schedule preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-20: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0516 00:39:48.111560  108888 factory.go:720] Updating pod condition for preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-20 to (PodScheduled==False, Reason=Unschedulable)
I0516 00:39:48.112055  108888 wrap.go:47] PATCH /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events/ppod-13.159f02e6d69f34f8: (2.078866ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53350]
I0516 00:39:48.112201  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-0: (1.051312ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53340]
I0516 00:39:48.113742  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-20: (1.646831ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53356]
I0516 00:39:48.113821  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-20: (1.402422ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53350]
I0516 00:39:48.114050  108888 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0516 00:39:48.114222  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-23
I0516 00:39:48.114274  108888 scheduler.go:452] Attempting to schedule pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-23
I0516 00:39:48.114366  108888 factory.go:649] Unable to schedule preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-23: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0516 00:39:48.114422  108888 factory.go:720] Updating pod condition for preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-23 to (PodScheduled==False, Reason=Unschedulable)
I0516 00:39:48.114670  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-1: (1.753269ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53358]
I0516 00:39:48.114693  108888 wrap.go:47] PATCH /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events/ppod-14.159f02e6d70de08a: (1.999259ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53340]
I0516 00:39:48.115769  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-23: (1.017726ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53360]
I0516 00:39:48.115784  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-23: (1.156061ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53356]
I0516 00:39:48.116134  108888 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0516 00:39:48.116271  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-24
I0516 00:39:48.116287  108888 scheduler.go:452] Attempting to schedule pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-24
I0516 00:39:48.116388  108888 factory.go:649] Unable to schedule preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-24: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0516 00:39:48.116434  108888 factory.go:720] Updating pod condition for preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-24 to (PodScheduled==False, Reason=Unschedulable)
I0516 00:39:48.117075  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-2: (1.97509ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53358]
I0516 00:39:48.117157  108888 wrap.go:47] PATCH /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events/ppod-17.159f02e6deb59e0c: (1.662252ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53340]
I0516 00:39:48.117904  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-24: (1.089623ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53360]
I0516 00:39:48.119366  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-24: (2.545426ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53356]
I0516 00:39:48.119655  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-3: (1.981381ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53358]
I0516 00:39:48.119658  108888 wrap.go:47] PATCH /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events/ppod-18.159f02e6dff67015: (1.803067ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53340]
I0516 00:39:48.119770  108888 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0516 00:39:48.119893  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-25
I0516 00:39:48.119904  108888 scheduler.go:452] Attempting to schedule pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-25
I0516 00:39:48.119997  108888 factory.go:649] Unable to schedule preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-25: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0516 00:39:48.120037  108888 factory.go:720] Updating pod condition for preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-25 to (PodScheduled==False, Reason=Unschedulable)
I0516 00:39:48.122178  108888 wrap.go:47] PATCH /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events/ppod-19.159f02e6e0c17035: (1.660478ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53362]
I0516 00:39:48.122365  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-25: (1.886623ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53360]
I0516 00:39:48.122631  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-4: (2.492461ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53356]
I0516 00:39:48.122737  108888 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0516 00:39:48.122772  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-25: (2.295109ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53364]
I0516 00:39:48.122948  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-28
I0516 00:39:48.122968  108888 scheduler.go:452] Attempting to schedule pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-28
I0516 00:39:48.123052  108888 factory.go:649] Unable to schedule preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-28: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0516 00:39:48.123114  108888 factory.go:720] Updating pod condition for preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-28 to (PodScheduled==False, Reason=Unschedulable)
I0516 00:39:48.124439  108888 wrap.go:47] PATCH /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events/ppod-20.159f02e6e1d17124: (1.74433ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53360]
I0516 00:39:48.125577  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-5: (2.01647ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53356]
I0516 00:39:48.125986  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-28: (2.413734ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53362]
I0516 00:39:48.125990  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-28: (2.01413ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53364]
I0516 00:39:48.126197  108888 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0516 00:39:48.127049  108888 wrap.go:47] PATCH /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events/ppod-23.159f02e6e4de40f4: (1.742757ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53360]
I0516 00:39:48.127150  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-29
I0516 00:39:48.127170  108888 scheduler.go:452] Attempting to schedule pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-29
I0516 00:39:48.127175  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-6: (1.168915ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53356]
I0516 00:39:48.127271  108888 factory.go:649] Unable to schedule preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-29: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0516 00:39:48.127309  108888 factory.go:720] Updating pod condition for preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-29 to (PodScheduled==False, Reason=Unschedulable)
I0516 00:39:48.129079  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-7: (1.033083ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53360]
I0516 00:39:48.129112  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-29: (1.162013ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53362]
I0516 00:39:48.129420  108888 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0516 00:39:48.129472  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-29: (1.482072ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53366]
I0516 00:39:48.129636  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-11
I0516 00:39:48.129667  108888 scheduler.go:452] Attempting to schedule pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-11
I0516 00:39:48.129760  108888 factory.go:649] Unable to schedule preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-11: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0516 00:39:48.129887  108888 factory.go:720] Updating pod condition for preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-11 to (PodScheduled==False, Reason=Unschedulable)
I0516 00:39:48.131027  108888 wrap.go:47] PATCH /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events/ppod-24.159f02e6e56751f8: (1.784767ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53356]
I0516 00:39:48.133235  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-8: (1.276339ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53362]
I0516 00:39:48.133508  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-11: (3.16964ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53360]
I0516 00:39:48.133749  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-11: (3.446599ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53366]
I0516 00:39:48.134401  108888 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0516 00:39:48.134562  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-32
I0516 00:39:48.134581  108888 scheduler.go:452] Attempting to schedule pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-32
I0516 00:39:48.134679  108888 factory.go:649] Unable to schedule preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-32: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0516 00:39:48.134723  108888 factory.go:720] Updating pod condition for preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-32 to (PodScheduled==False, Reason=Unschedulable)
I0516 00:39:48.135602  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-9: (1.119111ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53360]
I0516 00:39:48.135887  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-32: (985.29µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53362]
I0516 00:39:48.136139  108888 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0516 00:39:48.136215  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-32: (955.569µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53368]
I0516 00:39:48.136340  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-33
I0516 00:39:48.136357  108888 scheduler.go:452] Attempting to schedule pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-33
I0516 00:39:48.136460  108888 factory.go:649] Unable to schedule preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-33: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0516 00:39:48.136512  108888 factory.go:720] Updating pod condition for preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-33 to (PodScheduled==False, Reason=Unschedulable)
I0516 00:39:48.137144  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-10: (1.139072ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53360]
I0516 00:39:48.137271  108888 wrap.go:47] PATCH /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events/ppod-25.159f02e6e5f5988c: (3.443075ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53356]
I0516 00:39:48.138083  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-33: (949.376µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53368]
I0516 00:39:48.138481  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-11: (1.040201ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53360]
I0516 00:39:48.139227  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-33: (1.470947ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53362]
I0516 00:39:48.139422  108888 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0516 00:39:48.139593  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-34
I0516 00:39:48.139616  108888 scheduler.go:452] Attempting to schedule pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-34
I0516 00:39:48.139694  108888 factory.go:649] Unable to schedule preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-34: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0516 00:39:48.139730  108888 factory.go:720] Updating pod condition for preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-34 to (PodScheduled==False, Reason=Unschedulable)
I0516 00:39:48.139858  108888 wrap.go:47] PATCH /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events/ppod-28.159f02e6e7009baf: (2.016939ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53370]
I0516 00:39:48.140795  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-34: (920.059µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53362]
I0516 00:39:48.141066  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-12: (1.755489ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53368]
I0516 00:39:48.141119  108888 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0516 00:39:48.141279  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-35
I0516 00:39:48.141298  108888 scheduler.go:452] Attempting to schedule pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-35
I0516 00:39:48.141378  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-34: (1.433557ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53356]
I0516 00:39:48.141381  108888 factory.go:649] Unable to schedule preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-35: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0516 00:39:48.141563  108888 factory.go:720] Updating pod condition for preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-35 to (PodScheduled==False, Reason=Unschedulable)
I0516 00:39:48.142218  108888 wrap.go:47] PATCH /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events/ppod-29.159f02e6e74d4c51: (1.731319ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53370]
I0516 00:39:48.143275  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-35: (1.481384ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53362]
I0516 00:39:48.143373  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-13: (1.335385ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53368]
I0516 00:39:48.143453  108888 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0516 00:39:48.143542  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-35: (1.480995ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53356]
I0516 00:39:48.143753  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-15
I0516 00:39:48.143770  108888 scheduler.go:452] Attempting to schedule pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-15
I0516 00:39:48.143847  108888 factory.go:649] Unable to schedule preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-15: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0516 00:39:48.143886  108888 factory.go:720] Updating pod condition for preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-15 to (PodScheduled==False, Reason=Unschedulable)
I0516 00:39:48.144714  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-14: (954.261µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53362]
I0516 00:39:48.145362  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-15: (994.141µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53374]
I0516 00:39:48.145388  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-15: (1.252564ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53372]
I0516 00:39:48.145578  108888 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0516 00:39:48.145693  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-36
I0516 00:39:48.145732  108888 scheduler.go:452] Attempting to schedule pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-36
I0516 00:39:48.145816  108888 factory.go:649] Unable to schedule preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-36: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0516 00:39:48.145861  108888 factory.go:720] Updating pod condition for preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-36 to (PodScheduled==False, Reason=Unschedulable)
I0516 00:39:48.146760  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-15: (1.559981ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53362]
I0516 00:39:48.146887  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-36: (875.475µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53372]
I0516 00:39:48.147045  108888 wrap.go:47] PATCH /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events/ppod-11.159f02e6d48a25bb: (3.650647ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53370]
I0516 00:39:48.147083  108888 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0516 00:39:48.147146  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-36: (1.012905ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53374]
I0516 00:39:48.147713  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-38
I0516 00:39:48.147739  108888 scheduler.go:452] Attempting to schedule pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-38
I0516 00:39:48.147817  108888 factory.go:649] Unable to schedule preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-38: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0516 00:39:48.147854  108888 factory.go:720] Updating pod condition for preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-38 to (PodScheduled==False, Reason=Unschedulable)
I0516 00:39:48.149758  108888 wrap.go:47] PATCH /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events/ppod-32.159f02e6e8d540d7: (2.187959ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53374]
I0516 00:39:48.150153  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-38: (2.150241ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53362]
I0516 00:39:48.150217  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-16: (2.825249ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53372]
I0516 00:39:48.150395  108888 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0516 00:39:48.150438  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-38: (2.137306ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53376]
I0516 00:39:48.150677  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-39
I0516 00:39:48.150694  108888 scheduler.go:452] Attempting to schedule pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-39
I0516 00:39:48.150809  108888 factory.go:649] Unable to schedule preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-39: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0516 00:39:48.150849  108888 factory.go:720] Updating pod condition for preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-39 to (PodScheduled==False, Reason=Unschedulable)
I0516 00:39:48.151610  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-17: (1.070901ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53372]
I0516 00:39:48.152567  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-39: (1.497326ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53362]
I0516 00:39:48.152645  108888 wrap.go:47] PATCH /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events/ppod-33.159f02e6e922f19e: (2.313442ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53374]
I0516 00:39:48.152736  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-39: (1.703061ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53376]
I0516 00:39:48.152985  108888 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0516 00:39:48.153147  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-16
I0516 00:39:48.153171  108888 scheduler.go:452] Attempting to schedule pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-16
I0516 00:39:48.153251  108888 factory.go:649] Unable to schedule preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-16: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0516 00:39:48.153290  108888 factory.go:720] Updating pod condition for preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-16 to (PodScheduled==False, Reason=Unschedulable)
I0516 00:39:48.153956  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-18: (1.374088ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53372]
I0516 00:39:48.155210  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-16: (1.512607ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53362]
I0516 00:39:48.155633  108888 wrap.go:47] PATCH /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events/ppod-34.159f02e6e97eb8d7: (1.912752ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53374]
I0516 00:39:48.155733  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-19: (1.16381ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53378]
I0516 00:39:48.155734  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-16: (1.362984ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53372]
I0516 00:39:48.156604  108888 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0516 00:39:48.156879  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-43
I0516 00:39:48.156897  108888 scheduler.go:452] Attempting to schedule pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-43
I0516 00:39:48.156979  108888 factory.go:649] Unable to schedule preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-43: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0516 00:39:48.157017  108888 factory.go:720] Updating pod condition for preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-43 to (PodScheduled==False, Reason=Unschedulable)
I0516 00:39:48.157173  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-20: (1.020637ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53362]
I0516 00:39:48.158707  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-43: (1.339201ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53382]
I0516 00:39:48.158725  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-43: (1.306809ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53384]
I0516 00:39:48.158938  108888 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0516 00:39:48.159105  108888 wrap.go:47] PATCH /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events/ppod-35.159f02e6ea19dec8: (2.512131ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53380]
I0516 00:39:48.159113  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-44
I0516 00:39:48.159129  108888 scheduler.go:452] Attempting to schedule pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-44
I0516 00:39:48.159223  108888 factory.go:649] Unable to schedule preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-44: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0516 00:39:48.159262  108888 factory.go:720] Updating pod condition for preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-44 to (PodScheduled==False, Reason=Unschedulable)
I0516 00:39:48.159419  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-21: (1.017105ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53362]
I0516 00:39:48.160462  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-44: (984.618µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53382]
I0516 00:39:48.160550  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-44: (985.173µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53384]
I0516 00:39:48.161113  108888 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0516 00:39:48.161209  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-22: (999.212µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53386]
I0516 00:39:48.161259  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-45
I0516 00:39:48.161270  108888 scheduler.go:452] Attempting to schedule pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-45
I0516 00:39:48.161338  108888 factory.go:649] Unable to schedule preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-45: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0516 00:39:48.161417  108888 factory.go:720] Updating pod condition for preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-45 to (PodScheduled==False, Reason=Unschedulable)
I0516 00:39:48.161652  108888 wrap.go:47] PATCH /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events/ppod-15.159f02e6d81d624a: (1.738443ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53362]
I0516 00:39:48.162553  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-45: (948.002µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53382]
I0516 00:39:48.162584  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-45: (979.422µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53384]
I0516 00:39:48.162770  108888 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0516 00:39:48.162899  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-46
I0516 00:39:48.162958  108888 scheduler.go:452] Attempting to schedule pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-46
I0516 00:39:48.163054  108888 factory.go:649] Unable to schedule preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-46: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0516 00:39:48.163160  108888 factory.go:720] Updating pod condition for preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-46 to (PodScheduled==False, Reason=Unschedulable)
I0516 00:39:48.163663  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-23: (1.618773ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53386]
I0516 00:39:48.164257  108888 wrap.go:47] PATCH /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events/ppod-36.159f02e6eaf6f572: (1.970263ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53362]
I0516 00:39:48.164310  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-46: (966.879µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53384]
I0516 00:39:48.164566  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-46: (1.13823ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53382]
I0516 00:39:48.164596  108888 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0516 00:39:48.164821  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-21
I0516 00:39:48.164838  108888 scheduler.go:452] Attempting to schedule pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-21
I0516 00:39:48.164956  108888 factory.go:649] Unable to schedule preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-21: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0516 00:39:48.164998  108888 factory.go:720] Updating pod condition for preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-21 to (PodScheduled==False, Reason=Unschedulable)
I0516 00:39:48.165569  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-24: (1.27297ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53386]
I0516 00:39:48.166422  108888 wrap.go:47] PATCH /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events/ppod-38.159f02e6eb800b43: (1.563715ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53384]
I0516 00:39:48.169521  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-25: (3.482545ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53386]
I0516 00:39:48.169583  108888 wrap.go:47] PATCH /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events/ppod-39.159f02e6ec14d647: (2.512489ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53384]
I0516 00:39:48.169761  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-21: (4.091001ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53388]
I0516 00:39:48.170975  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-21: (5.780165ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53362]
I0516 00:39:48.171322  108888 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0516 00:39:48.171441  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-26: (1.140026ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53386]
I0516 00:39:48.171624  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-47
I0516 00:39:48.171660  108888 scheduler.go:452] Attempting to schedule pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-47
I0516 00:39:48.171763  108888 factory.go:649] Unable to schedule preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-47: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0516 00:39:48.171820  108888 factory.go:720] Updating pod condition for preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-47 to (PodScheduled==False, Reason=Unschedulable)
I0516 00:39:48.173065  108888 wrap.go:47] PATCH /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events/ppod-16.159f02e6dcbe8124: (2.624314ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53384]
I0516 00:39:48.173114  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-27: (1.074736ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53386]
I0516 00:39:48.175327  108888 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0516 00:39:48.175471  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-47: (3.410476ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53362]
I0516 00:39:48.175625  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-47: (3.311493ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53390]
I0516 00:39:48.175872  108888 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0516 00:39:48.176066  108888 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0516 00:39:48.176096  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-22
I0516 00:39:48.176112  108888 scheduler.go:452] Attempting to schedule pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-22
I0516 00:39:48.176367  108888 factory.go:649] Unable to schedule preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-22: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0516 00:39:48.176416  108888 factory.go:720] Updating pod condition for preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-22 to (PodScheduled==False, Reason=Unschedulable)
I0516 00:39:48.176578  108888 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0516 00:39:48.176897  108888 wrap.go:47] PATCH /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events/ppod-43.159f02e6ee2380b6: (1.882703ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53384]
I0516 00:39:48.177050  108888 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0516 00:39:48.178260  108888 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0516 00:39:48.178400  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-22: (1.709031ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53362]
I0516 00:39:48.178449  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-22: (1.867135ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53390]
I0516 00:39:48.179056  108888 wrap.go:47] PATCH /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events/ppod-44.159f02e6ee6b6261: (1.640797ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53384]
I0516 00:39:48.179116  108888 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0516 00:39:48.179280  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-48
I0516 00:39:48.178928  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-28: (5.457171ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53386]
I0516 00:39:48.179295  108888 scheduler.go:452] Attempting to schedule pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-48
I0516 00:39:48.179399  108888 factory.go:649] Unable to schedule preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-48: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0516 00:39:48.179439  108888 factory.go:720] Updating pod condition for preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-48 to (PodScheduled==False, Reason=Unschedulable)
I0516 00:39:48.181143  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-29: (1.486893ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53362]
I0516 00:39:48.181572  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-48: (1.987623ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53390]
I0516 00:39:48.181788  108888 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0516 00:39:48.181935  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-49
I0516 00:39:48.181957  108888 scheduler.go:452] Attempting to schedule pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-49
I0516 00:39:48.182045  108888 factory.go:649] Unable to schedule preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-49: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0516 00:39:48.182094  108888 factory.go:720] Updating pod condition for preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-49 to (PodScheduled==False, Reason=Unschedulable)
I0516 00:39:48.182544  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-30: (956.834µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53362]
I0516 00:39:48.182876  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-48: (1.937229ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53392]
I0516 00:39:48.183392  108888 wrap.go:47] PATCH /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events/ppod-45.159f02e6eeb796cd: (3.359467ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53394]
I0516 00:39:48.183425  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-49: (931.095µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53396]
I0516 00:39:48.183814  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-49: (1.553286ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53390]
I0516 00:39:48.183996  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-31: (1.106036ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53362]
I0516 00:39:48.184088  108888 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0516 00:39:48.184251  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-26
I0516 00:39:48.184472  108888 scheduler.go:452] Attempting to schedule pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-26
I0516 00:39:48.184618  108888 factory.go:649] Unable to schedule preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-26: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0516 00:39:48.184659  108888 factory.go:720] Updating pod condition for preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-26 to (PodScheduled==False, Reason=Unschedulable)
I0516 00:39:48.185325  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-32: (957.45µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53392]
I0516 00:39:48.186369  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-26: (1.316975ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53400]
I0516 00:39:48.186591  108888 wrap.go:47] PATCH /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events/ppod-46.159f02e6ef12829b: (2.180069ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53396]
I0516 00:39:48.187186  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-33: (1.464651ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53392]
I0516 00:39:48.187265  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-26: (1.542464ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53398]
I0516 00:39:48.187485  108888 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0516 00:39:48.188083  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-27
I0516 00:39:48.188163  108888 scheduler.go:452] Attempting to schedule pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-27
I0516 00:39:48.188259  108888 factory.go:649] Unable to schedule preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-27: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0516 00:39:48.188353  108888 factory.go:720] Updating pod condition for preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-27 to (PodScheduled==False, Reason=Unschedulable)
I0516 00:39:48.189481  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-34: (1.877592ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53396]
I0516 00:39:48.189731  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-27: (940.808µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53402]
I0516 00:39:48.189880  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-27: (796.198µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53404]
I0516 00:39:48.190081  108888 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0516 00:39:48.190417  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-30
I0516 00:39:48.190462  108888 scheduler.go:452] Attempting to schedule pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-30
I0516 00:39:48.190632  108888 factory.go:649] Unable to schedule preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-30: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0516 00:39:48.190660  108888 wrap.go:47] PATCH /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events/ppod-21.159f02e6e274bbdd: (3.080042ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53400]
I0516 00:39:48.190678  108888 factory.go:720] Updating pod condition for preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-30 to (PodScheduled==False, Reason=Unschedulable)
I0516 00:39:48.190964  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-35: (1.165565ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53396]
I0516 00:39:48.192060  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-30: (1.076082ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53400]
I0516 00:39:48.192170  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-30: (1.186397ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53404]
I0516 00:39:48.192289  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-36: (975.879µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53402]
I0516 00:39:48.192360  108888 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0516 00:39:48.193024  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-31
I0516 00:39:48.193042  108888 scheduler.go:452] Attempting to schedule pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-31
I0516 00:39:48.193129  108888 factory.go:649] Unable to schedule preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-31: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0516 00:39:48.193163  108888 factory.go:720] Updating pod condition for preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-31 to (PodScheduled==False, Reason=Unschedulable)
I0516 00:39:48.193291  108888 wrap.go:47] PATCH /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events/ppod-47.159f02e6ef769b58: (2.12139ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53396]
I0516 00:39:48.194730  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-37: (1.666195ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53404]
I0516 00:39:48.194835  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-31: (1.341832ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53400]
I0516 00:39:48.195551  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-31: (1.793723ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53396]
I0516 00:39:48.195564  108888 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0516 00:39:48.195888  108888 wrap.go:47] PATCH /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events/ppod-22.159f02e6e418dec0: (1.892854ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53406]
I0516 00:39:48.196021  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-37
I0516 00:39:48.196041  108888 scheduler.go:452] Attempting to schedule pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-37
I0516 00:39:48.196134  108888 factory.go:649] Unable to schedule preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-37: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0516 00:39:48.196199  108888 factory.go:720] Updating pod condition for preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-37 to (PodScheduled==False, Reason=Unschedulable)
I0516 00:39:48.197456  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-38: (2.132922ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53404]
I0516 00:39:48.197456  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-37: (1.022769ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53400]
I0516 00:39:48.197462  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-37: (1.082332ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53396]
I0516 00:39:48.198219  108888 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0516 00:39:48.198418  108888 wrap.go:47] PATCH /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events/ppod-48.159f02e6f019b12a: (1.639169ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53408]
I0516 00:39:48.198814  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-40
I0516 00:39:48.198833  108888 scheduler.go:452] Attempting to schedule pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-40
I0516 00:39:48.198943  108888 factory.go:649] Unable to schedule preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-40: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0516 00:39:48.199039  108888 factory.go:720] Updating pod condition for preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-40 to (PodScheduled==False, Reason=Unschedulable)
I0516 00:39:48.199476  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-39: (950.782µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53396]
I0516 00:39:48.200125  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-40: (887.681µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53400]
I0516 00:39:48.200179  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-40: (738.682µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53410]
I0516 00:39:48.200359  108888 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0516 00:39:48.200627  108888 wrap.go:47] PATCH /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events/ppod-49.159f02e6f06a8d90: (1.486326ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53408]
I0516 00:39:48.200630  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-41
I0516 00:39:48.200651  108888 scheduler.go:452] Attempting to schedule pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-41
I0516 00:39:48.200732  108888 factory.go:649] Unable to schedule preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-41: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0516 00:39:48.200772  108888 factory.go:720] Updating pod condition for preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-41 to (PodScheduled==False, Reason=Unschedulable)
I0516 00:39:48.201049  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-40: (784.209µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53396]
I0516 00:39:48.201812  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-41: (912.08µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53410]
I0516 00:39:48.202062  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-41: (992.049µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53400]
I0516 00:39:48.202090  108888 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0516 00:39:48.202312  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-42
I0516 00:39:48.202365  108888 scheduler.go:452] Attempting to schedule pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-42
I0516 00:39:48.202500  108888 factory.go:649] Unable to schedule preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-42: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0516 00:39:48.202584  108888 factory.go:720] Updating pod condition for preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-42 to (PodScheduled==False, Reason=Unschedulable)
I0516 00:39:48.202633  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-41: (937.438µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53412]
I0516 00:39:48.203324  108888 wrap.go:47] PATCH /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events/ppod-26.159f02e6e65481e1: (1.984076ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53396]
I0516 00:39:48.204383  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-42: (1.335436ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53400]
I0516 00:39:48.204383  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-42: (1.097142ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53412]
I0516 00:39:48.204666  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-42: (1.138244ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53410]
I0516 00:39:48.204697  108888 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0516 00:39:48.205852  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-43: (1.07662ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53400]
I0516 00:39:48.206198  108888 wrap.go:47] PATCH /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events/ppod-27.159f02e6e6ad8dd2: (1.417651ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53396]
I0516 00:39:48.207669  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-44: (1.012228ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53410]
I0516 00:39:48.209099  108888 wrap.go:47] PATCH /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events/ppod-30.159f02e6e7d480c4: (2.232367ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53412]
I0516 00:39:48.209144  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-45: (960.128µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53410]
I0516 00:39:48.210863  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-46: (1.406258ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53410]
I0516 00:39:48.211954  108888 wrap.go:47] PATCH /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events/ppod-31.159f02e6e868b4c5: (2.313148ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53412]
I0516 00:39:48.212617  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-47: (1.343511ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53410]
I0516 00:39:48.214374  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-48: (1.19397ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53410]
I0516 00:39:48.214895  108888 wrap.go:47] PATCH /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events/ppod-37.159f02e6eb402d6d: (2.282527ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53412]
I0516 00:39:48.215635  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-49: (883.775µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53410]
I0516 00:39:48.215832  108888 preemption_test.go:598] Cleaning up all pods...
I0516 00:39:48.217015  108888 wrap.go:47] PATCH /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events/ppod-40.159f02e6ed29ae51: (1.520988ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53412]
I0516 00:39:48.218637  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-0
I0516 00:39:48.218674  108888 scheduler.go:448] Skip schedule deleting pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-0
I0516 00:39:48.219943  108888 wrap.go:47] PATCH /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events/ppod-41.159f02e6ed6ce44f: (1.860456ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53412]
I0516 00:39:48.220655  108888 wrap.go:47] DELETE /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-0: (4.643585ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53410]
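Editor's note on the cleanup phase that starts at preemption_test.go:598 above: each DELETE marks a ppod for deletion, and when the scheduler next pops that pod it logs "Skip schedule deleting pod" instead of attempting another placement. The sketch below is an assumption about the guard behind that scheduler.go:448 message (a pod with a non-nil DeletionTimestamp is skipped before any fit evaluation), not a quote of the scheduler's actual code.

package main

import (
	"fmt"
	"time"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// skipIfDeleting mirrors the behaviour suggested by the "Skip schedule deleting pod"
// log line: a pod that already carries a deletion timestamp is not scheduled again.
// Illustrative sketch only.
func skipIfDeleting(pod *v1.Pod) bool {
	if pod.DeletionTimestamp != nil {
		fmt.Printf("Skip schedule deleting pod: %s/%s\n", pod.Namespace, pod.Name)
		return true
	}
	return false
}

func main() {
	now := metav1.NewTime(time.Now())
	pod := &v1.Pod{ObjectMeta: metav1.ObjectMeta{
		Namespace:         "preemption-race-example", // hypothetical namespace for the sketch
		Name:              "ppod-0",
		DeletionTimestamp: &now,
	}}
	_ = skipIfDeleting(pod)
}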
I0516 00:39:48.222684  108888 wrap.go:47] PATCH /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events/ppod-42.159f02e6edce85d4: (2.213908ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53412]
I0516 00:39:48.223773  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-1
I0516 00:39:48.223802  108888 scheduler.go:448] Skip schedule deleting pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-1
I0516 00:39:48.224370  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events: (1.237684ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53412]
I0516 00:39:48.225582  108888 wrap.go:47] DELETE /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-1: (4.373666ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53410]
I0516 00:39:48.227876  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events: (2.865063ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53412]
I0516 00:39:48.229490  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-2
I0516 00:39:48.229528  108888 scheduler.go:448] Skip schedule deleting pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-2
I0516 00:39:48.231615  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events: (1.762985ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53412]
I0516 00:39:48.231820  108888 wrap.go:47] DELETE /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-2: (5.706599ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53410]
I0516 00:39:48.235527  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-3
I0516 00:39:48.235581  108888 scheduler.go:448] Skip schedule deleting pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-3
I0516 00:39:48.237521  108888 wrap.go:47] DELETE /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-3: (5.007389ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53410]
I0516 00:39:48.238287  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events: (2.462622ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53412]
I0516 00:39:48.241208  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-4
I0516 00:39:48.241247  108888 scheduler.go:448] Skip schedule deleting pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-4
I0516 00:39:48.243065  108888 wrap.go:47] DELETE /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-4: (4.813381ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53410]
I0516 00:39:48.243198  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events: (1.681296ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53412]
I0516 00:39:48.246524  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-5
I0516 00:39:48.246629  108888 scheduler.go:448] Skip schedule deleting pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-5
I0516 00:39:48.248006  108888 wrap.go:47] DELETE /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-5: (4.470508ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53412]
I0516 00:39:48.251882  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events: (4.890315ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53410]
I0516 00:39:48.252160  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-6
I0516 00:39:48.252272  108888 scheduler.go:448] Skip schedule deleting pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-6
I0516 00:39:48.254765  108888 wrap.go:47] DELETE /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-6: (6.390667ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53412]
I0516 00:39:48.255045  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events: (2.416957ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53410]
I0516 00:39:48.258392  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-7
I0516 00:39:48.258434  108888 scheduler.go:448] Skip schedule deleting pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-7
I0516 00:39:48.261139  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events: (2.381184ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53412]
I0516 00:39:48.261334  108888 wrap.go:47] DELETE /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-7: (5.843663ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53410]
I0516 00:39:48.265452  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-8
I0516 00:39:48.265572  108888 scheduler.go:448] Skip schedule deleting pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-8
I0516 00:39:48.270591  108888 wrap.go:47] DELETE /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-8: (8.843441ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53412]
I0516 00:39:48.271601  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events: (2.586807ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53410]
I0516 00:39:48.274095  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-9
I0516 00:39:48.274255  108888 scheduler.go:448] Skip schedule deleting pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-9
I0516 00:39:48.275446  108888 wrap.go:47] DELETE /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-9: (4.469237ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53412]
I0516 00:39:48.277610  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events: (2.821428ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53410]
I0516 00:39:48.278750  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-10
I0516 00:39:48.278859  108888 scheduler.go:448] Skip schedule deleting pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-10
I0516 00:39:48.281969  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events: (2.598963ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53410]
I0516 00:39:48.283221  108888 wrap.go:47] DELETE /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-10: (7.419839ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53412]
I0516 00:39:48.286234  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-11
I0516 00:39:48.286273  108888 scheduler.go:448] Skip schedule deleting pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-11
I0516 00:39:48.287715  108888 wrap.go:47] DELETE /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-11: (4.173005ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53412]
I0516 00:39:48.288801  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events: (2.193418ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53410]
I0516 00:39:48.291923  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-12
I0516 00:39:48.291968  108888 scheduler.go:448] Skip schedule deleting pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-12
I0516 00:39:48.293287  108888 wrap.go:47] DELETE /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-12: (4.271956ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53412]
I0516 00:39:48.293940  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events: (1.677639ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53410]
I0516 00:39:48.296346  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-13
I0516 00:39:48.296437  108888 scheduler.go:448] Skip schedule deleting pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-13
I0516 00:39:48.297972  108888 wrap.go:47] DELETE /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-13: (4.194851ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53412]
I0516 00:39:48.298206  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events: (1.439987ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53410]
I0516 00:39:48.301354  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-14
I0516 00:39:48.301575  108888 scheduler.go:448] Skip schedule deleting pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-14
I0516 00:39:48.302488  108888 wrap.go:47] DELETE /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-14: (3.822078ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53412]
I0516 00:39:48.303317  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events: (1.440383ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53410]
I0516 00:39:48.306103  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-15
I0516 00:39:48.306219  108888 scheduler.go:448] Skip schedule deleting pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-15
I0516 00:39:48.308105  108888 wrap.go:47] DELETE /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-15: (5.200851ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53412]
I0516 00:39:48.310700  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events: (3.027469ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53410]
I0516 00:39:48.312248  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-16
I0516 00:39:48.312283  108888 scheduler.go:448] Skip schedule deleting pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-16
I0516 00:39:48.313786  108888 wrap.go:47] DELETE /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-16: (4.998611ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53412]
I0516 00:39:48.314370  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events: (1.613882ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53410]
I0516 00:39:48.317344  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-17
I0516 00:39:48.317385  108888 scheduler.go:448] Skip schedule deleting pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-17
I0516 00:39:48.318607  108888 wrap.go:47] DELETE /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-17: (4.445775ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53412]
I0516 00:39:48.319212  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events: (1.579036ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53410]
I0516 00:39:48.321997  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-18
I0516 00:39:48.322032  108888 scheduler.go:448] Skip schedule deleting pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-18
I0516 00:39:48.323592  108888 wrap.go:47] DELETE /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-18: (4.474262ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53412]
I0516 00:39:48.325220  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events: (2.842427ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53410]
I0516 00:39:48.327306  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-19
I0516 00:39:48.328684  108888 scheduler.go:448] Skip schedule deleting pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-19
I0516 00:39:48.328851  108888 wrap.go:47] DELETE /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-19: (4.731805ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53412]
I0516 00:39:48.331257  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events: (1.927725ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53410]
I0516 00:39:48.332387  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-20
I0516 00:39:48.332470  108888 scheduler.go:448] Skip schedule deleting pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-20
I0516 00:39:48.334186  108888 wrap.go:47] DELETE /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-20: (4.50742ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53412]
I0516 00:39:48.336230  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events: (3.426119ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53410]
I0516 00:39:48.338644  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-21
I0516 00:39:48.338682  108888 scheduler.go:448] Skip schedule deleting pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-21
I0516 00:39:48.341856  108888 wrap.go:47] DELETE /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-21: (6.99706ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53412]
I0516 00:39:48.342067  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events: (3.012538ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53410]
I0516 00:39:48.345424  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-22
I0516 00:39:48.345643  108888 scheduler.go:448] Skip schedule deleting pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-22
I0516 00:39:48.347276  108888 wrap.go:47] DELETE /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-22: (4.895585ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53412]
I0516 00:39:48.351237  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-23
I0516 00:39:48.351272  108888 scheduler.go:448] Skip schedule deleting pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-23
I0516 00:39:48.352120  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events: (2.069424ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53410]
I0516 00:39:48.352956  108888 wrap.go:47] DELETE /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-23: (5.123544ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53412]
I0516 00:39:48.356548  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-24
I0516 00:39:48.356591  108888 scheduler.go:448] Skip schedule deleting pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-24
I0516 00:39:48.356693  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events: (2.250206ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53410]
I0516 00:39:48.357843  108888 wrap.go:47] DELETE /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-24: (4.604716ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53412]
I0516 00:39:48.359392  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events: (1.896909ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53410]
I0516 00:39:48.361318  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-25
I0516 00:39:48.361363  108888 scheduler.go:448] Skip schedule deleting pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-25
I0516 00:39:48.362934  108888 wrap.go:47] DELETE /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-25: (4.727043ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53412]
I0516 00:39:48.363311  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events: (1.283389ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53410]
I0516 00:39:48.366813  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-26
I0516 00:39:48.366872  108888 scheduler.go:448] Skip schedule deleting pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-26
I0516 00:39:48.369995  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events: (2.73726ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53410]
I0516 00:39:48.370540  108888 wrap.go:47] DELETE /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-26: (6.784817ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53412]
I0516 00:39:48.373546  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-27
I0516 00:39:48.373616  108888 scheduler.go:448] Skip schedule deleting pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-27
I0516 00:39:48.375203  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events: (1.353361ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53410]
I0516 00:39:48.376643  108888 wrap.go:47] DELETE /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-27: (5.802629ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53412]
I0516 00:39:48.380103  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-28
I0516 00:39:48.380175  108888 scheduler.go:448] Skip schedule deleting pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-28
I0516 00:39:48.382219  108888 wrap.go:47] DELETE /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-28: (5.245498ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53412]
I0516 00:39:48.382984  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events: (1.597978ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53410]
I0516 00:39:48.386316  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-29
I0516 00:39:48.386442  108888 scheduler.go:448] Skip schedule deleting pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-29
I0516 00:39:48.386994  108888 wrap.go:47] DELETE /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-29: (4.25687ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53412]
I0516 00:39:48.389473  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events: (2.737384ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53410]
I0516 00:39:48.393020  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-30
I0516 00:39:48.393062  108888 scheduler.go:448] Skip schedule deleting pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-30
I0516 00:39:48.395194  108888 wrap.go:47] DELETE /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-30: (7.909179ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53412]
I0516 00:39:48.395651  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events: (2.309242ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53410]
I0516 00:39:48.399019  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-31
I0516 00:39:48.399060  108888 scheduler.go:448] Skip schedule deleting pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-31
I0516 00:39:48.401132  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events: (1.714099ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53410]
I0516 00:39:48.402568  108888 wrap.go:47] DELETE /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-31: (6.779907ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53412]
I0516 00:39:48.406625  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-32
I0516 00:39:48.406665  108888 scheduler.go:448] Skip schedule deleting pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-32
I0516 00:39:48.409123  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events: (1.917855ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53410]
I0516 00:39:48.411114  108888 wrap.go:47] DELETE /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-32: (8.061881ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53412]
I0516 00:39:48.415198  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-33
I0516 00:39:48.415287  108888 scheduler.go:448] Skip schedule deleting pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-33
I0516 00:39:48.416199  108888 wrap.go:47] DELETE /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-33: (4.77395ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53412]
I0516 00:39:48.418971  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events: (3.316952ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53410]
I0516 00:39:48.419950  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-34
I0516 00:39:48.419981  108888 scheduler.go:448] Skip schedule deleting pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-34
I0516 00:39:48.421808  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events: (1.611881ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53410]
I0516 00:39:48.423371  108888 wrap.go:47] DELETE /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-34: (6.6294ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53412]
I0516 00:39:48.426974  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-35
I0516 00:39:48.427026  108888 scheduler.go:448] Skip schedule deleting pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-35
I0516 00:39:48.430169  108888 wrap.go:47] DELETE /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-35: (6.395906ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53412]
I0516 00:39:48.430455  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events: (3.029475ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53410]
I0516 00:39:48.433442  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-36
I0516 00:39:48.433496  108888 scheduler.go:448] Skip schedule deleting pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-36
I0516 00:39:48.436149  108888 wrap.go:47] DELETE /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-36: (5.643833ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53412]
I0516 00:39:48.436159  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events: (2.36019ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53410]
I0516 00:39:48.440508  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-37
I0516 00:39:48.440597  108888 scheduler.go:448] Skip schedule deleting pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-37
I0516 00:39:48.441519  108888 wrap.go:47] DELETE /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-37: (4.614598ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53410]
I0516 00:39:48.442899  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events: (1.994136ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53412]
I0516 00:39:48.446016  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-38
I0516 00:39:48.446056  108888 scheduler.go:448] Skip schedule deleting pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-38
I0516 00:39:48.448007  108888 wrap.go:47] DELETE /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-38: (4.919387ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53410]
I0516 00:39:48.448422  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events: (2.105607ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53412]
I0516 00:39:48.452158  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-39
I0516 00:39:48.452244  108888 scheduler.go:448] Skip schedule deleting pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-39
I0516 00:39:48.453661  108888 wrap.go:47] DELETE /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-39: (5.279974ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53410]
I0516 00:39:48.455176  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events: (2.545961ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53412]
I0516 00:39:48.458478  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-40
I0516 00:39:48.458520  108888 scheduler.go:448] Skip schedule deleting pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-40
I0516 00:39:48.459638  108888 wrap.go:47] DELETE /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-40: (5.014158ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53410]
I0516 00:39:48.461443  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events: (2.575373ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53412]
I0516 00:39:48.462998  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-41
I0516 00:39:48.463143  108888 scheduler.go:448] Skip schedule deleting pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-41
I0516 00:39:48.463729  108888 wrap.go:47] DELETE /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-41: (3.596225ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53410]
I0516 00:39:48.465234  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events: (1.673159ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53412]
I0516 00:39:48.466803  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-42
I0516 00:39:48.466838  108888 scheduler.go:448] Skip schedule deleting pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-42
I0516 00:39:48.468445  108888 wrap.go:47] DELETE /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-42: (4.387858ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53410]
I0516 00:39:48.470112  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events: (2.963064ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53412]
I0516 00:39:48.471766  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-43
I0516 00:39:48.471830  108888 scheduler.go:448] Skip schedule deleting pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-43
I0516 00:39:48.474137  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events: (1.97059ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53412]
I0516 00:39:48.474590  108888 wrap.go:47] DELETE /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-43: (5.862687ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53410]
I0516 00:39:48.478499  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-44
I0516 00:39:48.478611  108888 scheduler.go:448] Skip schedule deleting pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-44
I0516 00:39:48.481029  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events: (2.060379ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53412]
I0516 00:39:48.481487  108888 wrap.go:47] DELETE /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-44: (6.051135ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53410]
I0516 00:39:48.484697  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-45
I0516 00:39:48.484778  108888 scheduler.go:448] Skip schedule deleting pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-45
I0516 00:39:48.487046  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events: (1.817818ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53412]
I0516 00:39:48.489969  108888 wrap.go:47] DELETE /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-45: (8.134008ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53410]
I0516 00:39:48.494313  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-46
I0516 00:39:48.494358  108888 scheduler.go:448] Skip schedule deleting pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-46
I0516 00:39:48.496285  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events: (1.572141ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53412]
I0516 00:39:48.497012  108888 wrap.go:47] DELETE /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-46: (6.575459ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53410]
I0516 00:39:48.500482  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-47
I0516 00:39:48.500526  108888 scheduler.go:448] Skip schedule deleting pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-47
I0516 00:39:48.501712  108888 wrap.go:47] DELETE /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-47: (4.338907ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53410]
I0516 00:39:48.502761  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events: (1.937335ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53412]
I0516 00:39:48.505568  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-48
I0516 00:39:48.505610  108888 scheduler.go:448] Skip schedule deleting pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-48
I0516 00:39:48.506419  108888 wrap.go:47] DELETE /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-48: (4.134001ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53410]
I0516 00:39:48.509278  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events: (3.205294ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53412]
I0516 00:39:48.511834  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-49
I0516 00:39:48.511902  108888 scheduler.go:448] Skip schedule deleting pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-49
I0516 00:39:48.513178  108888 wrap.go:47] DELETE /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-49: (3.975702ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53410]
I0516 00:39:48.515065  108888 wrap.go:47] DELETE /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/rpod-0: (1.031743ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53410]
I0516 00:39:48.515185  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events: (2.910267ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53412]
I0516 00:39:48.519487  108888 wrap.go:47] DELETE /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/rpod-1: (3.945147ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53412]
I0516 00:39:48.523597  108888 wrap.go:47] DELETE /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/preemptor-pod: (3.762028ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53412]
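The long run of `scheduler.go:448] Skip schedule deleting pod` lines above shows the scheduler declining to schedule pods that are already being deleted: each DELETE re-queues the pod, and the scheduling loop bails out as soon as it sees a deletion timestamp. A minimal illustrative sketch of that guard (not the kube-scheduler source; the pod values are made up to mirror the log):

```go
// Illustrative sketch: the "Skip schedule deleting pod" lines correspond to a
// guard like this, which refuses to schedule a pod once deletionTimestamp is set.
package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// skipDeletingPod mirrors the check behind the scheduler.go log line.
func skipDeletingPod(pod *v1.Pod) bool {
	return pod.DeletionTimestamp != nil
}

func main() {
	now := metav1.Now()
	pod := &v1.Pod{ObjectMeta: metav1.ObjectMeta{
		Namespace:         "preemption-race",
		Name:              "ppod-4",
		DeletionTimestamp: &now, // set by the DELETE seen in the log
	}}
	if skipDeletingPod(pod) {
		fmt.Printf("Skip schedule deleting pod: %s/%s\n", pod.Namespace, pod.Name)
	}
}
```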
I0516 00:39:48.526198  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-0: (1.018894ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53412]
I0516 00:39:48.529393  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-1: (1.067239ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53412]
I0516 00:39:48.532224  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-2: (1.161782ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53412]
I0516 00:39:48.534958  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-3: (1.007517ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53412]
I0516 00:39:48.537631  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-4: (1.092632ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53412]
I0516 00:39:48.540287  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-5: (1.090716ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53412]
I0516 00:39:48.542906  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-6: (1.01413ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53412]
I0516 00:39:48.545540  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-7: (996.28µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53412]
I0516 00:39:48.548336  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-8: (1.253918ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53412]
I0516 00:39:48.550960  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-9: (988.07µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53412]
I0516 00:39:48.554170  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-10: (1.663473ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53412]
I0516 00:39:48.556675  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-11: (1.006717ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53412]
I0516 00:39:48.559443  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-12: (1.177647ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53412]
I0516 00:39:48.562165  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-13: (1.10687ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53412]
I0516 00:39:48.565302  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-14: (1.314392ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53412]
I0516 00:39:48.569092  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-15: (1.5937ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53412]
I0516 00:39:48.572298  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-16: (1.09643ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53412]
I0516 00:39:48.575218  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-17: (1.160927ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53412]
I0516 00:39:48.577843  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-18: (1.036072ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53412]
I0516 00:39:48.580659  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-19: (1.192544ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53412]
I0516 00:39:48.583349  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-20: (1.10868ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53412]
I0516 00:39:48.586133  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-21: (1.122471ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53412]
I0516 00:39:48.589224  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-22: (1.560277ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53412]
I0516 00:39:48.591776  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-23: (1.062041ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53412]
I0516 00:39:48.594578  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-24: (1.151734ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53412]
I0516 00:39:48.597776  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-25: (1.154625ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53412]
I0516 00:39:48.619789  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-26: (20.370916ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53412]
I0516 00:39:48.623263  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-27: (1.350626ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53412]
I0516 00:39:48.626252  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-28: (1.369513ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53412]
I0516 00:39:48.631589  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-29: (2.981866ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53412]
I0516 00:39:48.635348  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-30: (1.255186ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53412]
I0516 00:39:48.638025  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-31: (1.157122ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53412]
I0516 00:39:48.640999  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-32: (1.247898ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53412]
I0516 00:39:48.643990  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-33: (1.434716ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53412]
I0516 00:39:48.646552  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-34: (1.032382ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53412]
I0516 00:39:48.649643  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-35: (1.375475ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53412]
I0516 00:39:48.652106  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-36: (969.452µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53412]
I0516 00:39:48.654879  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-37: (1.124366ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53412]
I0516 00:39:48.657872  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-38: (1.269022ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53412]
I0516 00:39:48.660556  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-39: (1.096901ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53412]
I0516 00:39:48.663185  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-40: (1.02672ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53412]
I0516 00:39:48.665608  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-41: (927.26µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53412]
I0516 00:39:48.670157  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-42: (1.690679ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53412]
I0516 00:39:48.672905  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-43: (1.161219ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53412]
I0516 00:39:48.676414  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-44: (1.144721ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53412]
I0516 00:39:48.679434  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-45: (1.262294ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53412]
I0516 00:39:48.682188  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-46: (1.101234ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53412]
I0516 00:39:48.684809  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-47: (1.04695ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53412]
I0516 00:39:48.689932  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-48: (3.452317ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53412]
I0516 00:39:48.692742  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-49: (1.156621ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53412]
I0516 00:39:48.695417  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/rpod-0: (1.110664ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53412]
I0516 00:39:48.698247  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/rpod-1: (1.231128ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53412]
I0516 00:39:48.700708  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/preemptor-pod: (961.601µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53412]
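The block of GET requests returning 404 above is the test verifying that every pod from the previous round is really gone before the next round starts; a NotFound response is the success condition. A hedged sketch of such a wait loop, using the context-free client-go signatures of that era (the function name and timings are assumptions, not taken from preemption_test.go):

```go
// Illustrative sketch of the cleanup verification reflected by the 404 GETs above.
package podcleanup

import (
	"time"

	"k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForPodGone polls until a GET for the pod returns NotFound (HTTP 404).
func waitForPodGone(cs kubernetes.Interface, ns, name string) error {
	return wait.PollImmediate(100*time.Millisecond, 30*time.Second, func() (bool, error) {
		_, err := cs.CoreV1().Pods(ns).Get(name, metav1.GetOptions{})
		if errors.IsNotFound(err) {
			return true, nil // pod is gone
		}
		if err != nil {
			return false, err // unexpected error: stop polling
		}
		return false, nil // still present: keep polling
	})
}
```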
I0516 00:39:48.702837  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods: (1.696251ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53412]
I0516 00:39:48.703075  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/rpod-0
I0516 00:39:48.703097  108888 scheduler.go:452] Attempting to schedule pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/rpod-0
I0516 00:39:48.703205  108888 scheduler_binder.go:256] AssumePodVolumes for pod "preemption-race6285e400-528e-4217-8497-a7dfe793aee9/rpod-0", node "node1"
I0516 00:39:48.703223  108888 scheduler_binder.go:266] AssumePodVolumes for pod "preemption-race6285e400-528e-4217-8497-a7dfe793aee9/rpod-0", node "node1": all PVCs bound and nothing to do
I0516 00:39:48.703267  108888 factory.go:711] Attempting to bind rpod-0 to node1
I0516 00:39:48.705079  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/rpod-0/binding: (1.48242ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53410]
I0516 00:39:48.705811  108888 scheduler.go:589] pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/rpod-0 is bound successfully on node node1, 1 nodes evaluated, 1 nodes were found feasible
I0516 00:39:48.705874  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/rpod-1
I0516 00:39:48.705894  108888 scheduler.go:452] Attempting to schedule pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/rpod-1
I0516 00:39:48.706020  108888 scheduler_binder.go:256] AssumePodVolumes for pod "preemption-race6285e400-528e-4217-8497-a7dfe793aee9/rpod-1", node "node1"
I0516 00:39:48.706040  108888 scheduler_binder.go:266] AssumePodVolumes for pod "preemption-race6285e400-528e-4217-8497-a7dfe793aee9/rpod-1", node "node1": all PVCs bound and nothing to do
I0516 00:39:48.706120  108888 factory.go:711] Attempting to bind rpod-1 to node1
I0516 00:39:48.706364  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods: (3.103154ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53412]
I0516 00:39:48.708517  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/rpod-1/binding: (1.921329ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53414]
I0516 00:39:48.708767  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events: (2.230449ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53410]
I0516 00:39:48.708780  108888 scheduler.go:589] pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/rpod-1 is bound successfully on node node1, 1 nodes evaluated, 1 nodes were found feasible
I0516 00:39:48.710643  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events: (1.385253ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53410]
I0516 00:39:48.811884  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/rpod-0: (1.885328ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53412]
I0516 00:39:48.914675  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/rpod-1: (2.040431ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53412]
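rpod-0 and rpod-1 take the normal scheduling path: AssumePodVolumes finds all PVCs bound ("nothing to do"), and the bind is issued as a POST to the pod's binding subresource, which is why the log shows `POST .../pods/rpod-0/binding` returning 201. A minimal client-go sketch of that bind call (clientset, namespace, and the era's non-context Bind signature are assumptions):

```go
// Illustrative sketch of the POST .../pods/<name>/binding request in the log.
package bindsketch

import (
	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// bindPodToNode asks the apiserver to bind the pod to the given node,
// the same operation the scheduler performs for rpod-0 on node1.
func bindPodToNode(cs kubernetes.Interface, ns, podName, nodeName string) error {
	binding := &v1.Binding{
		ObjectMeta: metav1.ObjectMeta{Namespace: ns, Name: podName},
		Target:     v1.ObjectReference{Kind: "Node", Name: nodeName},
	}
	return cs.CoreV1().Pods(ns).Bind(binding)
}
```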
I0516 00:39:48.915047  108888 preemption_test.go:561] Creating the preemptor pod...
I0516 00:39:48.917218  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods: (1.896508ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53412]
I0516 00:39:48.917449  108888 preemption_test.go:567] Creating additional pods...
I0516 00:39:48.917877  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/preemptor-pod
I0516 00:39:48.917903  108888 scheduler.go:452] Attempting to schedule pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/preemptor-pod
I0516 00:39:48.918035  108888 factory.go:649] Unable to schedule preemption-race6285e400-528e-4217-8497-a7dfe793aee9/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0516 00:39:48.918082  108888 factory.go:720] Updating pod condition for preemption-race6285e400-528e-4217-8497-a7dfe793aee9/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0516 00:39:48.919862  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods: (2.165112ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53412]
I0516 00:39:48.920907  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events: (1.78626ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53460]
I0516 00:39:48.921157  108888 wrap.go:47] PUT /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/preemptor-pod/status: (2.318477ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53414]
I0516 00:39:48.921815  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/preemptor-pod: (1.681996ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53412]
E0516 00:39:48.922088  108888 factory.go:686] pod is already present in the activeQ
I0516 00:39:48.922111  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods: (1.583198ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53464]
I0516 00:39:48.922624  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/preemptor-pod: (1.073117ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53414]
I0516 00:39:48.922894  108888 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
E0516 00:39:48.923014  108888 utils.go:79] pod.Status.StartTime is nil for pod rpod-1. Should not reach here.
E0516 00:39:48.923032  108888 utils.go:79] pod.Status.StartTime is nil for pod rpod-0. Should not reach here.
I0516 00:39:48.924030  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods: (1.526582ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53462]
I0516 00:39:48.924762  108888 wrap.go:47] PUT /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/preemptor-pod/status: (1.439429ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53414]
I0516 00:39:48.925834  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods: (1.350391ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53462]
I0516 00:39:48.927994  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods: (1.358675ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53462]
I0516 00:39:48.928500  108888 wrap.go:47] DELETE /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/rpod-0: (3.356164ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53414]
I0516 00:39:48.929240  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/preemptor-pod
I0516 00:39:48.929254  108888 scheduler.go:452] Attempting to schedule pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/preemptor-pod
I0516 00:39:48.929384  108888 scheduler_binder.go:256] AssumePodVolumes for pod "preemption-race6285e400-528e-4217-8497-a7dfe793aee9/preemptor-pod", node "node1"
I0516 00:39:48.929445  108888 scheduler_binder.go:266] AssumePodVolumes for pod "preemption-race6285e400-528e-4217-8497-a7dfe793aee9/preemptor-pod", node "node1": all PVCs bound and nothing to do
I0516 00:39:48.929496  108888 factory.go:711] Attempting to bind preemptor-pod to node1
I0516 00:39:48.929541  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-0
I0516 00:39:48.929556  108888 scheduler.go:452] Attempting to schedule pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-0
I0516 00:39:48.929658  108888 factory.go:649] Unable to schedule preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-0: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0516 00:39:48.929700  108888 factory.go:720] Updating pod condition for preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-0 to (PodScheduled==False, Reason=Unschedulable)
I0516 00:39:48.932633  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods: (4.135516ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53462]
I0516 00:39:48.933135  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events: (4.260905ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53414]
I0516 00:39:48.934383  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-0: (2.816214ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53468]
I0516 00:39:48.934786  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/preemptor-pod/binding: (1.984069ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53460]
I0516 00:39:48.935195  108888 scheduler.go:589] pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/preemptor-pod is bound successfully on node node1, 1 nodes evaluated, 1 nodes were found feasible
I0516 00:39:48.935196  108888 wrap.go:47] PUT /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-0/status: (3.218517ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53466]
I0516 00:39:48.935449  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods: (1.875956ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53462]
I0516 00:39:48.936498  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events: (2.923341ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53414]
I0516 00:39:48.950546  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods: (14.712806ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53462]
I0516 00:39:48.950729  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-0: (15.111884ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53460]
I0516 00:39:48.951100  108888 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0516 00:39:48.951377  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-1
I0516 00:39:48.951393  108888 scheduler.go:452] Attempting to schedule pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-1
I0516 00:39:48.951495  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events: (14.572686ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53414]
I0516 00:39:48.951494  108888 factory.go:649] Unable to schedule preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-1: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0516 00:39:48.951545  108888 factory.go:720] Updating pod condition for preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-1 to (PodScheduled==False, Reason=Unschedulable)
I0516 00:39:48.954018  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods: (2.857631ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53462]
I0516 00:39:48.954031  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events: (1.842641ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53470]
I0516 00:39:48.954319  108888 wrap.go:47] PUT /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-1/status: (2.580089ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53414]
I0516 00:39:48.954482  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-1: (2.725805ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53468]
I0516 00:39:48.957456  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-1: (2.778727ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53414]
I0516 00:39:48.957714  108888 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0516 00:39:48.957955  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods: (2.981546ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53468]
I0516 00:39:48.958099  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-2
I0516 00:39:48.958125  108888 scheduler.go:452] Attempting to schedule pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-2
I0516 00:39:48.958210  108888 factory.go:649] Unable to schedule preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-2: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0516 00:39:48.958254  108888 factory.go:720] Updating pod condition for preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-2 to (PodScheduled==False, Reason=Unschedulable)
I0516 00:39:48.960868  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events: (1.690752ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53474]
I0516 00:39:48.961255  108888 wrap.go:47] PUT /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-2/status: (2.754603ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53414]
I0516 00:39:48.961298  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-2: (2.134294ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53472]
I0516 00:39:48.961701  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods: (3.103928ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53470]
I0516 00:39:48.963075  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-2: (1.475358ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53414]
I0516 00:39:48.963343  108888 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0516 00:39:48.963554  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-3
I0516 00:39:48.963574  108888 scheduler.go:452] Attempting to schedule pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-3
I0516 00:39:48.963670  108888 factory.go:649] Unable to schedule preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-3: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0516 00:39:48.963732  108888 factory.go:720] Updating pod condition for preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-3 to (PodScheduled==False, Reason=Unschedulable)
I0516 00:39:48.970563  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events: (6.458921ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53474]
I0516 00:39:48.971256  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-3: (5.978131ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53476]
I0516 00:39:48.971304  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods: (8.972223ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53470]
E0516 00:39:48.971518  108888 factory.go:686] pod is already present in the activeQ
I0516 00:39:48.972042  108888 wrap.go:47] PUT /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-3/status: (8.088298ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53414]
I0516 00:39:48.974310  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods: (2.487193ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53476]
I0516 00:39:48.974318  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-3: (1.85848ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53414]
I0516 00:39:48.974620  108888 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0516 00:39:48.975543  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-4
I0516 00:39:48.975557  108888 scheduler.go:452] Attempting to schedule pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-4
I0516 00:39:48.975649  108888 factory.go:649] Unable to schedule preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-4: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0516 00:39:48.975687  108888 factory.go:720] Updating pod condition for preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-4 to (PodScheduled==False, Reason=Unschedulable)
I0516 00:39:48.980229  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events: (2.307383ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53488]
I0516 00:39:48.981715  108888 wrap.go:47] PUT /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-4/status: (3.800048ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53474]
I0516 00:39:48.981988  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods: (7.226652ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53476]
I0516 00:39:48.982059  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-4: (3.437084ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53486]
I0516 00:39:48.986139  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-4: (3.714558ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53474]
I0516 00:39:48.986226  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods: (3.42795ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53488]
I0516 00:39:48.986513  108888 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0516 00:39:48.986728  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-5
I0516 00:39:48.986776  108888 scheduler.go:452] Attempting to schedule pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-5
I0516 00:39:48.986931  108888 factory.go:649] Unable to schedule preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-5: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0516 00:39:48.986978  108888 factory.go:720] Updating pod condition for preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-5 to (PodScheduled==False, Reason=Unschedulable)
I0516 00:39:48.990196  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events: (2.483234ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53492]
I0516 00:39:48.990827  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-5: (3.138221ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53490]
I0516 00:39:48.990867  108888 wrap.go:47] PUT /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-5/status: (3.626787ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53474]
I0516 00:39:48.991622  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods: (4.863639ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53488]
I0516 00:39:48.992595  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-5: (1.223424ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53490]
I0516 00:39:48.992825  108888 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0516 00:39:48.993107  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-6
I0516 00:39:48.993125  108888 scheduler.go:452] Attempting to schedule pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-6
I0516 00:39:48.993222  108888 factory.go:649] Unable to schedule preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-6: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0516 00:39:48.993260  108888 factory.go:720] Updating pod condition for preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-6 to (PodScheduled==False, Reason=Unschedulable)
I0516 00:39:48.995103  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-6: (1.22282ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53494]
I0516 00:39:48.996653  108888 wrap.go:47] PUT /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-6/status: (3.158097ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53492]
I0516 00:39:48.996657  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods: (3.449197ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53490]
I0516 00:39:48.996897  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events: (2.951545ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53496]
I0516 00:39:48.998465  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-6: (1.320781ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53492]
I0516 00:39:48.998710  108888 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0516 00:39:48.998871  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-7
I0516 00:39:48.998891  108888 scheduler.go:452] Attempting to schedule pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-7
I0516 00:39:48.999031  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods: (1.51945ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53496]
I0516 00:39:48.999032  108888 factory.go:649] Unable to schedule preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-7: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0516 00:39:48.999114  108888 factory.go:720] Updating pod condition for preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-7 to (PodScheduled==False, Reason=Unschedulable)
I0516 00:39:49.001516  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events: (1.773136ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53500]
I0516 00:39:49.001589  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-7: (2.311774ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53492]
I0516 00:39:49.002002  108888 wrap.go:47] PUT /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-7/status: (2.558227ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53494]
I0516 00:39:49.003625  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-7: (1.159441ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53492]
I0516 00:39:49.003847  108888 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0516 00:39:49.004206  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods: (3.361353ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53498]
I0516 00:39:49.004395  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-8
I0516 00:39:49.004408  108888 scheduler.go:452] Attempting to schedule pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-8
I0516 00:39:49.004556  108888 factory.go:649] Unable to schedule preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-8: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0516 00:39:49.004617  108888 factory.go:720] Updating pod condition for preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-8 to (PodScheduled==False, Reason=Unschedulable)
I0516 00:39:49.008063  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events: (2.683305ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53508]
I0516 00:39:49.017345  108888 wrap.go:47] PUT /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-8/status: (6.525018ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53500]
I0516 00:39:49.017608  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods: (12.882035ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53492]
I0516 00:39:49.021677  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-8: (3.381642ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53492]
I0516 00:39:49.022229  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-8: (3.303784ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53508]
I0516 00:39:49.022955  108888 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0516 00:39:49.023286  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-9
I0516 00:39:49.023308  108888 scheduler.go:452] Attempting to schedule pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-9
I0516 00:39:49.023429  108888 factory.go:649] Unable to schedule preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-9: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0516 00:39:49.023474  108888 factory.go:720] Updating pod condition for preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-9 to (PodScheduled==False, Reason=Unschedulable)
I0516 00:39:49.023515  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods: (5.451877ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53500]
E0516 00:39:49.022958  108888 factory.go:686] pod is already present in the activeQ
I0516 00:39:49.026620  108888 wrap.go:47] PUT /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-9/status: (2.851977ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53492]
I0516 00:39:49.026910  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-9: (2.596169ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53500]
I0516 00:39:49.028269  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods: (4.36638ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53508]
I0516 00:39:49.029466  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events: (3.112724ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53558]
I0516 00:39:49.034315  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods: (4.505537ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53508]
I0516 00:39:49.039228  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods: (4.069111ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53558]
I0516 00:39:49.041578  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-9: (14.270048ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53492]
I0516 00:39:49.041839  108888 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0516 00:39:49.042751  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods: (2.362683ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53558]
I0516 00:39:49.045087  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-10
I0516 00:39:49.045106  108888 scheduler.go:452] Attempting to schedule pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-10
I0516 00:39:49.045242  108888 factory.go:649] Unable to schedule preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-10: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0516 00:39:49.045289  108888 factory.go:720] Updating pod condition for preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-10 to (PodScheduled==False, Reason=Unschedulable)
I0516 00:39:49.045936  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods: (1.913931ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53492]
I0516 00:39:49.047491  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-10: (1.383947ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53602]
I0516 00:39:49.047768  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events: (1.599152ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53604]
I0516 00:39:49.049355  108888 wrap.go:47] PUT /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-10/status: (3.820192ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53500]
I0516 00:39:49.050737  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods: (3.39455ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53492]
I0516 00:39:49.051264  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-10: (1.495718ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53604]
I0516 00:39:49.052231  108888 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0516 00:39:49.052493  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-3
I0516 00:39:49.052519  108888 scheduler.go:452] Attempting to schedule pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-3
I0516 00:39:49.052639  108888 factory.go:649] Unable to schedule preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-3: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0516 00:39:49.052735  108888 factory.go:720] Updating pod condition for preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-3 to (PodScheduled==False, Reason=Unschedulable)
I0516 00:39:49.054399  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods: (3.116852ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53492]
I0516 00:39:49.055296  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-3: (1.80877ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53604]
I0516 00:39:49.055618  108888 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0516 00:39:49.055619  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-3: (2.306703ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53602]
I0516 00:39:49.055991  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-11
I0516 00:39:49.056063  108888 scheduler.go:452] Attempting to schedule pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-11
I0516 00:39:49.056484  108888 factory.go:649] Unable to schedule preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-11: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0516 00:39:49.056594  108888 factory.go:720] Updating pod condition for preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-11 to (PodScheduled==False, Reason=Unschedulable)
I0516 00:39:49.058210  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods: (3.439147ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53492]
I0516 00:39:49.058299  108888 wrap.go:47] PATCH /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events/ppod-3.159f02e765e5786f: (3.927736ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53614]
I0516 00:39:49.060939  108888 wrap.go:47] PUT /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-11/status: (3.285561ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53602]
I0516 00:39:49.061208  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-11: (3.67726ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53604]
E0516 00:39:49.061442  108888 factory.go:686] pod is already present in the activeQ
I0516 00:39:49.061994  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods: (2.082768ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53492]
I0516 00:39:49.065081  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-11: (3.241427ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53602]
I0516 00:39:49.065573  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events: (5.686197ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53614]
I0516 00:39:49.066048  108888 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0516 00:39:49.066393  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-12
I0516 00:39:49.066409  108888 scheduler.go:452] Attempting to schedule pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-12
I0516 00:39:49.066528  108888 factory.go:649] Unable to schedule preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-12: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0516 00:39:49.066631  108888 factory.go:720] Updating pod condition for preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-12 to (PodScheduled==False, Reason=Unschedulable)
I0516 00:39:49.068288  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods: (3.120875ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53492]
I0516 00:39:49.070253  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-12: (3.088733ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53604]
I0516 00:39:49.070893  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods: (2.099709ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53492]
I0516 00:39:49.071967  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events: (3.245269ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53630]
I0516 00:39:49.072378  108888 wrap.go:47] PUT /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-12/status: (4.828476ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53602]
I0516 00:39:49.073478  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods: (2.123344ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53492]
I0516 00:39:49.078003  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-12: (2.924103ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53630]
I0516 00:39:49.078330  108888 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0516 00:39:49.078610  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-13
I0516 00:39:49.078641  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods: (4.057691ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53492]
I0516 00:39:49.078656  108888 scheduler.go:452] Attempting to schedule pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-13
I0516 00:39:49.079038  108888 factory.go:649] Unable to schedule preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-13: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0516 00:39:49.079108  108888 factory.go:720] Updating pod condition for preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-13 to (PodScheduled==False, Reason=Unschedulable)
I0516 00:39:49.082613  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events: (2.686382ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53642]
I0516 00:39:49.083307  108888 wrap.go:47] PUT /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-13/status: (3.888262ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53630]
I0516 00:39:49.087738  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods: (1.898205ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53492]
I0516 00:39:49.087980  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-13: (3.368003ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53604]
I0516 00:39:49.087741  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-13: (4.011444ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53630]
I0516 00:39:49.089561  108888 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0516 00:39:49.093934  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-14
I0516 00:39:49.093962  108888 scheduler.go:452] Attempting to schedule pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-14
I0516 00:39:49.094097  108888 factory.go:649] Unable to schedule preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-14: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0516 00:39:49.094146  108888 factory.go:720] Updating pod condition for preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-14 to (PodScheduled==False, Reason=Unschedulable)
I0516 00:39:49.097570  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-14: (1.459357ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53646]
I0516 00:39:49.102675  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events: (7.412963ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53648]
I0516 00:39:49.104022  108888 wrap.go:47] PUT /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-14/status: (7.514694ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53642]
I0516 00:39:49.105111  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods: (14.851908ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53630]
I0516 00:39:49.110418  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-14: (5.182805ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53648]
I0516 00:39:49.110724  108888 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0516 00:39:49.111397  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods: (2.195136ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53630]
I0516 00:39:49.112151  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-15
I0516 00:39:49.112174  108888 scheduler.go:452] Attempting to schedule pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-15
I0516 00:39:49.112282  108888 factory.go:649] Unable to schedule preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-15: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0516 00:39:49.112334  108888 factory.go:720] Updating pod condition for preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-15 to (PodScheduled==False, Reason=Unschedulable)
I0516 00:39:49.114996  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events: (1.814104ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53652]
I0516 00:39:49.115083  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods: (3.209748ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53648]
I0516 00:39:49.115247  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-15: (1.954101ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53650]
I0516 00:39:49.115323  108888 wrap.go:47] PUT /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-15/status: (2.699651ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53646]
I0516 00:39:49.116721  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-15: (962.364µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53648]
I0516 00:39:49.117434  108888 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0516 00:39:49.117594  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-16
I0516 00:39:49.117607  108888 scheduler.go:452] Attempting to schedule pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-16
I0516 00:39:49.117684  108888 factory.go:649] Unable to schedule preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-16: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0516 00:39:49.117705  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods: (1.918806ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53652]
I0516 00:39:49.117722  108888 factory.go:720] Updating pod condition for preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-16 to (PodScheduled==False, Reason=Unschedulable)
I0516 00:39:49.120451  108888 wrap.go:47] PUT /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-16/status: (2.463444ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53648]
I0516 00:39:49.121763  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events: (1.509046ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53650]
I0516 00:39:49.122749  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-16: (2.17166ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53654]
E0516 00:39:49.123058  108888 factory.go:686] pod is already present in the activeQ
I0516 00:39:49.124302  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-16: (1.150264ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53648]
I0516 00:39:49.124587  108888 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0516 00:39:49.125254  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods: (4.419528ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53652]
I0516 00:39:49.126457  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-17
I0516 00:39:49.126474  108888 scheduler.go:452] Attempting to schedule pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-17
I0516 00:39:49.126587  108888 factory.go:649] Unable to schedule preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-17: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0516 00:39:49.126639  108888 factory.go:720] Updating pod condition for preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-17 to (PodScheduled==False, Reason=Unschedulable)
I0516 00:39:49.128242  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods: (2.502674ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53654]
I0516 00:39:49.131102  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events: (3.4156ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53658]
I0516 00:39:49.133217  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods: (4.00897ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53660]
I0516 00:39:49.133511  108888 wrap.go:47] PUT /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-17/status: (5.832896ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53650]
I0516 00:39:49.133807  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-17: (5.099413ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53654]
I0516 00:39:49.136829  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-17: (1.260046ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53658]
I0516 00:39:49.137095  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods: (2.398359ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53656]
I0516 00:39:49.137169  108888 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0516 00:39:49.137583  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-18
I0516 00:39:49.137605  108888 scheduler.go:452] Attempting to schedule pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-18
I0516 00:39:49.137682  108888 factory.go:649] Unable to schedule preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-18: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0516 00:39:49.137724  108888 factory.go:720] Updating pod condition for preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-18 to (PodScheduled==False, Reason=Unschedulable)
I0516 00:39:49.140123  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-18: (1.382008ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53662]
I0516 00:39:49.140506  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods: (2.819579ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53656]
I0516 00:39:49.140632  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events: (1.720337ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53664]
I0516 00:39:49.140695  108888 wrap.go:47] PUT /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-18/status: (2.721629ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53658]
I0516 00:39:49.145231  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods: (4.193119ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53656]
I0516 00:39:49.145232  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-18: (4.142173ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53658]
I0516 00:39:49.145561  108888 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0516 00:39:49.145761  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-8
I0516 00:39:49.145778  108888 scheduler.go:452] Attempting to schedule pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-8
I0516 00:39:49.145870  108888 factory.go:649] Unable to schedule preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-8: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0516 00:39:49.145966  108888 factory.go:720] Updating pod condition for preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-8 to (PodScheduled==False, Reason=Unschedulable)
I0516 00:39:49.148034  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-8: (1.724596ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53662]
I0516 00:39:49.148228  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods: (2.485735ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53656]
I0516 00:39:49.148550  108888 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0516 00:39:49.148602  108888 wrap.go:47] PATCH /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events/ppod-8.159f02e768550b15: (1.876869ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53668]
I0516 00:39:49.148722  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-19
I0516 00:39:49.148766  108888 scheduler.go:452] Attempting to schedule pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-19
I0516 00:39:49.148963  108888 factory.go:649] Unable to schedule preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-19: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0516 00:39:49.149057  108888 factory.go:720] Updating pod condition for preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-19 to (PodScheduled==False, Reason=Unschedulable)
I0516 00:39:49.149141  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-8: (2.74885ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53666]
I0516 00:39:49.150819  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-19: (1.607368ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53662]
I0516 00:39:49.151631  108888 wrap.go:47] PUT /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-19/status: (2.17016ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53666]
I0516 00:39:49.151653  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events: (1.60148ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53670]
I0516 00:39:49.151886  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods: (2.820957ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53656]
I0516 00:39:49.153547  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-19: (1.522348ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53662]
I0516 00:39:49.153600  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods: (1.328236ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53656]
I0516 00:39:49.153793  108888 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0516 00:39:49.154000  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-20
I0516 00:39:49.154018  108888 scheduler.go:452] Attempting to schedule pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-20
I0516 00:39:49.154154  108888 factory.go:649] Unable to schedule preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-20: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0516 00:39:49.154197  108888 factory.go:720] Updating pod condition for preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-20 to (PodScheduled==False, Reason=Unschedulable)
I0516 00:39:49.155587  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods: (1.60569ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53670]
I0516 00:39:49.156739  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-20: (1.980161ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53674]
I0516 00:39:49.156903  108888 wrap.go:47] PUT /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-20/status: (2.45772ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53672]
I0516 00:39:49.157525  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods: (1.548361ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53670]
E0516 00:39:49.157599  108888 factory.go:686] pod is already present in the activeQ
I0516 00:39:49.158760  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-20: (1.485724ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53672]
I0516 00:39:49.158875  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events: (4.061222ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53676]
I0516 00:39:49.159054  108888 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0516 00:39:49.159224  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-21
I0516 00:39:49.159244  108888 scheduler.go:452] Attempting to schedule pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-21
I0516 00:39:49.159324  108888 factory.go:649] Unable to schedule preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-21: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0516 00:39:49.159365  108888 factory.go:720] Updating pod condition for preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-21 to (PodScheduled==False, Reason=Unschedulable)
I0516 00:39:49.160991  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-21: (1.017099ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53674]
I0516 00:39:49.161956  108888 wrap.go:47] PUT /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-21/status: (2.364623ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53670]
I0516 00:39:49.162046  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events: (2.037484ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53678]
I0516 00:39:49.163498  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-21: (1.070089ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53670]
I0516 00:39:49.163775  108888 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0516 00:39:49.163984  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-22
I0516 00:39:49.164004  108888 scheduler.go:452] Attempting to schedule pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-22
I0516 00:39:49.164125  108888 factory.go:649] Unable to schedule preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-22: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0516 00:39:49.164173  108888 factory.go:720] Updating pod condition for preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-22 to (PodScheduled==False, Reason=Unschedulable)
I0516 00:39:49.165668  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-22: (1.258132ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53674]
I0516 00:39:49.166169  108888 wrap.go:47] PUT /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-22/status: (1.736786ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53670]
I0516 00:39:49.167484  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-22: (993.405µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53670]
I0516 00:39:49.167791  108888 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0516 00:39:49.168021  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-23
I0516 00:39:49.168347  108888 scheduler.go:452] Attempting to schedule pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-23
I0516 00:39:49.168513  108888 factory.go:649] Unable to schedule preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-23: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0516 00:39:49.168570  108888 factory.go:720] Updating pod condition for preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-23 to (PodScheduled==False, Reason=Unschedulable)
I0516 00:39:49.168217  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events: (1.997873ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53680]
I0516 00:39:49.172046  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events: (2.484716ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53680]
I0516 00:39:49.172117  108888 wrap.go:47] PUT /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-23/status: (3.269126ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53670]
I0516 00:39:49.172129  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-23: (2.882811ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53674]
E0516 00:39:49.172371  108888 factory.go:686] pod is already present in the activeQ
I0516 00:39:49.175179  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-23: (1.328679ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53670]
I0516 00:39:49.175413  108888 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0516 00:39:49.175495  108888 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0516 00:39:49.175599  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-24
I0516 00:39:49.175614  108888 scheduler.go:452] Attempting to schedule pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-24
I0516 00:39:49.175741  108888 factory.go:649] Unable to schedule preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-24: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0516 00:39:49.175783  108888 factory.go:720] Updating pod condition for preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-24 to (PodScheduled==False, Reason=Unschedulable)
I0516 00:39:49.176865  108888 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0516 00:39:49.176909  108888 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0516 00:39:49.177185  108888 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0516 00:39:49.178423  108888 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0516 00:39:49.181012  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-24: (4.851189ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53680]
I0516 00:39:49.181491  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events: (4.212673ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53682]
I0516 00:39:49.183270  108888 wrap.go:47] PUT /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-24/status: (7.195573ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53670]
I0516 00:39:49.185167  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-24: (1.316066ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53682]
I0516 00:39:49.185401  108888 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0516 00:39:49.185624  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-25
I0516 00:39:49.185637  108888 scheduler.go:452] Attempting to schedule pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-25
I0516 00:39:49.185734  108888 factory.go:649] Unable to schedule preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-25: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0516 00:39:49.185784  108888 factory.go:720] Updating pod condition for preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-25 to (PodScheduled==False, Reason=Unschedulable)
I0516 00:39:49.188377  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-25: (2.295531ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53682]
I0516 00:39:49.188550  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events: (2.261533ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53680]
I0516 00:39:49.190953  108888 wrap.go:47] PUT /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-25/status: (2.286867ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53682]
I0516 00:39:49.192979  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-25: (1.343103ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53684]
I0516 00:39:49.193208  108888 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0516 00:39:49.193423  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-26
I0516 00:39:49.193447  108888 scheduler.go:452] Attempting to schedule pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-26
I0516 00:39:49.193605  108888 factory.go:649] Unable to schedule preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-26: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0516 00:39:49.193655  108888 factory.go:720] Updating pod condition for preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-26 to (PodScheduled==False, Reason=Unschedulable)
I0516 00:39:49.195807  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-26: (1.470881ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53680]
I0516 00:39:49.196197  108888 wrap.go:47] PUT /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-26/status: (2.28275ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53684]
I0516 00:39:49.197886  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-26: (1.22718ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53684]
I0516 00:39:49.198153  108888 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0516 00:39:49.198639  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events: (1.538812ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53686]
I0516 00:39:49.198853  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-27
I0516 00:39:49.198887  108888 scheduler.go:452] Attempting to schedule pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-27
I0516 00:39:49.199009  108888 factory.go:649] Unable to schedule preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-27: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0516 00:39:49.199077  108888 factory.go:720] Updating pod condition for preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-27 to (PodScheduled==False, Reason=Unschedulable)
I0516 00:39:49.200442  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-27: (1.080355ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53680]
I0516 00:39:49.201692  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events: (1.900168ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53688]
I0516 00:39:49.202306  108888 wrap.go:47] PUT /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-27/status: (2.996314ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53684]
I0516 00:39:49.204407  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-27: (1.567272ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53688]
I0516 00:39:49.204688  108888 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0516 00:39:49.204946  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-28
I0516 00:39:49.204964  108888 scheduler.go:452] Attempting to schedule pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-28
I0516 00:39:49.205040  108888 factory.go:649] Unable to schedule preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-28: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0516 00:39:49.205081  108888 factory.go:720] Updating pod condition for preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-28 to (PodScheduled==False, Reason=Unschedulable)
I0516 00:39:49.207660  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-28: (1.907567ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53680]
I0516 00:39:49.207981  108888 wrap.go:47] PUT /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-28/status: (2.112495ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53688]
I0516 00:39:49.209758  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events: (2.837145ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53690]
I0516 00:39:49.212829  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-28: (3.715611ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53688]
I0516 00:39:49.213167  108888 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0516 00:39:49.213320  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-11
I0516 00:39:49.213332  108888 scheduler.go:452] Attempting to schedule pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-11
I0516 00:39:49.213418  108888 factory.go:649] Unable to schedule preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-11: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0516 00:39:49.213459  108888 factory.go:720] Updating pod condition for preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-11 to (PodScheduled==False, Reason=Unschedulable)
I0516 00:39:49.217743  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-11: (4.041995ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53690]
I0516 00:39:49.218001  108888 wrap.go:47] PATCH /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events/ppod-11.159f02e76b6e69cb: (3.17577ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53692]
I0516 00:39:49.218357  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-11: (4.051687ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53680]
I0516 00:39:49.219566  108888 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0516 00:39:49.219991  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-29
I0516 00:39:49.220009  108888 scheduler.go:452] Attempting to schedule pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-29
I0516 00:39:49.220156  108888 factory.go:649] Unable to schedule preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-29: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0516 00:39:49.220206  108888 factory.go:720] Updating pod condition for preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-29 to (PodScheduled==False, Reason=Unschedulable)
I0516 00:39:49.222612  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-29: (1.702845ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53690]
I0516 00:39:49.224580  108888 wrap.go:47] PUT /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-29/status: (3.573559ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53692]
I0516 00:39:49.226878  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-29: (1.726809ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53692]
I0516 00:39:49.226893  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events: (5.009353ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53694]
I0516 00:39:49.227259  108888 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0516 00:39:49.227487  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-30
I0516 00:39:49.227503  108888 scheduler.go:452] Attempting to schedule pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-30
I0516 00:39:49.227606  108888 factory.go:649] Unable to schedule preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-30: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0516 00:39:49.227664  108888 factory.go:720] Updating pod condition for preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-30 to (PodScheduled==False, Reason=Unschedulable)
I0516 00:39:49.229250  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-30: (1.232073ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53690]
I0516 00:39:49.229803  108888 wrap.go:47] PUT /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-30/status: (1.67192ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53694]
I0516 00:39:49.231457  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-30: (1.014786ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53694]
I0516 00:39:49.231676  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events: (1.449129ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53690]
I0516 00:39:49.231805  108888 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0516 00:39:49.232022  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-31
I0516 00:39:49.232063  108888 scheduler.go:452] Attempting to schedule pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-31
I0516 00:39:49.232168  108888 factory.go:649] Unable to schedule preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-31: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0516 00:39:49.232272  108888 factory.go:720] Updating pod condition for preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-31 to (PodScheduled==False, Reason=Unschedulable)
I0516 00:39:49.235385  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-31: (2.272502ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53696]
I0516 00:39:49.235465  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events: (2.27642ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53698]
I0516 00:39:49.235715  108888 wrap.go:47] PUT /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-31/status: (3.213042ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53694]
E0516 00:39:49.235824  108888 factory.go:686] pod is already present in the activeQ
I0516 00:39:49.237447  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-31: (1.099982ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53696]
I0516 00:39:49.238150  108888 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0516 00:39:49.240429  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-32
I0516 00:39:49.240454  108888 scheduler.go:452] Attempting to schedule pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-32
I0516 00:39:49.240589  108888 factory.go:649] Unable to schedule preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-32: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0516 00:39:49.240638  108888 factory.go:720] Updating pod condition for preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-32 to (PodScheduled==False, Reason=Unschedulable)
I0516 00:39:49.243179  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-32: (1.541421ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53698]
I0516 00:39:49.243514  108888 wrap.go:47] PUT /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-32/status: (2.6088ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53696]
I0516 00:39:49.244400  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events: (2.646331ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53700]
I0516 00:39:49.245572  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-32: (1.477608ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53696]
I0516 00:39:49.245891  108888 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0516 00:39:49.246060  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-33
I0516 00:39:49.246078  108888 scheduler.go:452] Attempting to schedule pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-33
I0516 00:39:49.246177  108888 factory.go:649] Unable to schedule preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-33: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0516 00:39:49.246225  108888 factory.go:720] Updating pod condition for preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-33 to (PodScheduled==False, Reason=Unschedulable)
I0516 00:39:49.247694  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-33: (1.121138ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53698]
I0516 00:39:49.248378  108888 wrap.go:47] PUT /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-33/status: (1.796762ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53700]
I0516 00:39:49.248803  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events: (1.389702ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53702]
E0516 00:39:49.249841  108888 factory.go:686] pod is already present in the activeQ
I0516 00:39:49.250476  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-33: (1.560438ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53700]
I0516 00:39:49.250761  108888 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0516 00:39:49.250981  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-34
I0516 00:39:49.250998  108888 scheduler.go:452] Attempting to schedule pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-34
I0516 00:39:49.251100  108888 factory.go:649] Unable to schedule preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-34: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0516 00:39:49.251178  108888 factory.go:720] Updating pod condition for preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-34 to (PodScheduled==False, Reason=Unschedulable)
I0516 00:39:49.252873  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-34: (1.167571ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53698]
I0516 00:39:49.254138  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events: (2.224284ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53704]
I0516 00:39:49.254527  108888 wrap.go:47] PUT /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-34/status: (2.791603ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53702]
E0516 00:39:49.254609  108888 factory.go:686] pod is already present in the activeQ
I0516 00:39:49.256206  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-34: (1.167876ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53698]
I0516 00:39:49.256477  108888 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0516 00:39:49.256885  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-35
I0516 00:39:49.256910  108888 scheduler.go:452] Attempting to schedule pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-35
I0516 00:39:49.257095  108888 factory.go:649] Unable to schedule preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-35: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0516 00:39:49.257175  108888 factory.go:720] Updating pod condition for preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-35 to (PodScheduled==False, Reason=Unschedulable)
I0516 00:39:49.260127  108888 wrap.go:47] PUT /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-35/status: (2.460353ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53698]
I0516 00:39:49.260841  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-35: (2.069186ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53708]
I0516 00:39:49.260897  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/preemptor-pod: (1.69973ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53706]
E0516 00:39:49.261182  108888 factory.go:686] pod is already present in the activeQ
I0516 00:39:49.262004  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events: (4.460325ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53704]
I0516 00:39:49.263750  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-35: (3.029483ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53698]
I0516 00:39:49.264297  108888 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0516 00:39:49.264475  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-36
I0516 00:39:49.264497  108888 scheduler.go:452] Attempting to schedule pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-36
I0516 00:39:49.264630  108888 factory.go:649] Unable to schedule preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-36: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0516 00:39:49.264756  108888 factory.go:720] Updating pod condition for preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-36 to (PodScheduled==False, Reason=Unschedulable)
I0516 00:39:49.272079  108888 wrap.go:47] PUT /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-36/status: (6.954099ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53708]
I0516 00:39:49.272175  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events: (6.714994ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53710]
I0516 00:39:49.277653  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-36: (12.54907ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53706]
I0516 00:39:49.277744  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-36: (5.12663ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53708]
I0516 00:39:49.278074  108888 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
E0516 00:39:49.278083  108888 factory.go:686] pod is already present in the activeQ
I0516 00:39:49.278285  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-37
I0516 00:39:49.278306  108888 scheduler.go:452] Attempting to schedule pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-37
I0516 00:39:49.278417  108888 factory.go:649] Unable to schedule preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-37: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0516 00:39:49.278881  108888 factory.go:720] Updating pod condition for preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-37 to (PodScheduled==False, Reason=Unschedulable)
I0516 00:39:49.281653  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-37: (2.113741ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53708]
I0516 00:39:49.281949  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events: (2.407794ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53712]
I0516 00:39:49.284356  108888 wrap.go:47] PUT /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-37/status: (5.147604ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53710]
I0516 00:39:49.287869  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-37: (2.977516ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53712]
I0516 00:39:49.288239  108888 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0516 00:39:49.288389  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-38
I0516 00:39:49.288450  108888 scheduler.go:452] Attempting to schedule pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-38
I0516 00:39:49.289046  108888 factory.go:649] Unable to schedule preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-38: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0516 00:39:49.289102  108888 factory.go:720] Updating pod condition for preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-38 to (PodScheduled==False, Reason=Unschedulable)
I0516 00:39:49.292174  108888 wrap.go:47] PUT /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-38/status: (2.802463ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53712]
I0516 00:39:49.292282  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events: (2.280994ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53714]
I0516 00:39:49.292564  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-38: (2.697479ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53708]
E0516 00:39:49.293017  108888 factory.go:686] pod is already present in the activeQ
I0516 00:39:49.294762  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-38: (1.843793ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53712]
I0516 00:39:49.295141  108888 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0516 00:39:49.295306  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-16
I0516 00:39:49.295323  108888 scheduler.go:452] Attempting to schedule pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-16
I0516 00:39:49.295405  108888 factory.go:649] Unable to schedule preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-16: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0516 00:39:49.295453  108888 factory.go:720] Updating pod condition for preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-16 to (PodScheduled==False, Reason=Unschedulable)
I0516 00:39:49.298370  108888 wrap.go:47] PATCH /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events/ppod-16.159f02e76f132dc4: (2.002306ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53716]
I0516 00:39:49.298412  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-16: (2.016884ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53714]
I0516 00:39:49.298641  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-16: (2.896943ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53708]
I0516 00:39:49.298984  108888 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0516 00:39:49.299181  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-39
I0516 00:39:49.299204  108888 scheduler.go:452] Attempting to schedule pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-39
I0516 00:39:49.299423  108888 factory.go:649] Unable to schedule preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-39: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0516 00:39:49.299523  108888 factory.go:720] Updating pod condition for preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-39 to (PodScheduled==False, Reason=Unschedulable)
I0516 00:39:49.301950  108888 wrap.go:47] PUT /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-39/status: (2.047226ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53716]
I0516 00:39:49.301994  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-39: (2.202743ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53714]
I0516 00:39:49.302825  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events: (2.6289ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53718]
I0516 00:39:49.303580  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-39: (1.193236ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53714]
I0516 00:39:49.303843  108888 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0516 00:39:49.304047  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-40
I0516 00:39:49.304072  108888 scheduler.go:452] Attempting to schedule pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-40
I0516 00:39:49.304146  108888 factory.go:649] Unable to schedule preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-40: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0516 00:39:49.304191  108888 factory.go:720] Updating pod condition for preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-40 to (PodScheduled==False, Reason=Unschedulable)
I0516 00:39:49.306224  108888 wrap.go:47] PUT /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-40/status: (1.804824ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53718]
I0516 00:39:49.306691  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-40: (1.475537ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53716]
E0516 00:39:49.306941  108888 factory.go:686] pod is already present in the activeQ
I0516 00:39:49.307750  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-40: (1.172393ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53718]
I0516 00:39:49.308345  108888 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0516 00:39:49.308621  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-41
I0516 00:39:49.308644  108888 scheduler.go:452] Attempting to schedule pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-41
I0516 00:39:49.308748  108888 factory.go:649] Unable to schedule preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-41: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0516 00:39:49.308798  108888 factory.go:720] Updating pod condition for preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-41 to (PodScheduled==False, Reason=Unschedulable)
I0516 00:39:49.310516  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events: (5.605941ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53720]
I0516 00:39:49.311200  108888 wrap.go:47] PUT /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-41/status: (2.14869ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53718]
I0516 00:39:49.311285  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-41: (2.134076ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53716]
E0516 00:39:49.311690  108888 factory.go:686] pod is already present in the activeQ
I0516 00:39:49.313033  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events: (1.602629ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53720]
I0516 00:39:49.313764  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-41: (1.507529ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53716]
I0516 00:39:49.314081  108888 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0516 00:39:49.314421  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-42
I0516 00:39:49.314469  108888 scheduler.go:452] Attempting to schedule pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-42
I0516 00:39:49.314678  108888 factory.go:649] Unable to schedule preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-42: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0516 00:39:49.314763  108888 factory.go:720] Updating pod condition for preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-42 to (PodScheduled==False, Reason=Unschedulable)
I0516 00:39:49.317241  108888 wrap.go:47] PUT /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-42/status: (1.804147ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53722]
I0516 00:39:49.317244  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events: (2.08815ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53718]
I0516 00:39:49.317631  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-42: (2.553919ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53720]
I0516 00:39:49.319514  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-42: (1.79065ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53722]
I0516 00:39:49.319880  108888 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0516 00:39:49.320063  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-43
I0516 00:39:49.320081  108888 scheduler.go:452] Attempting to schedule pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-43
I0516 00:39:49.320180  108888 factory.go:649] Unable to schedule preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-43: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0516 00:39:49.320225  108888 factory.go:720] Updating pod condition for preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-43 to (PodScheduled==False, Reason=Unschedulable)
I0516 00:39:49.322273  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-43: (1.262478ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53718]
I0516 00:39:49.322852  108888 wrap.go:47] PUT /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-43/status: (2.387755ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53720]
I0516 00:39:49.324100  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events: (2.473559ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53724]
I0516 00:39:49.324809  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-43: (1.422314ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53720]
I0516 00:39:49.325112  108888 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0516 00:39:49.325321  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-44
I0516 00:39:49.325352  108888 scheduler.go:452] Attempting to schedule pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-44
I0516 00:39:49.325469  108888 factory.go:649] Unable to schedule preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-44: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0516 00:39:49.325519  108888 factory.go:720] Updating pod condition for preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-44 to (PodScheduled==False, Reason=Unschedulable)
I0516 00:39:49.327614  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-44: (1.486282ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53718]
I0516 00:39:49.328350  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events: (1.467064ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53726]
I0516 00:39:49.329657  108888 wrap.go:47] PUT /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-44/status: (3.604801ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53724]
I0516 00:39:49.331170  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-44: (1.040762ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53726]
I0516 00:39:49.331462  108888 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0516 00:39:49.331708  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-45
I0516 00:39:49.331727  108888 scheduler.go:452] Attempting to schedule pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-45
I0516 00:39:49.331850  108888 factory.go:649] Unable to schedule preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-45: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0516 00:39:49.331905  108888 factory.go:720] Updating pod condition for preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-45 to (PodScheduled==False, Reason=Unschedulable)
I0516 00:39:49.334280  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-45: (1.785063ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53726]
I0516 00:39:49.334545  108888 wrap.go:47] PUT /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-45/status: (2.093105ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53718]
I0516 00:39:49.336016  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-45: (1.140936ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53718]
I0516 00:39:49.336548  108888 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0516 00:39:49.336696  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-46
I0516 00:39:49.336709  108888 scheduler.go:452] Attempting to schedule pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-46
I0516 00:39:49.336770  108888 factory.go:649] Unable to schedule preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-46: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0516 00:39:49.336811  108888 factory.go:720] Updating pod condition for preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-46 to (PodScheduled==False, Reason=Unschedulable)
I0516 00:39:49.338010  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events: (3.195322ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53726]
I0516 00:39:49.348473  108888 wrap.go:47] PUT /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-46/status: (11.408934ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53718]
I0516 00:39:49.348473  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-46: (11.355549ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53728]
I0516 00:39:49.348819  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events: (9.622333ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53726]
E0516 00:39:49.348935  108888 factory.go:686] pod is already present in the activeQ
I0516 00:39:49.350577  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-46: (1.292423ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53718]
I0516 00:39:49.350852  108888 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0516 00:39:49.351079  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-47
I0516 00:39:49.351098  108888 scheduler.go:452] Attempting to schedule pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-47
I0516 00:39:49.351191  108888 factory.go:649] Unable to schedule preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-47: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0516 00:39:49.351234  108888 factory.go:720] Updating pod condition for preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-47 to (PodScheduled==False, Reason=Unschedulable)
I0516 00:39:49.355282  108888 wrap.go:47] PUT /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-47/status: (3.764359ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53718]
I0516 00:39:49.355812  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events: (3.729477ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53730]
I0516 00:39:49.355845  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-47: (3.828641ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53728]
I0516 00:39:49.357514  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-47: (1.684171ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53718]
I0516 00:39:49.357819  108888 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0516 00:39:49.358085  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-48
I0516 00:39:49.358104  108888 scheduler.go:452] Attempting to schedule pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-48
I0516 00:39:49.358217  108888 factory.go:649] Unable to schedule preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-48: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0516 00:39:49.358266  108888 factory.go:720] Updating pod condition for preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-48 to (PodScheduled==False, Reason=Unschedulable)
I0516 00:39:49.360074  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-48: (1.175426ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53730]
I0516 00:39:49.361763  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events: (1.991346ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53732]
I0516 00:39:49.362061  108888 wrap.go:47] PUT /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-48/status: (3.173884ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53728]
I0516 00:39:49.364640  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/preemptor-pod: (2.365274ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53732]
I0516 00:39:49.364871  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-48: (2.298602ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53728]
I0516 00:39:49.365181  108888 preemption_test.go:583] Check unschedulable pods still exists and were never scheduled...
I0516 00:39:49.365587  108888 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0516 00:39:49.365787  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-20
I0516 00:39:49.365813  108888 scheduler.go:452] Attempting to schedule pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-20
I0516 00:39:49.365989  108888 factory.go:649] Unable to schedule preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-20: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0516 00:39:49.366045  108888 factory.go:720] Updating pod condition for preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-20 to (PodScheduled==False, Reason=Unschedulable)
I0516 00:39:49.366987  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-0: (1.640982ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53732]
I0516 00:39:49.368822  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-20: (2.30577ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53730]
I0516 00:39:49.368832  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-20: (1.433441ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53734]
I0516 00:39:49.369123  108888 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0516 00:39:49.369527  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-49
I0516 00:39:49.369583  108888 scheduler.go:452] Attempting to schedule pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-49
I0516 00:39:49.369691  108888 factory.go:649] Unable to schedule preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-49: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0516 00:39:49.369784  108888 factory.go:720] Updating pod condition for preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-49 to (PodScheduled==False, Reason=Unschedulable)
I0516 00:39:49.370466  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-1: (1.849346ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53732]
I0516 00:39:49.370614  108888 wrap.go:47] PATCH /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events/ppod-20.159f02e7713fc211: (3.210936ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53736]
I0516 00:39:49.372057  108888 wrap.go:47] PUT /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-49/status: (1.978399ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53730]
I0516 00:39:49.371668  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-49: (1.596922ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53734]
E0516 00:39:49.374496  108888 factory.go:686] pod is already present in the activeQ
I0516 00:39:49.375050  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-2: (2.494673ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53732]
I0516 00:39:49.375131  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-49: (2.679226ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53736]
I0516 00:39:49.375410  108888 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0516 00:39:49.375736  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-23
I0516 00:39:49.375775  108888 scheduler.go:452] Attempting to schedule pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-23
I0516 00:39:49.375882  108888 factory.go:649] Unable to schedule preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-23: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0516 00:39:49.375959  108888 factory.go:720] Updating pod condition for preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-23 to (PodScheduled==False, Reason=Unschedulable)
I0516 00:39:49.377124  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events: (2.720067ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53738]
I0516 00:39:49.377988  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-23: (1.519925ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53734]
I0516 00:39:49.378205  108888 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0516 00:39:49.378413  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-24
I0516 00:39:49.378429  108888 scheduler.go:452] Attempting to schedule pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-24
I0516 00:39:49.379107  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-23: (2.529848ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53740]
I0516 00:39:49.379745  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-3: (4.229562ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53732]
I0516 00:39:49.380145  108888 wrap.go:47] PATCH /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events/ppod-23.159f02e7721af8d6: (2.450321ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53738]
I0516 00:39:49.380475  108888 factory.go:649] Unable to schedule preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-24: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0516 00:39:49.380525  108888 factory.go:720] Updating pod condition for preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-24 to (PodScheduled==False, Reason=Unschedulable)
I0516 00:39:49.382161  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-4: (1.853341ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53740]
I0516 00:39:49.383549  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-24: (2.744102ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53734]
I0516 00:39:49.383840  108888 wrap.go:47] PATCH /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events/ppod-24.159f02e772892283: (2.498324ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53742]
I0516 00:39:49.384017  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-24: (3.260701ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53738]
I0516 00:39:49.384327  108888 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0516 00:39:49.384526  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-31
I0516 00:39:49.384557  108888 scheduler.go:452] Attempting to schedule pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-31
I0516 00:39:49.384706  108888 factory.go:649] Unable to schedule preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-31: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0516 00:39:49.384768  108888 factory.go:720] Updating pod condition for preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-31 to (PodScheduled==False, Reason=Unschedulable)
I0516 00:39:49.385799  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-5: (3.112697ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53740]
I0516 00:39:49.386584  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-31: (1.523722ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53734]
I0516 00:39:49.387368  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-31: (2.272225ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53742]
I0516 00:39:49.387657  108888 wrap.go:47] PATCH /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events/ppod-31.159f02e775e63874: (2.177669ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53744]
I0516 00:39:49.387743  108888 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0516 00:39:49.387937  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-33
I0516 00:39:49.387978  108888 scheduler.go:452] Attempting to schedule pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-33
I0516 00:39:49.388164  108888 factory.go:649] Unable to schedule preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-33: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0516 00:39:49.388244  108888 factory.go:720] Updating pod condition for preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-33 to (PodScheduled==False, Reason=Unschedulable)
I0516 00:39:49.389525  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-33: (1.09812ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53740]
I0516 00:39:49.390000  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-6: (1.676638ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53734]
I0516 00:39:49.390892  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-33: (2.076852ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53746]
I0516 00:39:49.391170  108888 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0516 00:39:49.391309  108888 wrap.go:47] PATCH /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events/ppod-33.159f02e776bbf057: (2.19693ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53748]
I0516 00:39:49.391511  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-34
I0516 00:39:49.391605  108888 scheduler.go:452] Attempting to schedule pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-34
I0516 00:39:49.391751  108888 factory.go:649] Unable to schedule preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-34: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0516 00:39:49.391832  108888 factory.go:720] Updating pod condition for preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-34 to (PodScheduled==False, Reason=Unschedulable)
I0516 00:39:49.392724  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-7: (2.324085ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53734]
I0516 00:39:49.396153  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-8: (2.129717ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53734]
I0516 00:39:49.396215  108888 wrap.go:47] PATCH /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events/ppod-34.159f02e777078cc3: (2.397395ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53750]
I0516 00:39:49.396830  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-34: (4.456217ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53746]
I0516 00:39:49.397214  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-34: (5.112992ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53740]
I0516 00:39:49.397526  108888 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0516 00:39:49.397781  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-35
I0516 00:39:49.397837  108888 scheduler.go:452] Attempting to schedule pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-35
I0516 00:39:49.397943  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-9: (1.258117ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53734]
I0516 00:39:49.397970  108888 factory.go:649] Unable to schedule preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-35: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0516 00:39:49.398014  108888 factory.go:720] Updating pod condition for preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-35 to (PodScheduled==False, Reason=Unschedulable)
I0516 00:39:49.399765  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-35: (1.324969ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53746]
I0516 00:39:49.400091  108888 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0516 00:39:49.400394  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-35: (1.529454ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53754]
I0516 00:39:49.400650  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-36
I0516 00:39:49.400666  108888 scheduler.go:452] Attempting to schedule pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-36
I0516 00:39:49.400776  108888 factory.go:649] Unable to schedule preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-36: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0516 00:39:49.400811  108888 factory.go:720] Updating pod condition for preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-36 to (PodScheduled==False, Reason=Unschedulable)
I0516 00:39:49.401681  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-10: (3.260352ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53750]
I0516 00:39:49.402759  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-36: (1.795337ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53754]
I0516 00:39:49.402899  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-36: (1.876024ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53746]
I0516 00:39:49.403206  108888 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0516 00:39:49.403389  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-38
I0516 00:39:49.403406  108888 scheduler.go:452] Attempting to schedule pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-38
I0516 00:39:49.403496  108888 factory.go:649] Unable to schedule preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-38: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0516 00:39:49.403538  108888 factory.go:720] Updating pod condition for preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-38 to (PodScheduled==False, Reason=Unschedulable)
I0516 00:39:49.404158  108888 wrap.go:47] PATCH /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events/ppod-35.159f02e77762f2b8: (5.258482ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53752]
I0516 00:39:49.404560  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-11: (1.666661ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53750]
I0516 00:39:49.430217  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-38: (26.404895ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53754]
I0516 00:39:49.430217  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-12: (25.115572ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53752]
I0516 00:39:49.430545  108888 wrap.go:47] PATCH /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events/ppod-36.159f02e777d69183: (25.53848ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53750]
I0516 00:39:49.430674  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-38: (26.41643ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53746]
I0516 00:39:49.431334  108888 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0516 00:39:49.431628  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-40
I0516 00:39:49.431665  108888 scheduler.go:452] Attempting to schedule pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-40
I0516 00:39:49.431799  108888 factory.go:649] Unable to schedule preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-40: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0516 00:39:49.431843  108888 factory.go:720] Updating pod condition for preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-40 to (PodScheduled==False, Reason=Unschedulable)
I0516 00:39:49.434087  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-13: (2.265497ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53746]
I0516 00:39:49.438871  108888 wrap.go:47] PATCH /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events/ppod-38.159f02e7794a27cd: (6.852914ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53754]
I0516 00:39:49.439833  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-40: (7.076601ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53756]
I0516 00:39:49.440075  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-40: (7.629898ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53758]
I0516 00:39:49.440306  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-14: (5.906734ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53746]
I0516 00:39:49.440379  108888 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0516 00:39:49.440590  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-41
I0516 00:39:49.440609  108888 scheduler.go:452] Attempting to schedule pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-41
I0516 00:39:49.440698  108888 factory.go:649] Unable to schedule preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-41: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0516 00:39:49.440745  108888 factory.go:720] Updating pod condition for preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-41 to (PodScheduled==False, Reason=Unschedulable)
I0516 00:39:49.442447  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-41: (1.399797ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53756]
I0516 00:39:49.443341  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-41: (2.419302ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53746]
I0516 00:39:49.443626  108888 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0516 00:39:49.443800  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-46
I0516 00:39:49.443816  108888 scheduler.go:452] Attempting to schedule pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-46
I0516 00:39:49.443942  108888 factory.go:649] Unable to schedule preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-46: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0516 00:39:49.443995  108888 factory.go:720] Updating pod condition for preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-46 to (PodScheduled==False, Reason=Unschedulable)
I0516 00:39:49.445455  108888 wrap.go:47] PATCH /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events/ppod-40.159f02e77a307df8: (5.820538ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53754]
I0516 00:39:49.445910  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-46: (1.629028ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53756]
I0516 00:39:49.446219  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-15: (4.329032ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53760]
I0516 00:39:49.446575  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-46: (2.424309ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53746]
I0516 00:39:49.446820  108888 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0516 00:39:49.447123  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-49
I0516 00:39:49.447145  108888 scheduler.go:452] Attempting to schedule pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-49
I0516 00:39:49.447248  108888 factory.go:649] Unable to schedule preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-49: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0516 00:39:49.447302  108888 factory.go:720] Updating pod condition for preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-49 to (PodScheduled==False, Reason=Unschedulable)
I0516 00:39:49.449244  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-16: (1.052898ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53754]
I0516 00:39:49.449403  108888 wrap.go:47] PATCH /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events/ppod-41.159f02e77a76adae: (2.734393ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53756]
I0516 00:39:49.450249  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-49: (2.413954ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53746]
I0516 00:39:49.450332  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-49: (1.795689ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53762]
I0516 00:39:49.450542  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-17: (878.996µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53756]
I0516 00:39:49.450740  108888 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0516 00:39:49.452909  108888 wrap.go:47] PATCH /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events/ppod-46.159f02e77c223ad3: (2.616233ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53754]
I0516 00:39:49.453253  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-18: (2.118684ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53762]
I0516 00:39:49.455188  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-19: (1.204232ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53754]
I0516 00:39:49.456283  108888 wrap.go:47] PATCH /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events/ppod-49.159f02e77e191418: (2.314417ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53746]
I0516 00:39:49.456953  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-20: (1.32955ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53754]
I0516 00:39:49.468184  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-21: (10.78219ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53754]
I0516 00:39:49.470722  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-22: (1.924234ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53754]
I0516 00:39:49.474290  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-23: (2.213765ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53754]
I0516 00:39:49.477124  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-24: (2.353365ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53754]
I0516 00:39:49.479115  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-25: (1.504166ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53754]
I0516 00:39:49.481015  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-26: (1.407324ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53754]
I0516 00:39:49.482581  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-27: (1.151158ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53754]
I0516 00:39:49.486260  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-28: (3.127693ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53754]
I0516 00:39:49.488231  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-29: (1.466017ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53754]
I0516 00:39:49.489831  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-30: (1.138078ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53754]
I0516 00:39:49.491402  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-31: (1.171435ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53754]
I0516 00:39:49.493795  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-32: (1.994653ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53754]
I0516 00:39:49.496063  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-33: (1.804164ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53754]
I0516 00:39:49.497689  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-34: (1.161478ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53754]
I0516 00:39:49.499422  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-35: (1.217662ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53754]
I0516 00:39:49.501209  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-36: (1.161832ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53754]
I0516 00:39:49.508503  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-37: (6.345477ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53754]
I0516 00:39:49.510657  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-38: (1.582987ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53754]
I0516 00:39:49.513663  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-39: (2.474586ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53754]
I0516 00:39:49.515312  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-40: (1.252375ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53754]
I0516 00:39:49.516726  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-41: (1.086387ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53754]
I0516 00:39:49.520325  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-42: (3.184005ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53754]
I0516 00:39:49.521949  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-43: (1.168402ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53754]
I0516 00:39:49.523450  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-44: (1.075755ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53754]
I0516 00:39:49.524992  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-45: (1.108347ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53754]
I0516 00:39:49.528586  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-46: (3.129484ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53754]
I0516 00:39:49.530579  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-47: (1.497672ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53754]
I0516 00:39:49.533280  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-48: (2.318982ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53754]
I0516 00:39:49.534773  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-49: (1.056567ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53754]
I0516 00:39:49.535120  108888 preemption_test.go:598] Cleaning up all pods...
I0516 00:39:49.538497  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-0
I0516 00:39:49.538542  108888 scheduler.go:448] Skip schedule deleting pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-0
I0516 00:39:49.540629  108888 wrap.go:47] DELETE /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-0: (5.014428ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53754]
I0516 00:39:49.540698  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events: (1.866895ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53746]
I0516 00:39:49.548453  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-1
I0516 00:39:49.548540  108888 scheduler.go:448] Skip schedule deleting pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-1
I0516 00:39:49.551437  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events: (2.464301ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53746]
I0516 00:39:49.552247  108888 wrap.go:47] DELETE /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-1: (10.942309ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53754]
I0516 00:39:49.555738  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-2
I0516 00:39:49.555870  108888 scheduler.go:448] Skip schedule deleting pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-2
I0516 00:39:49.557734  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events: (1.503959ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53746]
I0516 00:39:49.557996  108888 wrap.go:47] DELETE /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-2: (5.201097ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53754]
I0516 00:39:49.561241  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-3
I0516 00:39:49.561281  108888 scheduler.go:448] Skip schedule deleting pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-3
I0516 00:39:49.569965  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events: (8.395929ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53746]
I0516 00:39:49.572029  108888 wrap.go:47] DELETE /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-3: (13.682313ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53754]
I0516 00:39:49.576461  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-4
I0516 00:39:49.576526  108888 scheduler.go:448] Skip schedule deleting pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-4
I0516 00:39:49.577300  108888 wrap.go:47] DELETE /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-4: (4.823865ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53754]
I0516 00:39:49.579476  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events: (2.241184ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53746]
I0516 00:39:49.581192  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-5
I0516 00:39:49.581231  108888 scheduler.go:448] Skip schedule deleting pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-5
I0516 00:39:49.582295  108888 wrap.go:47] DELETE /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-5: (4.686653ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53754]
I0516 00:39:49.587372  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events: (5.80061ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53746]
I0516 00:39:49.590195  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-6
I0516 00:39:49.590279  108888 scheduler.go:448] Skip schedule deleting pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-6
I0516 00:39:49.591052  108888 wrap.go:47] DELETE /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-6: (8.366033ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53754]
I0516 00:39:49.593994  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-7
I0516 00:39:49.594031  108888 scheduler.go:448] Skip schedule deleting pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-7
I0516 00:39:49.595007  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events: (4.343712ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53746]
I0516 00:39:49.597147  108888 wrap.go:47] DELETE /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-7: (5.768964ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53754]
I0516 00:39:49.597635  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events: (2.163953ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53746]
I0516 00:39:49.601805  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-8
I0516 00:39:49.601902  108888 scheduler.go:448] Skip schedule deleting pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-8
I0516 00:39:49.604685  108888 wrap.go:47] DELETE /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-8: (6.921157ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53754]
I0516 00:39:49.610067  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events: (4.960581ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53746]
I0516 00:39:49.612186  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-9
I0516 00:39:49.612955  108888 scheduler.go:448] Skip schedule deleting pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-9
I0516 00:39:49.613597  108888 wrap.go:47] DELETE /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-9: (8.253492ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53754]
I0516 00:39:49.615797  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events: (1.604348ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53746]
I0516 00:39:49.617471  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-10
I0516 00:39:49.617514  108888 scheduler.go:448] Skip schedule deleting pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-10
I0516 00:39:49.625454  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events: (7.630712ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53746]
I0516 00:39:49.627815  108888 wrap.go:47] DELETE /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-10: (13.688248ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53754]
I0516 00:39:49.631118  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-11
I0516 00:39:49.631225  108888 scheduler.go:448] Skip schedule deleting pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-11
I0516 00:39:49.633899  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events: (2.103255ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53746]
I0516 00:39:49.633972  108888 wrap.go:47] DELETE /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-11: (5.658923ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53754]
I0516 00:39:49.644895  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-12
I0516 00:39:49.644983  108888 scheduler.go:448] Skip schedule deleting pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-12
I0516 00:39:49.646461  108888 wrap.go:47] DELETE /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-12: (11.92758ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53746]
I0516 00:39:49.647154  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events: (1.74846ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53754]
I0516 00:39:49.649906  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-13
I0516 00:39:49.650013  108888 scheduler.go:448] Skip schedule deleting pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-13
I0516 00:39:49.660235  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events: (9.974248ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53754]
I0516 00:39:49.661548  108888 wrap.go:47] DELETE /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-13: (14.444568ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53746]
I0516 00:39:49.668116  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-14
I0516 00:39:49.668209  108888 scheduler.go:448] Skip schedule deleting pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-14
I0516 00:39:49.669491  108888 wrap.go:47] DELETE /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-14: (7.46536ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53746]
I0516 00:39:49.670819  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events: (2.22705ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53754]
I0516 00:39:49.673233  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-15
I0516 00:39:49.673268  108888 scheduler.go:448] Skip schedule deleting pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-15
I0516 00:39:49.674698  108888 wrap.go:47] DELETE /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-15: (4.721488ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53746]
I0516 00:39:49.675203  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events: (1.697231ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53754]
I0516 00:39:49.682693  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-16
I0516 00:39:49.682769  108888 scheduler.go:448] Skip schedule deleting pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-16
I0516 00:39:49.684089  108888 wrap.go:47] DELETE /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-16: (8.670447ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53746]
I0516 00:39:49.685005  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events: (1.862102ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53754]
I0516 00:39:49.687328  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-17
I0516 00:39:49.687395  108888 scheduler.go:448] Skip schedule deleting pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-17
I0516 00:39:49.688717  108888 wrap.go:47] DELETE /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-17: (4.202999ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53746]
I0516 00:39:49.689274  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events: (1.580165ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53754]
I0516 00:39:49.691681  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-18
I0516 00:39:49.691716  108888 scheduler.go:448] Skip schedule deleting pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-18
I0516 00:39:49.692851  108888 wrap.go:47] DELETE /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-18: (3.738962ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53746]
I0516 00:39:49.693517  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events: (1.533212ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53754]
I0516 00:39:49.696346  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-19
I0516 00:39:49.696391  108888 scheduler.go:448] Skip schedule deleting pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-19
I0516 00:39:49.700750  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events: (4.065979ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53754]
I0516 00:39:49.702420  108888 wrap.go:47] DELETE /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-19: (9.121517ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53746]
I0516 00:39:49.705972  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-20
I0516 00:39:49.706007  108888 scheduler.go:448] Skip schedule deleting pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-20
I0516 00:39:49.707451  108888 wrap.go:47] DELETE /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-20: (4.569759ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53746]
I0516 00:39:49.708018  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events: (1.732426ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53754]
I0516 00:39:49.711171  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-21
I0516 00:39:49.711203  108888 scheduler.go:448] Skip schedule deleting pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-21
I0516 00:39:49.721677  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events: (10.229121ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53754]
I0516 00:39:49.721744  108888 wrap.go:47] DELETE /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-21: (13.902427ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53746]
I0516 00:39:49.725348  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-22
I0516 00:39:49.725385  108888 scheduler.go:448] Skip schedule deleting pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-22
I0516 00:39:49.727417  108888 wrap.go:47] DELETE /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-22: (5.323607ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53746]
I0516 00:39:49.727729  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events: (2.073258ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53754]
I0516 00:39:49.730739  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-23
I0516 00:39:49.730883  108888 scheduler.go:448] Skip schedule deleting pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-23
I0516 00:39:49.734678  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events: (3.451813ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53754]
I0516 00:39:49.735549  108888 wrap.go:47] DELETE /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-23: (7.36049ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53746]
I0516 00:39:49.738844  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-24
I0516 00:39:49.738893  108888 scheduler.go:448] Skip schedule deleting pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-24
I0516 00:39:49.752223  108888 wrap.go:47] DELETE /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-24: (16.35378ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53746]
I0516 00:39:49.752224  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events: (13.083596ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53754]
I0516 00:39:49.757142  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-25
I0516 00:39:49.757336  108888 scheduler.go:448] Skip schedule deleting pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-25
I0516 00:39:49.762827  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events: (5.10589ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53746]
I0516 00:39:49.763612  108888 wrap.go:47] DELETE /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-25: (10.553941ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53754]
I0516 00:39:49.767292  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-26
I0516 00:39:49.767428  108888 scheduler.go:448] Skip schedule deleting pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-26
I0516 00:39:49.769037  108888 wrap.go:47] DELETE /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-26: (4.903078ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53754]
I0516 00:39:49.775771  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events: (7.84046ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53746]
I0516 00:39:49.777620  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-27
I0516 00:39:49.777674  108888 scheduler.go:448] Skip schedule deleting pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-27
I0516 00:39:49.779257  108888 wrap.go:47] DELETE /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-27: (9.75738ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53754]
I0516 00:39:49.779995  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events: (1.985146ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53746]
I0516 00:39:49.782753  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-28
I0516 00:39:49.782793  108888 scheduler.go:448] Skip schedule deleting pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-28
I0516 00:39:49.784523  108888 wrap.go:47] DELETE /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-28: (4.876366ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53754]
I0516 00:39:49.785497  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events: (2.424848ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53746]
I0516 00:39:49.795182  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-29
I0516 00:39:49.795287  108888 scheduler.go:448] Skip schedule deleting pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-29
I0516 00:39:49.797416  108888 wrap.go:47] DELETE /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-29: (12.252668ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53754]
I0516 00:39:49.798207  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events: (2.554774ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53746]
I0516 00:39:49.801402  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-30
I0516 00:39:49.801442  108888 scheduler.go:448] Skip schedule deleting pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-30
I0516 00:39:49.804312  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events: (2.577726ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53746]
I0516 00:39:49.805587  108888 wrap.go:47] DELETE /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-30: (7.449212ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53754]
I0516 00:39:49.810033  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-31
I0516 00:39:49.810081  108888 scheduler.go:448] Skip schedule deleting pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-31
I0516 00:39:49.814040  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events: (3.493971ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53746]
I0516 00:39:49.815896  108888 wrap.go:47] DELETE /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-31: (9.8623ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53754]
I0516 00:39:49.823458  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-32
I0516 00:39:49.823502  108888 scheduler.go:448] Skip schedule deleting pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-32
I0516 00:39:49.825387  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events: (1.518614ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53746]
I0516 00:39:49.825565  108888 wrap.go:47] DELETE /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-32: (9.257301ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53754]
I0516 00:39:49.829227  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-33
I0516 00:39:49.829269  108888 scheduler.go:448] Skip schedule deleting pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-33
I0516 00:39:49.830425  108888 wrap.go:47] DELETE /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-33: (4.165649ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53754]
I0516 00:39:49.837824  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events: (8.149239ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53746]
I0516 00:39:49.840186  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-34
I0516 00:39:49.840296  108888 scheduler.go:448] Skip schedule deleting pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-34
I0516 00:39:49.842086  108888 wrap.go:47] DELETE /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-34: (11.299639ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53754]
I0516 00:39:49.844762  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events: (4.082348ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53746]
I0516 00:39:49.846772  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-35
I0516 00:39:49.846862  108888 scheduler.go:448] Skip schedule deleting pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-35
I0516 00:39:49.848306  108888 wrap.go:47] DELETE /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-35: (5.196775ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53754]
I0516 00:39:49.850407  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events: (3.229809ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53746]
I0516 00:39:49.853177  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-36
I0516 00:39:49.853258  108888 scheduler.go:448] Skip schedule deleting pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-36
I0516 00:39:49.855120  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events: (1.5499ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53746]
I0516 00:39:49.856426  108888 wrap.go:47] DELETE /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-36: (6.147235ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53754]
I0516 00:39:49.859944  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-37
I0516 00:39:49.859986  108888 scheduler.go:448] Skip schedule deleting pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-37
I0516 00:39:49.861804  108888 wrap.go:47] DELETE /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-37: (4.928386ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53754]
I0516 00:39:49.862873  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events: (1.890246ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53746]
I0516 00:39:49.865644  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-38
I0516 00:39:49.865743  108888 scheduler.go:448] Skip schedule deleting pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-38
I0516 00:39:49.867400  108888 wrap.go:47] DELETE /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-38: (5.066463ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53754]
I0516 00:39:49.868019  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events: (1.838046ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53746]
I0516 00:39:49.870770  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-39
I0516 00:39:49.870813  108888 scheduler.go:448] Skip schedule deleting pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-39
I0516 00:39:49.872491  108888 wrap.go:47] DELETE /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-39: (4.491138ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53754]
I0516 00:39:49.874523  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events: (3.28418ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53746]
I0516 00:39:49.877490  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-40
I0516 00:39:49.877557  108888 scheduler.go:448] Skip schedule deleting pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-40
I0516 00:39:49.879701  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events: (1.880991ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53746]
I0516 00:39:49.880556  108888 wrap.go:47] DELETE /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-40: (7.737587ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53754]
I0516 00:39:49.884971  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-41
I0516 00:39:49.886582  108888 scheduler.go:448] Skip schedule deleting pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-41
I0516 00:39:49.889693  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events: (2.574287ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53746]
I0516 00:39:49.892190  108888 wrap.go:47] DELETE /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-41: (11.203676ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53754]
I0516 00:39:49.898042  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-42
I0516 00:39:49.898090  108888 scheduler.go:448] Skip schedule deleting pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-42
I0516 00:39:49.900074  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events: (1.496967ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53746]
I0516 00:39:49.900623  108888 wrap.go:47] DELETE /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-42: (6.322641ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53754]
I0516 00:39:49.905298  108888 wrap.go:47] DELETE /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-43: (4.330808ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53754]
I0516 00:39:49.909591  108888 wrap.go:47] DELETE /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-44: (3.93316ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53754]
I0516 00:39:49.911062  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-44
I0516 00:39:49.911143  108888 scheduler.go:448] Skip schedule deleting pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-44
I0516 00:39:49.913026  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-45
I0516 00:39:49.913059  108888 scheduler.go:448] Skip schedule deleting pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-45
I0516 00:39:49.915579  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events: (3.878151ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53746]
I0516 00:39:49.916672  108888 wrap.go:47] DELETE /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-45: (6.56146ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53754]
I0516 00:39:49.917865  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events: (1.528778ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53746]
I0516 00:39:49.922526  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-46
I0516 00:39:49.922597  108888 scheduler.go:448] Skip schedule deleting pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-46
I0516 00:39:49.924770  108888 wrap.go:47] DELETE /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-46: (6.540973ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53754]
I0516 00:39:49.926600  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events: (3.406853ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53746]
I0516 00:39:49.929588  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-47
I0516 00:39:49.929633  108888 scheduler.go:448] Skip schedule deleting pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-47
I0516 00:39:49.930410  108888 wrap.go:47] DELETE /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-47: (5.178671ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53754]
I0516 00:39:49.931668  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events: (1.710215ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53746]
I0516 00:39:49.938563  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-48
I0516 00:39:49.938649  108888 scheduler.go:448] Skip schedule deleting pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-48
I0516 00:39:49.950206  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events: (11.22612ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53746]
I0516 00:39:49.950219  108888 wrap.go:47] DELETE /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-48: (19.07031ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53754]
I0516 00:39:49.955024  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-49
I0516 00:39:49.955076  108888 scheduler.go:448] Skip schedule deleting pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-49
I0516 00:39:49.960668  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events: (4.937437ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53746]
I0516 00:39:49.963273  108888 wrap.go:47] DELETE /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-49: (12.515223ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53754]
I0516 00:39:49.965989  108888 wrap.go:47] DELETE /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/rpod-0: (1.905288ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53754]
I0516 00:39:49.977667  108888 wrap.go:47] DELETE /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/rpod-1: (11.130565ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53754]
I0516 00:39:49.985277  108888 wrap.go:47] DELETE /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/preemptor-pod: (6.909536ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53754]
I0516 00:39:49.988718  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-0: (1.151609ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53754]
I0516 00:39:49.991340  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-1: (1.041241ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53754]
I0516 00:39:49.995309  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-2: (2.122099ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53754]
I0516 00:39:49.998410  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-3: (1.39234ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53754]
I0516 00:39:50.002025  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-4: (1.883869ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53754]
I0516 00:39:50.005611  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-5: (1.788011ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53754]
I0516 00:39:50.009136  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-6: (1.815701ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53754]
I0516 00:39:50.016063  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-7: (4.782824ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53754]
I0516 00:39:50.020391  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-8: (1.701726ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53754]
I0516 00:39:50.023677  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-9: (1.315326ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53754]
I0516 00:39:50.026875  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-10: (1.58856ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53754]
I0516 00:39:50.029752  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-11: (1.230067ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53754]
I0516 00:39:50.032554  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-12: (1.348583ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53754]
I0516 00:39:50.035489  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-13: (1.276273ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53754]
I0516 00:39:50.039110  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-14: (2.111751ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53754]
I0516 00:39:50.041681  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-15: (971.756µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53754]
I0516 00:39:50.045633  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-16: (1.9471ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53754]
I0516 00:39:50.048555  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-17: (1.121332ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53754]
I0516 00:39:50.051590  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-18: (1.47076ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53754]
I0516 00:39:50.055257  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-19: (1.436672ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53754]
I0516 00:39:50.058151  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-20: (1.346975ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53754]
I0516 00:39:50.060814  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-21: (1.118462ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53754]
I0516 00:39:50.063734  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-22: (1.137081ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53754]
I0516 00:39:50.066586  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-23: (1.251385ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53754]
I0516 00:39:50.069578  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-24: (1.31435ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53754]
I0516 00:39:50.072400  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-25: (1.223074ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53754]
I0516 00:39:50.076569  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-26: (1.4684ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53754]
I0516 00:39:50.079383  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-27: (1.166172ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53754]
I0516 00:39:50.085221  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-28: (1.577675ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53754]
I0516 00:39:50.088616  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-29: (1.641007ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53754]
I0516 00:39:50.091627  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-30: (1.331235ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53754]
I0516 00:39:50.094528  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-31: (1.275765ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53754]
I0516 00:39:50.097875  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-32: (1.734624ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53754]
I0516 00:39:50.101007  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-33: (1.41945ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53754]
I0516 00:39:50.103845  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-34: (1.187015ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53754]
I0516 00:39:50.106599  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-35: (1.182526ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53754]
I0516 00:39:50.109538  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-36: (1.29222ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53754]
I0516 00:39:50.112823  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-37: (1.206092ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53754]
I0516 00:39:50.115480  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-38: (1.078365ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53754]
I0516 00:39:50.118385  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-39: (1.298082ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53754]
I0516 00:39:50.128349  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-40: (8.376769ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53754]
I0516 00:39:50.132100  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-41: (1.898856ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53754]
I0516 00:39:50.135647  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-42: (1.674422ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53754]
I0516 00:39:50.139123  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-43: (1.732632ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53754]
I0516 00:39:50.142711  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-44: (1.410356ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53754]
I0516 00:39:50.145838  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-45: (1.375811ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53754]
I0516 00:39:50.148843  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-46: (1.123732ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53754]
I0516 00:39:50.151571  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-47: (1.076483ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53754]
I0516 00:39:50.154214  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-48: (1.095549ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53754]
I0516 00:39:50.156778  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-49: (976.748µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53754]
I0516 00:39:50.159389  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/rpod-0: (1.074371ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53754]
I0516 00:39:50.162000  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/rpod-1: (1.027461ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53754]
I0516 00:39:50.164567  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/preemptor-pod: (1.062792ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53754]
I0516 00:39:50.167005  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods: (1.939667ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53754]
I0516 00:39:50.167427  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/rpod-0
I0516 00:39:50.167448  108888 scheduler.go:452] Attempting to schedule pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/rpod-0
I0516 00:39:50.167597  108888 scheduler_binder.go:256] AssumePodVolumes for pod "preemption-race6285e400-528e-4217-8497-a7dfe793aee9/rpod-0", node "node1"
I0516 00:39:50.167617  108888 scheduler_binder.go:266] AssumePodVolumes for pod "preemption-race6285e400-528e-4217-8497-a7dfe793aee9/rpod-0", node "node1": all PVCs bound and nothing to do
I0516 00:39:50.167671  108888 factory.go:711] Attempting to bind rpod-0 to node1
I0516 00:39:50.169156  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods: (1.635542ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53754]
I0516 00:39:50.169358  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/rpod-1
I0516 00:39:50.169380  108888 scheduler.go:452] Attempting to schedule pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/rpod-1
I0516 00:39:50.169499  108888 scheduler_binder.go:256] AssumePodVolumes for pod "preemption-race6285e400-528e-4217-8497-a7dfe793aee9/rpod-1", node "node1"
I0516 00:39:50.169522  108888 scheduler_binder.go:266] AssumePodVolumes for pod "preemption-race6285e400-528e-4217-8497-a7dfe793aee9/rpod-1", node "node1": all PVCs bound and nothing to do
I0516 00:39:50.169580  108888 factory.go:711] Attempting to bind rpod-1 to node1
I0516 00:39:50.169794  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/rpod-0/binding: (1.831092ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53746]
I0516 00:39:50.170066  108888 scheduler.go:589] pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/rpod-0 is bound successfully on node node1, 1 nodes evaluated, 1 nodes were found feasible
I0516 00:39:50.172118  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events: (1.774146ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53746]
I0516 00:39:50.172542  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/rpod-1/binding: (2.537475ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53754]
I0516 00:39:50.172738  108888 scheduler.go:589] pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/rpod-1 is bound successfully on node node1, 1 nodes evaluated, 1 nodes were found feasible
I0516 00:39:50.174570  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events: (1.579617ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53754]
I0516 00:39:50.175617  108888 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0516 00:39:50.177200  108888 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0516 00:39:50.177261  108888 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0516 00:39:50.177357  108888 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0516 00:39:50.178589  108888 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0516 00:39:50.275892  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/rpod-0: (1.715176ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53754]
I0516 00:39:50.378667  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/rpod-1: (1.886201ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53754]
I0516 00:39:50.378994  108888 preemption_test.go:561] Creating the preemptor pod...
I0516 00:39:50.381909  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods: (2.612122ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53754]
I0516 00:39:50.382180  108888 preemption_test.go:567] Creating additional pods...
I0516 00:39:50.382575  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/preemptor-pod
I0516 00:39:50.382595  108888 scheduler.go:452] Attempting to schedule pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/preemptor-pod
I0516 00:39:50.382708  108888 factory.go:649] Unable to schedule preemption-race6285e400-528e-4217-8497-a7dfe793aee9/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0516 00:39:50.382763  108888 factory.go:720] Updating pod condition for preemption-race6285e400-528e-4217-8497-a7dfe793aee9/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0516 00:39:50.388945  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/preemptor-pod: (2.673734ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53774]
I0516 00:39:50.389202  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events: (5.019702ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53776]
I0516 00:39:50.389306  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods: (3.938722ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53754]
I0516 00:39:50.389592  108888 wrap.go:47] PUT /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/preemptor-pod/status: (5.279805ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53746]
I0516 00:39:50.392153  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods: (2.141981ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53754]
I0516 00:39:50.392454  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/preemptor-pod: (2.0056ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53774]
I0516 00:39:50.392940  108888 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
E0516 00:39:50.393043  108888 utils.go:79] pod.Status.StartTime is nil for pod rpod-1. Should not reach here.
E0516 00:39:50.393051  108888 utils.go:79] pod.Status.StartTime is nil for pod rpod-0. Should not reach here.
I0516 00:39:50.396024  108888 wrap.go:47] PUT /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/preemptor-pod/status: (2.627078ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53774]
I0516 00:39:50.396025  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods: (3.299471ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53754]
I0516 00:39:50.399078  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods: (2.353324ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53754]
I0516 00:39:50.402054  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods: (2.223956ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53754]
I0516 00:39:50.405167  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods: (2.68751ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53754]
I0516 00:39:50.408652  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods: (2.787915ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53754]
I0516 00:39:50.411405  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods: (2.231025ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53754]
I0516 00:39:50.411663  108888 wrap.go:47] DELETE /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/rpod-0: (14.744774ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53774]
I0516 00:39:50.412112  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-0
I0516 00:39:50.412126  108888 scheduler.go:452] Attempting to schedule pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-0
I0516 00:39:50.412234  108888 factory.go:649] Unable to schedule preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-0: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0516 00:39:50.412275  108888 factory.go:720] Updating pod condition for preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-0 to (PodScheduled==False, Reason=Unschedulable)
I0516 00:39:50.416233  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-0: (2.259184ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53778]
I0516 00:39:50.416237  108888 wrap.go:47] PUT /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-0/status: (3.130128ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53776]
I0516 00:39:50.416442  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods: (4.644618ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53754]
E0516 00:39:50.417290  108888 factory.go:686] pod is already present in the activeQ
I0516 00:39:50.418741  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-0: (1.406843ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53776]
I0516 00:39:50.419036  108888 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0516 00:39:50.419075  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events: (4.62636ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53774]
I0516 00:39:50.419871  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods: (2.569871ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53754]
I0516 00:39:50.420953  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-1
I0516 00:39:50.420973  108888 scheduler.go:452] Attempting to schedule pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-1
I0516 00:39:50.421141  108888 factory.go:649] Unable to schedule preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-1: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0516 00:39:50.421210  108888 factory.go:720] Updating pod condition for preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-1 to (PodScheduled==False, Reason=Unschedulable)
I0516 00:39:50.422897  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events: (2.850795ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53776]
I0516 00:39:50.423355  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods: (2.979711ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53754]
I0516 00:39:50.425115  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-1: (3.031698ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53780]
E0516 00:39:50.425340  108888 factory.go:686] pod is already present in the activeQ
I0516 00:39:50.425608  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events: (1.647022ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53776]
I0516 00:39:50.426069  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods: (1.92146ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53754]
I0516 00:39:50.426450  108888 wrap.go:47] PUT /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-1/status: (3.960165ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53778]
I0516 00:39:50.428825  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-1: (1.862805ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53778]
I0516 00:39:50.429100  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods: (2.538545ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53776]
I0516 00:39:50.429285  108888 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0516 00:39:50.429942  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-2
I0516 00:39:50.429960  108888 scheduler.go:452] Attempting to schedule pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-2
I0516 00:39:50.430113  108888 factory.go:649] Unable to schedule preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-2: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0516 00:39:50.430154  108888 factory.go:720] Updating pod condition for preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-2 to (PodScheduled==False, Reason=Unschedulable)
I0516 00:39:50.433256  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-2: (1.948023ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53782]
I0516 00:39:50.433594  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods: (3.606077ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53778]
I0516 00:39:50.433797  108888 wrap.go:47] PUT /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-2/status: (3.114929ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53780]
I0516 00:39:50.437155  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-2: (2.975059ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53780]
I0516 00:39:50.437416  108888 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0516 00:39:50.437427  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods: (3.063299ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53778]
I0516 00:39:50.437155  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events: (4.270344ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53784]
I0516 00:39:50.438025  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-3
I0516 00:39:50.438224  108888 scheduler.go:452] Attempting to schedule pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-3
I0516 00:39:50.438348  108888 factory.go:649] Unable to schedule preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-3: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0516 00:39:50.438394  108888 factory.go:720] Updating pod condition for preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-3 to (PodScheduled==False, Reason=Unschedulable)
I0516 00:39:50.442463  108888 wrap.go:47] PUT /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-3/status: (3.331644ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53782]
I0516 00:39:50.442751  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods: (4.485587ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53780]
I0516 00:39:50.443062  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-3: (3.67109ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53786]
I0516 00:39:50.443451  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events: (3.626234ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53788]
I0516 00:39:50.446186  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-3: (1.893844ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53786]
I0516 00:39:50.446450  108888 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0516 00:39:50.446646  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-4
I0516 00:39:50.446668  108888 scheduler.go:452] Attempting to schedule pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-4
I0516 00:39:50.446803  108888 factory.go:649] Unable to schedule preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-4: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0516 00:39:50.446852  108888 factory.go:720] Updating pod condition for preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-4 to (PodScheduled==False, Reason=Unschedulable)
I0516 00:39:50.448473  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods: (3.417244ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53788]
I0516 00:39:50.450341  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-4: (2.216914ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53790]
I0516 00:39:50.450680  108888 wrap.go:47] PUT /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-4/status: (3.219173ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53786]
I0516 00:39:50.450730  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events: (3.105919ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53792]
I0516 00:39:50.453201  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods: (3.26015ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53788]
I0516 00:39:50.453238  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-4: (2.193765ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53786]
I0516 00:39:50.453636  108888 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0516 00:39:50.454666  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-5
I0516 00:39:50.454696  108888 scheduler.go:452] Attempting to schedule pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-5
I0516 00:39:50.454820  108888 factory.go:649] Unable to schedule preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-5: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0516 00:39:50.454870  108888 factory.go:720] Updating pod condition for preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-5 to (PodScheduled==False, Reason=Unschedulable)
I0516 00:39:50.458359  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events: (2.598443ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53796]
I0516 00:39:50.458657  108888 wrap.go:47] PUT /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-5/status: (3.437514ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53790]
I0516 00:39:50.458910  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods: (3.674406ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53792]
I0516 00:39:50.460323  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-5: (1.224138ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53790]
I0516 00:39:50.460624  108888 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0516 00:39:50.460838  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-6
I0516 00:39:50.460878  108888 scheduler.go:452] Attempting to schedule pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-6
I0516 00:39:50.461195  108888 factory.go:649] Unable to schedule preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-6: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0516 00:39:50.461297  108888 factory.go:720] Updating pod condition for preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-6 to (PodScheduled==False, Reason=Unschedulable)
I0516 00:39:50.461552  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-5: (5.463374ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53794]
I0516 00:39:50.461972  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods: (2.705474ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53792]
E0516 00:39:50.463351  108888 factory.go:686] pod is already present in the activeQ
I0516 00:39:50.464636  108888 wrap.go:47] PUT /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-6/status: (2.966935ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53790]
I0516 00:39:50.464780  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events: (2.398715ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53798]
I0516 00:39:50.464961  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-6: (2.494606ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53796]
I0516 00:39:50.465204  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods: (2.579598ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53792]
I0516 00:39:50.467188  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods: (1.447149ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53798]
I0516 00:39:50.467207  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-6: (1.16993ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53794]
I0516 00:39:50.467414  108888 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0516 00:39:50.468394  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-7
I0516 00:39:50.468411  108888 scheduler.go:452] Attempting to schedule pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-7
I0516 00:39:50.468503  108888 factory.go:649] Unable to schedule preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-7: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0516 00:39:50.468548  108888 factory.go:720] Updating pod condition for preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-7 to (PodScheduled==False, Reason=Unschedulable)
I0516 00:39:50.470569  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods: (2.794489ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53798]
I0516 00:39:50.471371  108888 wrap.go:47] PUT /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-7/status: (1.996672ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53794]
I0516 00:39:50.472001  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events: (2.43326ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53802]
I0516 00:39:50.472345  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-7: (2.008053ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53800]
E0516 00:39:50.472624  108888 factory.go:686] pod is already present in the activeQ
I0516 00:39:50.474028  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-7: (1.896848ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53794]
I0516 00:39:50.474156  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods: (2.643459ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53798]
I0516 00:39:50.475614  108888 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0516 00:39:50.476011  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-8
I0516 00:39:50.476057  108888 scheduler.go:452] Attempting to schedule pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-8
I0516 00:39:50.477082  108888 factory.go:649] Unable to schedule preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-8: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0516 00:39:50.477161  108888 factory.go:720] Updating pod condition for preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-8 to (PodScheduled==False, Reason=Unschedulable)
I0516 00:39:50.476369  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods: (1.56455ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53800]
I0516 00:39:50.479583  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events: (1.759781ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53804]
I0516 00:39:50.481425  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-8: (3.886149ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53802]
I0516 00:39:50.481436  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods: (2.519842ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53806]
I0516 00:39:50.483594  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods: (1.454913ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53802]
I0516 00:39:50.486057  108888 wrap.go:47] PUT /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-8/status: (5.991546ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53800]
I0516 00:39:50.486076  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods: (1.990143ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53802]
I0516 00:39:50.487954  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-8: (1.146527ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53800]
I0516 00:39:50.488657  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods: (2.061765ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53802]
I0516 00:39:50.489046  108888 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0516 00:39:50.489330  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-0
I0516 00:39:50.489367  108888 scheduler.go:452] Attempting to schedule pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-0
I0516 00:39:50.489451  108888 factory.go:649] Unable to schedule preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-0: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0516 00:39:50.489544  108888 factory.go:720] Updating pod condition for preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-0 to (PodScheduled==False, Reason=Unschedulable)
I0516 00:39:50.491166  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-0: (1.403687ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53804]
I0516 00:39:50.492005  108888 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0516 00:39:50.492333  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-9
I0516 00:39:50.492350  108888 scheduler.go:452] Attempting to schedule pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-9
I0516 00:39:50.492465  108888 factory.go:649] Unable to schedule preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-9: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0516 00:39:50.492519  108888 factory.go:720] Updating pod condition for preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-9 to (PodScheduled==False, Reason=Unschedulable)
I0516 00:39:50.492756  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-0: (1.960774ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53808]
I0516 00:39:50.496165  108888 wrap.go:47] PATCH /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events/ppod-0.159f02e7bc3c610d: (4.462393ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53810]
I0516 00:39:50.496572  108888 wrap.go:47] PUT /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-9/status: (3.465107ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53804]
I0516 00:39:50.496861  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-9: (3.714056ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53808]
I0516 00:39:50.501549  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-9: (3.002985ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53804]
I0516 00:39:50.501904  108888 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0516 00:39:50.502568  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events: (4.695741ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53810]
I0516 00:39:50.503144  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-10
I0516 00:39:50.503169  108888 scheduler.go:452] Attempting to schedule pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-10
I0516 00:39:50.503288  108888 factory.go:649] Unable to schedule preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-10: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0516 00:39:50.503326  108888 factory.go:720] Updating pod condition for preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-10 to (PodScheduled==False, Reason=Unschedulable)
I0516 00:39:50.508326  108888 wrap.go:47] PUT /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-10/status: (3.126954ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53804]
I0516 00:39:50.508633  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-10: (3.771322ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53808]
E0516 00:39:50.509372  108888 factory.go:686] pod is already present in the activeQ
I0516 00:39:50.511227  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-10: (1.533975ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53804]
I0516 00:39:50.511957  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events: (7.738238ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53812]
I0516 00:39:50.512605  108888 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0516 00:39:50.512892  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-1
I0516 00:39:50.512907  108888 scheduler.go:452] Attempting to schedule pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-1
I0516 00:39:50.513029  108888 factory.go:649] Unable to schedule preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-1: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0516 00:39:50.513066  108888 factory.go:720] Updating pod condition for preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-1 to (PodScheduled==False, Reason=Unschedulable)
I0516 00:39:50.514611  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-1: (1.326411ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53812]
I0516 00:39:50.514980  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-1: (1.218581ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53808]
I0516 00:39:50.515201  108888 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0516 00:39:50.515508  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-11
I0516 00:39:50.515522  108888 scheduler.go:452] Attempting to schedule pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-11
I0516 00:39:50.515650  108888 factory.go:649] Unable to schedule preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-11: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0516 00:39:50.515689  108888 factory.go:720] Updating pod condition for preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-11 to (PodScheduled==False, Reason=Unschedulable)
I0516 00:39:50.519042  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods: (29.944118ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53800]
I0516 00:39:50.518564  108888 wrap.go:47] PATCH /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events/ppod-1.159f02e7bcc488f7: (3.835338ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53814]
I0516 00:39:50.521722  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-11: (3.488427ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53812]
I0516 00:39:50.521900  108888 wrap.go:47] PUT /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-11/status: (3.453445ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53808]
I0516 00:39:50.524011  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-11: (1.497581ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53812]
I0516 00:39:50.524988  108888 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0516 00:39:50.525204  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-12
I0516 00:39:50.525649  108888 scheduler.go:452] Attempting to schedule pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-12
I0516 00:39:50.525839  108888 factory.go:649] Unable to schedule preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-12: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0516 00:39:50.525952  108888 factory.go:720] Updating pod condition for preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-12 to (PodScheduled==False, Reason=Unschedulable)
I0516 00:39:50.526024  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events: (1.65588ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53814]
I0516 00:39:50.527386  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-12: (1.009465ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53814]
I0516 00:39:50.527549  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods: (3.334128ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53800]
I0516 00:39:50.528226  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events: (1.412549ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53816]
I0516 00:39:50.528314  108888 wrap.go:47] PUT /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-12/status: (2.028155ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53812]
I0516 00:39:50.529585  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods: (1.484851ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53800]
I0516 00:39:50.529788  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-12: (951.39µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53816]
I0516 00:39:50.530057  108888 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0516 00:39:50.530233  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-13
I0516 00:39:50.530253  108888 scheduler.go:452] Attempting to schedule pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-13
I0516 00:39:50.530363  108888 factory.go:649] Unable to schedule preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-13: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0516 00:39:50.530421  108888 factory.go:720] Updating pod condition for preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-13 to (PodScheduled==False, Reason=Unschedulable)
I0516 00:39:50.538942  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events: (3.560505ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53820]
I0516 00:39:50.539499  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-13: (2.13834ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53818]
I0516 00:39:50.540672  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods: (10.254512ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53800]
I0516 00:39:50.542818  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods: (1.624282ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53818]
I0516 00:39:50.543848  108888 wrap.go:47] PUT /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-13/status: (6.049099ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53814]
I0516 00:39:50.545358  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-13: (1.11408ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53814]
I0516 00:39:50.545594  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods: (2.342389ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53818]
I0516 00:39:50.545614  108888 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0516 00:39:50.545800  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-14
I0516 00:39:50.545829  108888 scheduler.go:452] Attempting to schedule pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-14
I0516 00:39:50.545988  108888 factory.go:649] Unable to schedule preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-14: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0516 00:39:50.546040  108888 factory.go:720] Updating pod condition for preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-14 to (PodScheduled==False, Reason=Unschedulable)
I0516 00:39:50.547875  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-14: (1.168204ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53824]
I0516 00:39:50.548842  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods: (2.749914ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53814]
I0516 00:39:50.549242  108888 wrap.go:47] PUT /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-14/status: (2.982669ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53820]
I0516 00:39:50.549357  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events: (2.724661ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53822]
I0516 00:39:50.551153  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods: (1.883845ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53814]
I0516 00:39:50.551650  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-14: (2.002953ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53820]
I0516 00:39:50.551882  108888 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0516 00:39:50.552101  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-15
I0516 00:39:50.552119  108888 scheduler.go:452] Attempting to schedule pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-15
I0516 00:39:50.552275  108888 factory.go:649] Unable to schedule preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-15: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0516 00:39:50.552322  108888 factory.go:720] Updating pod condition for preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-15 to (PodScheduled==False, Reason=Unschedulable)
I0516 00:39:50.553268  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods: (1.678171ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53814]
I0516 00:39:50.555866  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events: (2.158054ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53814]
I0516 00:39:50.556071  108888 wrap.go:47] PUT /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-15/status: (3.163682ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53820]
I0516 00:39:50.557295  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods: (3.188497ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53826]
I0516 00:39:50.557879  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-15: (1.104589ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53820]
I0516 00:39:50.558114  108888 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0516 00:39:50.558329  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-16
I0516 00:39:50.558356  108888 scheduler.go:452] Attempting to schedule pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-16
I0516 00:39:50.558465  108888 factory.go:649] Unable to schedule preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-16: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0516 00:39:50.558523  108888 factory.go:720] Updating pod condition for preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-16 to (PodScheduled==False, Reason=Unschedulable)
I0516 00:39:50.559075  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-15: (6.183099ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53824]
E0516 00:39:50.559399  108888 factory.go:686] pod is already present in the activeQ
I0516 00:39:50.561085  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events: (1.816681ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53830]
I0516 00:39:50.561736  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-16: (2.505763ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53828]
I0516 00:39:50.563172  108888 wrap.go:47] PUT /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-16/status: (3.680297ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53820]
I0516 00:39:50.563606  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods: (4.093795ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53826]
I0516 00:39:50.565362  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-16: (1.673024ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53830]
I0516 00:39:50.566079  108888 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0516 00:39:50.566304  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-17
I0516 00:39:50.566318  108888 scheduler.go:452] Attempting to schedule pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-17
I0516 00:39:50.566420  108888 factory.go:649] Unable to schedule preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-17: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0516 00:39:50.566458  108888 factory.go:720] Updating pod condition for preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-17 to (PodScheduled==False, Reason=Unschedulable)
I0516 00:39:50.566464  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods: (2.24673ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53826]
I0516 00:39:50.571078  108888 wrap.go:47] PUT /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-17/status: (3.699482ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53830]
I0516 00:39:50.571384  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events: (3.555749ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53832]
I0516 00:39:50.571457  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-17: (4.40805ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53826]
I0516 00:39:50.571760  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods: (4.847846ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53824]
I0516 00:39:50.574092  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-17: (2.10975ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53830]
I0516 00:39:50.575277  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods: (2.172516ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53824]
I0516 00:39:50.575839  108888 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0516 00:39:50.576268  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-18
I0516 00:39:50.576287  108888 scheduler.go:452] Attempting to schedule pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-18
I0516 00:39:50.576377  108888 factory.go:649] Unable to schedule preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-18: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0516 00:39:50.576422  108888 factory.go:720] Updating pod condition for preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-18 to (PodScheduled==False, Reason=Unschedulable)
I0516 00:39:50.579079  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods: (2.592005ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53830]
I0516 00:39:50.581460  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods: (1.990259ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53830]
I0516 00:39:50.581879  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-18: (3.404434ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53834]
I0516 00:39:50.582010  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events: (4.636922ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53836]
I0516 00:39:50.582836  108888 wrap.go:47] PUT /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-18/status: (5.508933ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53832]
I0516 00:39:50.585248  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-18: (1.766614ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53830]
I0516 00:39:50.585249  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods: (1.628319ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53834]
I0516 00:39:50.585528  108888 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0516 00:39:50.585740  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-5
I0516 00:39:50.585757  108888 scheduler.go:452] Attempting to schedule pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-5
I0516 00:39:50.585834  108888 factory.go:649] Unable to schedule preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-5: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0516 00:39:50.585881  108888 factory.go:720] Updating pod condition for preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-5 to (PodScheduled==False, Reason=Unschedulable)
I0516 00:39:50.589818  108888 wrap.go:47] PATCH /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events/ppod-5.159f02e7bec6611b: (2.922748ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53840]
I0516 00:39:50.590403  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-5: (3.718884ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53830]
I0516 00:39:50.590641  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-5: (3.344358ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53838]
I0516 00:39:50.590797  108888 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0516 00:39:50.590992  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-19
I0516 00:39:50.591007  108888 scheduler.go:452] Attempting to schedule pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-19
I0516 00:39:50.591090  108888 factory.go:649] Unable to schedule preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-19: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0516 00:39:50.591127  108888 factory.go:720] Updating pod condition for preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-19 to (PodScheduled==False, Reason=Unschedulable)
I0516 00:39:50.591253  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods: (5.433836ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53834]
I0516 00:39:50.593171  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events: (1.371291ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53842]
I0516 00:39:50.594372  108888 wrap.go:47] PUT /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-19/status: (3.006092ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53830]
I0516 00:39:50.594664  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-19: (3.219374ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53840]
E0516 00:39:50.595339  108888 factory.go:686] pod is already present in the activeQ
I0516 00:39:50.597123  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-19: (1.498699ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53842]
I0516 00:39:50.597455  108888 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0516 00:39:50.597615  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-20
I0516 00:39:50.597658  108888 scheduler.go:452] Attempting to schedule pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-20
I0516 00:39:50.597753  108888 factory.go:649] Unable to schedule preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-20: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0516 00:39:50.597798  108888 factory.go:720] Updating pod condition for preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-20 to (PodScheduled==False, Reason=Unschedulable)
I0516 00:39:50.598987  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods: (3.209057ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53834]
I0516 00:39:50.600156  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-20: (1.332819ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53844]
I0516 00:39:50.600737  108888 wrap.go:47] PUT /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-20/status: (2.650529ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53842]
I0516 00:39:50.602850  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods: (2.862652ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53848]
I0516 00:39:50.603501  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-20: (1.051166ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53842]
I0516 00:39:50.603736  108888 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0516 00:39:50.604014  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-21
I0516 00:39:50.604061  108888 scheduler.go:452] Attempting to schedule pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-21
I0516 00:39:50.604192  108888 factory.go:649] Unable to schedule preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-21: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0516 00:39:50.604264  108888 factory.go:720] Updating pod condition for preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-21 to (PodScheduled==False, Reason=Unschedulable)
I0516 00:39:50.605977  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods: (2.442235ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53846]
I0516 00:39:50.606686  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events: (7.371063ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53834]
I0516 00:39:50.608132  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-21: (2.227322ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53844]
I0516 00:39:50.608615  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events: (1.421219ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53834]
E0516 00:39:50.608765  108888 factory.go:686] pod is already present in the activeQ
I0516 00:39:50.609112  108888 wrap.go:47] PUT /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-21/status: (4.508008ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53842]
I0516 00:39:50.610947  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-21: (1.39674ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53844]
I0516 00:39:50.611299  108888 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0516 00:39:50.611490  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-22
I0516 00:39:50.611518  108888 scheduler.go:452] Attempting to schedule pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-22
I0516 00:39:50.611641  108888 factory.go:649] Unable to schedule preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-22: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0516 00:39:50.611689  108888 factory.go:720] Updating pod condition for preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-22 to (PodScheduled==False, Reason=Unschedulable)
I0516 00:39:50.613575  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-22: (1.60194ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53846]
I0516 00:39:50.614374  108888 wrap.go:47] PUT /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-22/status: (2.407076ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53844]
I0516 00:39:50.614476  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events: (2.14256ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53850]
I0516 00:39:50.616004  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-22: (1.183876ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53844]
I0516 00:39:50.616249  108888 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0516 00:39:50.616435  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-7
I0516 00:39:50.616451  108888 scheduler.go:452] Attempting to schedule pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-7
I0516 00:39:50.616544  108888 factory.go:649] Unable to schedule preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-7: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0516 00:39:50.616587  108888 factory.go:720] Updating pod condition for preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-7 to (PodScheduled==False, Reason=Unschedulable)
I0516 00:39:50.618786  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-7: (2.006689ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53844]
I0516 00:39:50.618787  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-7: (1.748216ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53846]
I0516 00:39:50.619166  108888 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0516 00:39:50.619358  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-23
I0516 00:39:50.619378  108888 scheduler.go:452] Attempting to schedule pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-23
I0516 00:39:50.619550  108888 factory.go:649] Unable to schedule preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-23: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0516 00:39:50.619601  108888 factory.go:720] Updating pod condition for preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-23 to (PodScheduled==False, Reason=Unschedulable)
I0516 00:39:50.619564  108888 wrap.go:47] PATCH /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events/ppod-7.159f02e7bf96feaa: (1.814847ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53852]
I0516 00:39:50.621182  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-23: (1.109234ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53846]
I0516 00:39:50.622174  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events: (1.919043ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53852]
I0516 00:39:50.622184  108888 wrap.go:47] PUT /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-23/status: (2.345727ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53844]
I0516 00:39:50.623779  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-23: (1.157517ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53852]
I0516 00:39:50.624081  108888 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0516 00:39:50.624224  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-24
I0516 00:39:50.624243  108888 scheduler.go:452] Attempting to schedule pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-24
I0516 00:39:50.624353  108888 factory.go:649] Unable to schedule preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-24: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0516 00:39:50.624399  108888 factory.go:720] Updating pod condition for preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-24 to (PodScheduled==False, Reason=Unschedulable)
I0516 00:39:50.626090  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-24: (1.437777ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53846]
I0516 00:39:50.626858  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events: (1.876411ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53854]
I0516 00:39:50.626861  108888 wrap.go:47] PUT /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-24/status: (2.242716ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53852]
I0516 00:39:50.628626  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-24: (1.138405ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53854]
I0516 00:39:50.628876  108888 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0516 00:39:50.629113  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-25
I0516 00:39:50.629131  108888 scheduler.go:452] Attempting to schedule pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-25
I0516 00:39:50.629549  108888 factory.go:649] Unable to schedule preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-25: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0516 00:39:50.629598  108888 factory.go:720] Updating pod condition for preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-25 to (PodScheduled==False, Reason=Unschedulable)
I0516 00:39:50.631241  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-25: (1.248655ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53846]
I0516 00:39:50.632112  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events: (1.846094ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53856]
I0516 00:39:50.633030  108888 wrap.go:47] PUT /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-25/status: (3.20127ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53854]
I0516 00:39:50.634724  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-25: (1.066069ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53856]
I0516 00:39:50.635122  108888 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0516 00:39:50.635338  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-26
I0516 00:39:50.635362  108888 scheduler.go:452] Attempting to schedule pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-26
I0516 00:39:50.635461  108888 factory.go:649] Unable to schedule preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-26: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0516 00:39:50.635513  108888 factory.go:720] Updating pod condition for preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-26 to (PodScheduled==False, Reason=Unschedulable)
I0516 00:39:50.637673  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-26: (1.964433ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53856]
I0516 00:39:50.638337  108888 wrap.go:47] PUT /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-26/status: (2.592759ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53846]
I0516 00:39:50.638477  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events: (2.249127ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53858]
I0516 00:39:50.640402  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-26: (1.181261ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53846]
I0516 00:39:50.640701  108888 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0516 00:39:50.640899  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-27
I0516 00:39:50.640915  108888 scheduler.go:452] Attempting to schedule pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-27
I0516 00:39:50.641051  108888 factory.go:649] Unable to schedule preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-27: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0516 00:39:50.641107  108888 factory.go:720] Updating pod condition for preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-27 to (PodScheduled==False, Reason=Unschedulable)
I0516 00:39:50.655430  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-27: (14.01878ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53856]
I0516 00:39:50.666574  108888 wrap.go:47] PUT /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-27/status: (25.15991ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53858]
I0516 00:39:50.679266  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events: (37.526909ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53860]
I0516 00:39:50.689909  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-27: (14.10967ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53858]
I0516 00:39:50.690933  108888 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0516 00:39:50.691211  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-28
I0516 00:39:50.691263  108888 scheduler.go:452] Attempting to schedule pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-28
I0516 00:39:50.691502  108888 factory.go:649] Unable to schedule preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-28: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0516 00:39:50.691577  108888 factory.go:720] Updating pod condition for preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-28 to (PodScheduled==False, Reason=Unschedulable)
I0516 00:39:50.695033  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-28: (2.112904ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53856]
I0516 00:39:50.697716  108888 wrap.go:47] PUT /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-28/status: (5.674798ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53860]
I0516 00:39:50.701870  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events: (4.816671ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53862]
I0516 00:39:50.704783  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-28: (2.470768ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53860]
I0516 00:39:50.706043  108888 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0516 00:39:50.706299  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-10
I0516 00:39:50.706316  108888 scheduler.go:452] Attempting to schedule pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-10
I0516 00:39:50.706494  108888 factory.go:649] Unable to schedule preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-10: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0516 00:39:50.706565  108888 factory.go:720] Updating pod condition for preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-10 to (PodScheduled==False, Reason=Unschedulable)
I0516 00:39:50.713424  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-10: (5.95196ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53864]
I0516 00:39:50.713803  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/preemptor-pod: (7.045187ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53860]
I0516 00:39:50.713967  108888 wrap.go:47] PATCH /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events/ppod-10.159f02e7c1a9d932: (6.031816ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53866]
I0516 00:39:50.714332  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-10: (7.489516ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53856]
I0516 00:39:50.714652  108888 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0516 00:39:50.714872  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-29
I0516 00:39:50.714888  108888 scheduler.go:452] Attempting to schedule pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-29
I0516 00:39:50.715035  108888 factory.go:649] Unable to schedule preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-29: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0516 00:39:50.715094  108888 factory.go:720] Updating pod condition for preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-29 to (PodScheduled==False, Reason=Unschedulable)
I0516 00:39:50.717632  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-29: (1.945872ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53860]
I0516 00:39:50.720439  108888 wrap.go:47] PUT /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-29/status: (4.709833ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53864]
I0516 00:39:50.720776  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events: (4.775228ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53868]
I0516 00:39:50.723989  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-29: (1.485059ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53864]
I0516 00:39:50.724468  108888 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0516 00:39:50.727963  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-30
I0516 00:39:50.728019  108888 scheduler.go:452] Attempting to schedule pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-30
I0516 00:39:50.728264  108888 factory.go:649] Unable to schedule preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-30: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0516 00:39:50.728349  108888 factory.go:720] Updating pod condition for preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-30 to (PodScheduled==False, Reason=Unschedulable)
I0516 00:39:50.731051  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-30: (1.881636ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53860]
I0516 00:39:50.732229  108888 wrap.go:47] PUT /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-30/status: (3.184299ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53864]
I0516 00:39:50.735186  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events: (2.617343ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53860]
I0516 00:39:50.738477  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-30: (4.682902ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53864]
I0516 00:39:50.738866  108888 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0516 00:39:50.739093  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-31
I0516 00:39:50.739112  108888 scheduler.go:452] Attempting to schedule pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-31
I0516 00:39:50.739277  108888 factory.go:649] Unable to schedule preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-31: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0516 00:39:50.739343  108888 factory.go:720] Updating pod condition for preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-31 to (PodScheduled==False, Reason=Unschedulable)
I0516 00:39:50.741906  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-31: (2.022104ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53860]
I0516 00:39:50.743071  108888 wrap.go:47] PUT /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-31/status: (2.839493ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53864]
I0516 00:39:50.743646  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events: (2.735611ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53870]
I0516 00:39:50.747252  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-31: (3.357901ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53864]
I0516 00:39:50.747634  108888 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0516 00:39:50.757019  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-32
I0516 00:39:50.757077  108888 scheduler.go:452] Attempting to schedule pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-32
I0516 00:39:50.757368  108888 factory.go:649] Unable to schedule preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-32: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0516 00:39:50.757444  108888 factory.go:720] Updating pod condition for preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-32 to (PodScheduled==False, Reason=Unschedulable)
I0516 00:39:50.760485  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-32: (2.230645ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53860]
I0516 00:39:50.761876  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events: (2.783102ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53872]
I0516 00:39:50.768967  108888 wrap.go:47] PUT /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-32/status: (10.875868ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53870]
I0516 00:39:50.771475  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-32: (1.738223ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53872]
I0516 00:39:50.772013  108888 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0516 00:39:50.774186  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-33
I0516 00:39:50.774212  108888 scheduler.go:452] Attempting to schedule pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-33
I0516 00:39:50.774367  108888 factory.go:649] Unable to schedule preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-33: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0516 00:39:50.774435  108888 factory.go:720] Updating pod condition for preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-33 to (PodScheduled==False, Reason=Unschedulable)
I0516 00:39:50.777203  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-33: (1.97084ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53860]
I0516 00:39:50.778938  108888 wrap.go:47] PUT /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-33/status: (4.042965ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53872]
I0516 00:39:50.779025  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events: (3.411365ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53874]
I0516 00:39:50.781385  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-33: (1.602924ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53874]
I0516 00:39:50.783092  108888 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0516 00:39:50.789767  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-34
I0516 00:39:50.789811  108888 scheduler.go:452] Attempting to schedule pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-34
I0516 00:39:50.790976  108888 factory.go:649] Unable to schedule preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-34: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0516 00:39:50.791114  108888 factory.go:720] Updating pod condition for preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-34 to (PodScheduled==False, Reason=Unschedulable)
I0516 00:39:50.796942  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events: (4.922294ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53860]
I0516 00:39:50.797522  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-34: (5.722387ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53874]
I0516 00:39:50.799594  108888 wrap.go:47] PUT /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-34/status: (2.817379ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53876]
I0516 00:39:50.801854  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-34: (1.649197ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53874]
I0516 00:39:50.802115  108888 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0516 00:39:50.802325  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-35
I0516 00:39:50.802348  108888 scheduler.go:452] Attempting to schedule pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-35
I0516 00:39:50.802442  108888 factory.go:649] Unable to schedule preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-35: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0516 00:39:50.802492  108888 factory.go:720] Updating pod condition for preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-35 to (PodScheduled==False, Reason=Unschedulable)
I0516 00:39:50.804960  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events: (1.746674ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53878]
I0516 00:39:50.805246  108888 wrap.go:47] PUT /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-35/status: (2.517674ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53874]
I0516 00:39:50.805464  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-35: (2.322031ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53860]
I0516 00:39:50.807482  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-35: (1.408379ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53874]
I0516 00:39:50.807801  108888 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0516 00:39:50.807988  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-36
I0516 00:39:50.808010  108888 scheduler.go:452] Attempting to schedule pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-36
I0516 00:39:50.808189  108888 factory.go:649] Unable to schedule preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-36: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0516 00:39:50.808242  108888 factory.go:720] Updating pod condition for preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-36 to (PodScheduled==False, Reason=Unschedulable)
I0516 00:39:50.810988  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-36: (2.496195ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53878]
I0516 00:39:50.812624  108888 wrap.go:47] PUT /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-36/status: (4.123974ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53860]
I0516 00:39:50.813186  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events: (1.708977ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53880]
I0516 00:39:50.814582  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-36: (1.113938ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53860]
I0516 00:39:50.815053  108888 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0516 00:39:50.815233  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-37
I0516 00:39:50.815251  108888 scheduler.go:452] Attempting to schedule pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-37
I0516 00:39:50.815339  108888 factory.go:649] Unable to schedule preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-37: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0516 00:39:50.815419  108888 factory.go:720] Updating pod condition for preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-37 to (PodScheduled==False, Reason=Unschedulable)
I0516 00:39:50.816284  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/preemptor-pod: (1.384084ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53880]
I0516 00:39:50.818106  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-37: (2.060328ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53878]
I0516 00:39:50.819090  108888 wrap.go:47] PUT /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-37/status: (3.435444ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53860]
I0516 00:39:50.820969  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events: (4.338902ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53880]
I0516 00:39:50.823293  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-37: (2.663373ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53860]
I0516 00:39:50.823625  108888 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0516 00:39:50.823808  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-15
I0516 00:39:50.823825  108888 scheduler.go:452] Attempting to schedule pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-15
I0516 00:39:50.823968  108888 factory.go:649] Unable to schedule preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-15: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0516 00:39:50.824020  108888 factory.go:720] Updating pod condition for preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-15 to (PodScheduled==False, Reason=Unschedulable)
I0516 00:39:50.825381  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-15: (1.163314ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53882]
I0516 00:39:50.825521  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-15: (1.265738ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53878]
I0516 00:39:50.825683  108888 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0516 00:39:50.825852  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-38
I0516 00:39:50.825870  108888 scheduler.go:452] Attempting to schedule pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-38
I0516 00:39:50.825977  108888 factory.go:649] Unable to schedule preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-38: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0516 00:39:50.826017  108888 factory.go:720] Updating pod condition for preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-38 to (PodScheduled==False, Reason=Unschedulable)
I0516 00:39:50.826592  108888 wrap.go:47] PATCH /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events/ppod-15.159f02e7c4957357: (1.794394ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53884]
I0516 00:39:50.828163  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-38: (1.798386ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53882]
I0516 00:39:50.828244  108888 wrap.go:47] PUT /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-38/status: (1.810378ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53878]
I0516 00:39:50.829124  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events: (2.022423ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53884]
I0516 00:39:50.830233  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-38: (1.16632ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53878]
I0516 00:39:50.830578  108888 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0516 00:39:50.830746  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-39
I0516 00:39:50.830767  108888 scheduler.go:452] Attempting to schedule pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-39
I0516 00:39:50.830884  108888 factory.go:649] Unable to schedule preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-39: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0516 00:39:50.830951  108888 factory.go:720] Updating pod condition for preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-39 to (PodScheduled==False, Reason=Unschedulable)
I0516 00:39:50.832411  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-39: (1.131639ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53882]
I0516 00:39:50.833236  108888 wrap.go:47] PUT /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-39/status: (2.045308ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53884]
I0516 00:39:50.833248  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events: (1.672887ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53886]
I0516 00:39:50.834865  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-39: (1.021479ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53886]
I0516 00:39:50.835151  108888 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0516 00:39:50.835308  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-40
I0516 00:39:50.835324  108888 scheduler.go:452] Attempting to schedule pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-40
I0516 00:39:50.835402  108888 factory.go:649] Unable to schedule preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-40: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0516 00:39:50.835444  108888 factory.go:720] Updating pod condition for preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-40 to (PodScheduled==False, Reason=Unschedulable)
I0516 00:39:50.837636  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events: (1.711025ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53882]
I0516 00:39:50.837656  108888 wrap.go:47] PUT /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-40/status: (2.008371ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53886]
I0516 00:39:50.837800  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-40: (1.524276ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53888]
I0516 00:39:50.839114  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-40: (1.067605ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53886]
I0516 00:39:50.839396  108888 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0516 00:39:50.839613  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-41
I0516 00:39:50.839631  108888 scheduler.go:452] Attempting to schedule pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-41
I0516 00:39:50.839732  108888 factory.go:649] Unable to schedule preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-41: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0516 00:39:50.839777  108888 factory.go:720] Updating pod condition for preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-41 to (PodScheduled==False, Reason=Unschedulable)
I0516 00:39:50.841061  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-41: (1.045668ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53882]
I0516 00:39:50.841994  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events: (1.532187ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53890]
I0516 00:39:50.842484  108888 wrap.go:47] PUT /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-41/status: (2.467821ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53888]
I0516 00:39:50.844205  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-41: (1.149861ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53890]
I0516 00:39:50.844512  108888 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0516 00:39:50.844719  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-42
I0516 00:39:50.844737  108888 scheduler.go:452] Attempting to schedule pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-42
I0516 00:39:50.844834  108888 factory.go:649] Unable to schedule preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-42: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0516 00:39:50.844879  108888 factory.go:720] Updating pod condition for preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-42 to (PodScheduled==False, Reason=Unschedulable)
I0516 00:39:50.846568  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-42: (1.364409ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53890]
I0516 00:39:50.847095  108888 wrap.go:47] PUT /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-42/status: (1.90705ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53882]
I0516 00:39:50.847456  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events: (1.396512ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53892]
I0516 00:39:50.849148  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-42: (982.12µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53882]
I0516 00:39:50.849469  108888 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0516 00:39:50.849650  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-43
I0516 00:39:50.849696  108888 scheduler.go:452] Attempting to schedule pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-43
I0516 00:39:50.849885  108888 factory.go:649] Unable to schedule preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-43: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0516 00:39:50.849954  108888 factory.go:720] Updating pod condition for preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-43 to (PodScheduled==False, Reason=Unschedulable)
I0516 00:39:50.851318  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-43: (1.163595ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53882]
I0516 00:39:50.852024  108888 wrap.go:47] PUT /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-43/status: (1.650159ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53894]
I0516 00:39:50.853408  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-43: (1.061745ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53894]
I0516 00:39:50.853798  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events: (3.599722ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53890]
I0516 00:39:50.854108  108888 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0516 00:39:50.854267  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-44
I0516 00:39:50.854285  108888 scheduler.go:452] Attempting to schedule pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-44
I0516 00:39:50.854382  108888 factory.go:649] Unable to schedule preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-44: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0516 00:39:50.854420  108888 factory.go:720] Updating pod condition for preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-44 to (PodScheduled==False, Reason=Unschedulable)
I0516 00:39:50.855988  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-44: (1.15881ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53882]
I0516 00:39:50.856551  108888 wrap.go:47] PUT /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-44/status: (1.90076ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53894]
I0516 00:39:50.857306  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events: (1.667521ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53896]
I0516 00:39:50.858000  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-44: (919.753µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53894]
I0516 00:39:50.858278  108888 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0516 00:39:50.858443  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-45
I0516 00:39:50.858458  108888 scheduler.go:452] Attempting to schedule pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-45
I0516 00:39:50.858571  108888 factory.go:649] Unable to schedule preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-45: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0516 00:39:50.858616  108888 factory.go:720] Updating pod condition for preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-45 to (PodScheduled==False, Reason=Unschedulable)
I0516 00:39:50.859700  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-45: (859.768µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53882]
I0516 00:39:50.860242  108888 wrap.go:47] PUT /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-45/status: (1.418211ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53896]
I0516 00:39:50.861033  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events: (1.934069ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53898]
I0516 00:39:50.862282  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-45: (1.450116ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53896]
I0516 00:39:50.862569  108888 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0516 00:39:50.862734  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-46
I0516 00:39:50.862752  108888 scheduler.go:452] Attempting to schedule pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-46
I0516 00:39:50.862965  108888 factory.go:649] Unable to schedule preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-46: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0516 00:39:50.863080  108888 factory.go:720] Updating pod condition for preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-46 to (PodScheduled==False, Reason=Unschedulable)
I0516 00:39:50.865524  108888 wrap.go:47] PUT /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-46/status: (2.192759ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53898]
I0516 00:39:50.865546  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events: (1.586553ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53900]
I0516 00:39:50.866485  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-46: (3.049703ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53882]
E0516 00:39:50.866765  108888 factory.go:686] pod is already present in the activeQ
I0516 00:39:50.867095  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-46: (995.102µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53898]
I0516 00:39:50.867399  108888 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0516 00:39:50.867615  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-19
I0516 00:39:50.867635  108888 scheduler.go:452] Attempting to schedule pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-19
I0516 00:39:50.867739  108888 factory.go:649] Unable to schedule preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-19: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0516 00:39:50.867786  108888 factory.go:720] Updating pod condition for preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-19 to (PodScheduled==False, Reason=Unschedulable)
I0516 00:39:50.870391  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-19: (1.9584ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53900]
I0516 00:39:50.870396  108888 wrap.go:47] PATCH /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events/ppod-19.159f02e7c6e59651: (1.901919ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53902]
I0516 00:39:50.870805  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-19: (2.333772ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53882]
I0516 00:39:50.871095  108888 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0516 00:39:50.871321  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-47
I0516 00:39:50.871341  108888 scheduler.go:452] Attempting to schedule pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-47
I0516 00:39:50.871436  108888 factory.go:649] Unable to schedule preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-47: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0516 00:39:50.871477  108888 factory.go:720] Updating pod condition for preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-47 to (PodScheduled==False, Reason=Unschedulable)
I0516 00:39:50.873086  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-47: (1.053466ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53900]
I0516 00:39:50.873652  108888 wrap.go:47] PUT /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-47/status: (1.932764ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53902]
I0516 00:39:50.875737  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events: (2.695085ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53904]
I0516 00:39:50.875766  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-47: (1.745297ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53902]
I0516 00:39:50.876390  108888 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0516 00:39:50.876674  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-48
I0516 00:39:50.876694  108888 scheduler.go:452] Attempting to schedule pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-48
I0516 00:39:50.876807  108888 factory.go:649] Unable to schedule preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-48: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0516 00:39:50.876866  108888 factory.go:720] Updating pod condition for preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-48 to (PodScheduled==False, Reason=Unschedulable)
I0516 00:39:50.878789  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events: (1.578326ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53900]
I0516 00:39:50.879074  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-48: (1.299913ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53906]
I0516 00:39:50.879383  108888 wrap.go:47] PUT /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-48/status: (2.20588ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53904]
I0516 00:39:50.880823  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-48: (1.004671ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53906]
I0516 00:39:50.881184  108888 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0516 00:39:50.881340  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-49
I0516 00:39:50.881358  108888 scheduler.go:452] Attempting to schedule pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-49
I0516 00:39:50.881466  108888 factory.go:649] Unable to schedule preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-49: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0516 00:39:50.881514  108888 factory.go:720] Updating pod condition for preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-49 to (PodScheduled==False, Reason=Unschedulable)
I0516 00:39:50.882825  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-49: (1.006ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53900]
I0516 00:39:50.883364  108888 wrap.go:47] PUT /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-49/status: (1.605887ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53906]
I0516 00:39:50.884301  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events: (2.043072ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53908]
I0516 00:39:50.884698  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-49: (938.395µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53906]
I0516 00:39:50.885004  108888 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0516 00:39:50.885162  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-21
I0516 00:39:50.885177  108888 scheduler.go:452] Attempting to schedule pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-21
I0516 00:39:50.885277  108888 factory.go:649] Unable to schedule preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-21: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0516 00:39:50.885315  108888 factory.go:720] Updating pod condition for preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-21 to (PodScheduled==False, Reason=Unschedulable)
I0516 00:39:50.886553  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-21: (933.229µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53900]
I0516 00:39:50.886610  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-21: (1.135928ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53908]
I0516 00:39:50.886862  108888 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0516 00:39:50.887070  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-46
I0516 00:39:50.887090  108888 scheduler.go:452] Attempting to schedule pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-46
I0516 00:39:50.887187  108888 factory.go:649] Unable to schedule preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-46: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0516 00:39:50.887228  108888 factory.go:720] Updating pod condition for preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-46 to (PodScheduled==False, Reason=Unschedulable)
I0516 00:39:50.888481  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-46: (1.001228ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53900]
I0516 00:39:50.888561  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-46: (1.186465ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53908]
I0516 00:39:50.888610  108888 wrap.go:47] PATCH /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events/ppod-21.159f02e7c7ade0b2: (2.245152ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53910]
I0516 00:39:50.888825  108888 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0516 00:39:50.891005  108888 wrap.go:47] PATCH /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events/ppod-46.159f02e7d71b342d: (1.682552ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53908]
I0516 00:39:50.916704  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/preemptor-pod: (1.918221ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53908]
I0516 00:39:51.016786  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/preemptor-pod: (1.939958ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53908]
I0516 00:39:51.116602  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/preemptor-pod: (1.847932ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53908]
I0516 00:39:51.175900  108888 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0516 00:39:51.177360  108888 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0516 00:39:51.177449  108888 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0516 00:39:51.177490  108888 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0516 00:39:51.178748  108888 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0516 00:39:51.216664  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/preemptor-pod: (1.906207ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53908]
I0516 00:39:51.316551  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/preemptor-pod: (1.78578ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53908]
I0516 00:39:51.417644  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/preemptor-pod: (2.322187ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53908]
I0516 00:39:51.516685  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/preemptor-pod: (1.898908ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53908]
I0516 00:39:51.617099  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/preemptor-pod: (2.265724ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53908]
I0516 00:39:51.716625  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/preemptor-pod: (1.837446ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53908]
I0516 00:39:51.816618  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/preemptor-pod: (1.857949ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53908]
I0516 00:39:51.916524  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/preemptor-pod: (1.785292ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53908]
E0516 00:39:51.954792  108888 factory.go:695] Error getting pod permit-pluginfbb36f84-3285-4ca3-b4b1-f830b43b5b8a/test-pod for retry: Get http://127.0.0.1:39511/api/v1/namespaces/permit-pluginfbb36f84-3285-4ca3-b4b1-f830b43b5b8a/pods/test-pod: dial tcp 127.0.0.1:39511: connect: connection refused; retrying...
I0516 00:39:52.016621  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/preemptor-pod: (1.859516ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53908]
I0516 00:39:52.072987  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/preemptor-pod
I0516 00:39:52.073024  108888 scheduler.go:452] Attempting to schedule pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/preemptor-pod
I0516 00:39:52.073207  108888 scheduler_binder.go:256] AssumePodVolumes for pod "preemption-race6285e400-528e-4217-8497-a7dfe793aee9/preemptor-pod", node "node1"
I0516 00:39:52.073222  108888 scheduler_binder.go:266] AssumePodVolumes for pod "preemption-race6285e400-528e-4217-8497-a7dfe793aee9/preemptor-pod", node "node1": all PVCs bound and nothing to do
I0516 00:39:52.073265  108888 factory.go:711] Attempting to bind preemptor-pod to node1
I0516 00:39:52.074010  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-2
I0516 00:39:52.074039  108888 scheduler.go:452] Attempting to schedule pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-2
I0516 00:39:52.074152  108888 factory.go:649] Unable to schedule preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-2: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0516 00:39:52.074192  108888 factory.go:720] Updating pod condition for preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-2 to (PodScheduled==False, Reason=Unschedulable)
I0516 00:39:52.080390  108888 wrap.go:47] PATCH /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events/ppod-2.159f02e7bd4d4f85: (5.163179ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53916]
I0516 00:39:52.081244  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-2: (5.09376ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53914]
I0516 00:39:52.081464  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-2: (4.994711ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53900]
I0516 00:39:52.081781  108888 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0516 00:39:52.083398  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/preemptor-pod/binding: (9.724187ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53908]
I0516 00:39:52.083693  108888 scheduler.go:589] pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/preemptor-pod is bound successfully on node node1, 1 nodes evaluated, 1 nodes were found feasible
I0516 00:39:52.085455  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-3
I0516 00:39:52.085478  108888 scheduler.go:452] Attempting to schedule pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-3
I0516 00:39:52.085620  108888 factory.go:649] Unable to schedule preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-3: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0516 00:39:52.085666  108888 factory.go:720] Updating pod condition for preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-3 to (PodScheduled==False, Reason=Unschedulable)
I0516 00:39:52.087471  108888 wrap.go:47] POST /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events: (3.551379ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53916]
I0516 00:39:52.091075  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-3: (4.374723ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53900]
I0516 00:39:52.091333  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-3: (5.057355ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53918]
I0516 00:39:52.092225  108888 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0516 00:39:52.092394  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-4
I0516 00:39:52.092405  108888 scheduler.go:452] Attempting to schedule pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-4
I0516 00:39:52.092486  108888 factory.go:649] Unable to schedule preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-4: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0516 00:39:52.092523  108888 factory.go:720] Updating pod condition for preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-4 to (PodScheduled==False, Reason=Unschedulable)
I0516 00:39:52.094036  108888 wrap.go:47] PATCH /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events/ppod-3.159f02e7bdcb08c4: (5.859964ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53916]
I0516 00:39:52.101594  108888 wrap.go:47] PATCH /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events/ppod-4.159f02e7be4c0de8: (6.977253ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53916]
I0516 00:39:52.102422  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-4: (9.41503ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53900]
I0516 00:39:52.102769  108888 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0516 00:39:52.103073  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-6
I0516 00:39:52.103088  108888 scheduler.go:452] Attempting to schedule pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-6
I0516 00:39:52.103196  108888 factory.go:649] Unable to schedule preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-6: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0516 00:39:52.103236  108888 factory.go:720] Updating pod condition for preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-6 to (PodScheduled==False, Reason=Unschedulable)
I0516 00:39:52.107656  108888 wrap.go:47] PATCH /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events/ppod-6.159f02e7bf28779b: (3.421308ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53920]
I0516 00:39:52.110337  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-6: (5.193471ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53916]
I0516 00:39:52.110604  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-6: (5.123799ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53900]
I0516 00:39:52.110826  108888 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0516 00:39:52.111404  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-8
I0516 00:39:52.111428  108888 scheduler.go:452] Attempting to schedule pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-8
I0516 00:39:52.111543  108888 factory.go:649] Unable to schedule preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-8: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0516 00:39:52.111582  108888 factory.go:720] Updating pod condition for preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-8 to (PodScheduled==False, Reason=Unschedulable)
I0516 00:39:52.116380  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-8: (2.921903ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53920]
I0516 00:39:52.116650  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-8: (3.080129ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53900]
I0516 00:39:52.117424  108888 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0516 00:39:52.118414  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-0
I0516 00:39:52.118440  108888 scheduler.go:452] Attempting to schedule pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-0
I0516 00:39:52.118546  108888 factory.go:649] Unable to schedule preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-0: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0516 00:39:52.118582  108888 factory.go:720] Updating pod condition for preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-0 to (PodScheduled==False, Reason=Unschedulable)
I0516 00:39:52.120211  108888 wrap.go:47] PATCH /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events/ppod-8.159f02e7c01a8cb0: (7.711143ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53922]
I0516 00:39:52.123947  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-0: (4.647668ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53924]
I0516 00:39:52.124222  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/preemptor-pod: (6.385438ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53920]
I0516 00:39:52.124456  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-0: (4.80736ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53900]
I0516 00:39:52.125286  108888 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0516 00:39:52.125505  108888 preemption_test.go:583] Check unschedulable pods still exists and were never scheduled...
I0516 00:39:52.126408  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-9
I0516 00:39:52.126424  108888 scheduler.go:452] Attempting to schedule pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-9
I0516 00:39:52.126519  108888 factory.go:649] Unable to schedule preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-9: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0516 00:39:52.126563  108888 factory.go:720] Updating pod condition for preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-9 to (PodScheduled==False, Reason=Unschedulable)
I0516 00:39:52.127646  108888 wrap.go:47] PATCH /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events/ppod-0.159f02e7bc3c610d: (6.530768ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53922]
I0516 00:39:52.131688  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-9: (4.780615ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53900]
I0516 00:39:52.132149  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-9: (4.93077ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53924]
I0516 00:39:52.132525  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-0: (6.717328ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53920]
I0516 00:39:52.133520  108888 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0516 00:39:52.134345  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-1
I0516 00:39:52.134362  108888 scheduler.go:452] Attempting to schedule pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-1
I0516 00:39:52.134469  108888 factory.go:649] Unable to schedule preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-1: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0516 00:39:52.134506  108888 factory.go:720] Updating pod condition for preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-1 to (PodScheduled==False, Reason=Unschedulable)
I0516 00:39:52.135667  108888 wrap.go:47] PATCH /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events/ppod-9.159f02e7c104e347: (7.424429ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53922]
I0516 00:39:52.136758  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-1: (2.940742ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53920]
I0516 00:39:52.141104  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-1: (3.683236ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53900]
I0516 00:39:52.141350  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-1: (4.458807ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53924]
I0516 00:39:52.141550  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-2: (2.663849ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53920]
I0516 00:39:52.142110  108888 wrap.go:47] PATCH /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events/ppod-1.159f02e7bcc488f7: (4.028925ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53922]
I0516 00:39:52.142805  108888 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0516 00:39:52.143570  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-11
I0516 00:39:52.143585  108888 scheduler.go:452] Attempting to schedule pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-11
I0516 00:39:52.143683  108888 factory.go:649] Unable to schedule preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-11: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0516 00:39:52.143719  108888 factory.go:720] Updating pod condition for preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-11 to (PodScheduled==False, Reason=Unschedulable)
I0516 00:39:52.148411  108888 wrap.go:47] PATCH /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events/ppod-11.159f02e7c26671d9: (3.727676ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53926]
I0516 00:39:52.148988  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-11: (3.535907ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53922]
I0516 00:39:52.149325  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-3: (6.301219ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53920]
I0516 00:39:52.149618  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-11: (3.870688ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53924]
I0516 00:39:52.150228  108888 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0516 00:39:52.151406  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-12
I0516 00:39:52.151420  108888 scheduler.go:452] Attempting to schedule pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-12
I0516 00:39:52.151519  108888 factory.go:649] Unable to schedule preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-12: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0516 00:39:52.151566  108888 factory.go:720] Updating pod condition for preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-12 to (PodScheduled==False, Reason=Unschedulable)
I0516 00:39:52.156360  108888 wrap.go:47] PATCH /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events/ppod-12.159f02e7c3030367: (3.857506ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53928]
I0516 00:39:52.156774  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-12: (3.484952ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53924]
I0516 00:39:52.157084  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-4: (6.109796ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53920]
I0516 00:39:52.157297  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-12: (3.682514ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53922]
I0516 00:39:52.157679  108888 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0516 00:39:52.158929  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-13
I0516 00:39:52.158944  108888 scheduler.go:452] Attempting to schedule pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-13
I0516 00:39:52.159044  108888 factory.go:649] Unable to schedule preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-13: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0516 00:39:52.159080  108888 factory.go:720] Updating pod condition for preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-13 to (PodScheduled==False, Reason=Unschedulable)
I0516 00:39:52.160726  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-5: (2.360661ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53920]
I0516 00:39:52.162943  108888 wrap.go:47] PATCH /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events/ppod-13.159f02e7c3473f6f: (1.887776ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53930]
I0516 00:39:52.164154  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-6: (1.125533ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53920]
I0516 00:39:52.164395  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-13: (2.552166ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53922]
I0516 00:39:52.164646  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-13: (2.458606ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53924]
I0516 00:39:52.164879  108888 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0516 00:39:52.165487  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-14
I0516 00:39:52.165509  108888 scheduler.go:452] Attempting to schedule pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-14
I0516 00:39:52.165630  108888 factory.go:649] Unable to schedule preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-14: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0516 00:39:52.165673  108888 factory.go:720] Updating pod condition for preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-14 to (PodScheduled==False, Reason=Unschedulable)
I0516 00:39:52.167689  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-4: (74.880744ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53918]
I0516 00:39:52.168832  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-14: (2.821469ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53930]
I0516 00:39:52.169112  108888 wrap.go:47] PATCH /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events/ppod-14.159f02e7c4359608: (2.382295ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53934]
I0516 00:39:52.169246  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-7: (3.730551ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53924]
I0516 00:39:52.169464  108888 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0516 00:39:52.169547  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-14: (3.130249ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53932]
I0516 00:39:52.169952  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-16
I0516 00:39:52.169966  108888 scheduler.go:452] Attempting to schedule pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-16
I0516 00:39:52.170065  108888 factory.go:649] Unable to schedule preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-16: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0516 00:39:52.170105  108888 factory.go:720] Updating pod condition for preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-16 to (PodScheduled==False, Reason=Unschedulable)
I0516 00:39:52.173734  108888 wrap.go:47] PATCH /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events/ppod-16.159f02e7c4f3da68: (2.962873ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53938]
I0516 00:39:52.174776  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-8: (4.618198ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53934]
I0516 00:39:52.174799  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-16: (3.184684ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53936]
I0516 00:39:52.175664  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-16: (5.002774ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53918]
I0516 00:39:52.176063  108888 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0516 00:39:52.179145  108888 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0516 00:39:52.179263  108888 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0516 00:39:52.179435  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-17
I0516 00:39:52.179480  108888 scheduler.go:452] Attempting to schedule pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-17
I0516 00:39:52.179658  108888 factory.go:649] Unable to schedule preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-17: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0516 00:39:52.179953  108888 factory.go:720] Updating pod condition for preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-17 to (PodScheduled==False, Reason=Unschedulable)
I0516 00:39:52.184216  108888 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0516 00:39:52.184255  108888 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0516 00:39:52.185071  108888 wrap.go:47] PATCH /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events/ppod-17.159f02e7c56d2ac3: (3.729021ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53940]
I0516 00:39:52.185667  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-17: (2.643085ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53934]
I0516 00:39:52.185683  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-17: (3.211582ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53938]
I0516 00:39:52.186038  108888 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0516 00:39:52.186213  108888 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0516 00:39:52.186311  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-18
I0516 00:39:52.186328  108888 scheduler.go:452] Attempting to schedule pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-18
I0516 00:39:52.186480  108888 factory.go:649] Unable to schedule preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-18: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0516 00:39:52.186618  108888 factory.go:720] Updating pod condition for preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-18 to (PodScheduled==False, Reason=Unschedulable)
I0516 00:39:52.190651  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-18: (3.148718ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53938]
I0516 00:39:52.191086  108888 wrap.go:47] PATCH /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events/ppod-18.159f02e7c6052db6: (3.203307ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53942]
I0516 00:39:52.191209  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-18: (3.806769ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53940]
I0516 00:39:52.191326  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-9: (2.513757ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53944]
I0516 00:39:52.191598  108888 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0516 00:39:52.191955  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-5
I0516 00:39:52.191978  108888 scheduler.go:452] Attempting to schedule pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-5
I0516 00:39:52.192081  108888 factory.go:649] Unable to schedule preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-5: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0516 00:39:52.192127  108888 factory.go:720] Updating pod condition for preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-5 to (PodScheduled==False, Reason=Unschedulable)
I0516 00:39:52.194073  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-5: (1.435752ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53946]
I0516 00:39:52.194239  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-10: (1.987532ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53944]
I0516 00:39:52.194319  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-5: (1.674209ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53938]
I0516 00:39:52.194563  108888 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0516 00:39:52.195241  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-20
I0516 00:39:52.195278  108888 scheduler.go:452] Attempting to schedule pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-20
I0516 00:39:52.195383  108888 factory.go:649] Unable to schedule preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-20: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0516 00:39:52.195418  108888 factory.go:720] Updating pod condition for preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-20 to (PodScheduled==False, Reason=Unschedulable)
I0516 00:39:52.197610  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-20: (1.958673ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53938]
I0516 00:39:52.197845  108888 wrap.go:47] PATCH /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events/ppod-5.159f02e7bec6611b: (3.426872ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53942]
I0516 00:39:52.197866  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-20: (2.25991ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53946]
I0516 00:39:52.198109  108888 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0516 00:39:52.198255  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-22
I0516 00:39:52.198277  108888 scheduler.go:452] Attempting to schedule pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-22
I0516 00:39:52.198380  108888 factory.go:649] Unable to schedule preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-22: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0516 00:39:52.198422  108888 factory.go:720] Updating pod condition for preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-22 to (PodScheduled==False, Reason=Unschedulable)
I0516 00:39:52.201479  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-22: (1.808038ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53946]
I0516 00:39:52.201746  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-11: (2.568492ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53948]
I0516 00:39:52.201992  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-22: (1.994919ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53938]
I0516 00:39:52.202188  108888 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0516 00:39:52.204052  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-12: (1.543848ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53948]
I0516 00:39:52.204403  108888 wrap.go:47] PATCH /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events/ppod-20.159f02e7c74b47eb: (3.213206ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53942]
I0516 00:39:52.204988  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-7
I0516 00:39:52.205659  108888 scheduler.go:452] Attempting to schedule pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-7
I0516 00:39:52.205845  108888 factory.go:649] Unable to schedule preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-7: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0516 00:39:52.205939  108888 factory.go:720] Updating pod condition for preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-7 to (PodScheduled==False, Reason=Unschedulable)
I0516 00:39:52.206005  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-13: (1.295367ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53948]
I0516 00:39:52.208224  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-7: (2.082705ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53938]
I0516 00:39:52.209427  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-7: (1.721644ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53950]
I0516 00:39:52.209807  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-14: (3.053261ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53948]
I0516 00:39:52.210009  108888 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0516 00:39:52.210188  108888 wrap.go:47] PATCH /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events/ppod-22.159f02e7c81f4f46: (5.097762ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53942]
I0516 00:39:52.210776  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-23
I0516 00:39:52.210828  108888 scheduler.go:452] Attempting to schedule pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-23
I0516 00:39:52.211416  108888 factory.go:649] Unable to schedule preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-23: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0516 00:39:52.211615  108888 factory.go:720] Updating pod condition for preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-23 to (PodScheduled==False, Reason=Unschedulable)
I0516 00:39:52.212060  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-15: (1.666704ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53950]
I0516 00:39:52.213516  108888 wrap.go:47] PATCH /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events/ppod-7.159f02e7bf96feaa: (2.238865ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53938]
I0516 00:39:52.215042  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-16: (1.360888ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53956]
I0516 00:39:52.215261  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-23: (1.988548ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53950]
I0516 00:39:52.215492  108888 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0516 00:39:52.215786  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-24
I0516 00:39:52.215801  108888 scheduler.go:452] Attempting to schedule pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-24
I0516 00:39:52.215903  108888 factory.go:649] Unable to schedule preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-24: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0516 00:39:52.215954  108888 factory.go:720] Updating pod condition for preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-24 to (PodScheduled==False, Reason=Unschedulable)
I0516 00:39:52.218989  108888 wrap.go:47] PATCH /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events/ppod-23.159f02e7c897fbf9: (4.743783ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53938]
I0516 00:39:52.219308  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-24: (2.618494ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53954]
I0516 00:39:52.219464  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-24: (2.414584ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53958]
I0516 00:39:52.219648  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-23: (6.922618ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53952]
I0516 00:39:52.220424  108888 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0516 00:39:52.220651  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-25
I0516 00:39:52.220668  108888 scheduler.go:452] Attempting to schedule pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-25
I0516 00:39:52.220770  108888 factory.go:649] Unable to schedule preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-25: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0516 00:39:52.220807  108888 factory.go:720] Updating pod condition for preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-25 to (PodScheduled==False, Reason=Unschedulable)
I0516 00:39:52.222663  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-25: (1.376254ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53954]
I0516 00:39:52.222963  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-25: (1.880312ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53958]
I0516 00:39:52.224056  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-17: (8.389619ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53956]
I0516 00:39:52.224072  108888 wrap.go:47] PATCH /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events/ppod-24.159f02e7c8e13d94: (2.319223ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53960]
I0516 00:39:52.224612  108888 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0516 00:39:52.224868  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-26
I0516 00:39:52.224904  108888 scheduler.go:452] Attempting to schedule pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-26
I0516 00:39:52.225023  108888 factory.go:649] Unable to schedule preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-26: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0516 00:39:52.225084  108888 factory.go:720] Updating pod condition for preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-26 to (PodScheduled==False, Reason=Unschedulable)
I0516 00:39:52.229047  108888 wrap.go:47] PATCH /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events/ppod-25.159f02e7c930a05d: (3.666991ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53952]
I0516 00:39:52.229339  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-26: (3.783589ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53958]
I0516 00:39:52.229874  108888 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0516 00:39:52.230350  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-27
I0516 00:39:52.230370  108888 scheduler.go:452] Attempting to schedule pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-27
I0516 00:39:52.230474  108888 factory.go:649] Unable to schedule preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-27: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0516 00:39:52.230511  108888 factory.go:720] Updating pod condition for preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-27 to (PodScheduled==False, Reason=Unschedulable)
I0516 00:39:52.230867  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-18: (1.387009ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53964]
I0516 00:39:52.229063  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-26: (2.842847ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53962]
I0516 00:39:52.232634  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-27: (1.613166ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53958]
I0516 00:39:52.232972  108888 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0516 00:39:52.233677  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-19: (1.278323ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53962]
I0516 00:39:52.234190  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-27: (3.006214ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53966]
I0516 00:39:52.234518  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-28
I0516 00:39:52.234588  108888 scheduler.go:452] Attempting to schedule pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-28
I0516 00:39:52.234721  108888 factory.go:649] Unable to schedule preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-28: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0516 00:39:52.234780  108888 factory.go:720] Updating pod condition for preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-28 to (PodScheduled==False, Reason=Unschedulable)
I0516 00:39:52.236799  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-28: (1.448589ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53966]
I0516 00:39:52.236944  108888 wrap.go:47] PATCH /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events/ppod-26.159f02e7c98ad025: (2.285656ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53952]
I0516 00:39:52.237105  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-20: (3.012266ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53958]
I0516 00:39:52.237355  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-28: (2.257286ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53964]
I0516 00:39:52.237555  108888 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0516 00:39:52.237849  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-10
I0516 00:39:52.237997  108888 scheduler.go:452] Attempting to schedule pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-10
I0516 00:39:52.238088  108888 factory.go:649] Unable to schedule preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-10: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0516 00:39:52.238127  108888 factory.go:720] Updating pod condition for preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-10 to (PodScheduled==False, Reason=Unschedulable)
I0516 00:39:52.240940  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-10: (2.449099ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53952]
I0516 00:39:52.241238  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-10: (1.551819ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53966]
I0516 00:39:52.241574  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-21: (2.878957ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53964]
I0516 00:39:52.241586  108888 wrap.go:47] PATCH /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events/ppod-27.159f02e7c9e033bc: (2.49352ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53968]
I0516 00:39:52.241835  108888 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0516 00:39:52.243398  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-22: (956.449µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53966]
I0516 00:39:52.243742  108888 cacher.go:739] cacher (*core.Event): 1 objects queued in incoming channel.
I0516 00:39:52.245201  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-23: (1.371327ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53966]
I0516 00:39:52.245488  108888 wrap.go:47] PATCH /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events/ppod-28.159f02e7cce24327: (2.884115ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53952]
I0516 00:39:52.246519  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-29
I0516 00:39:52.246573  108888 scheduler.go:452] Attempting to schedule pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-29
I0516 00:39:52.246702  108888 factory.go:649] Unable to schedule preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-29: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0516 00:39:52.246791  108888 factory.go:720] Updating pod condition for preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-29 to (PodScheduled==False, Reason=Unschedulable)
I0516 00:39:52.248034  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-24: (2.472987ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53966]
I0516 00:39:52.251203  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-29: (2.719881ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53966]
I0516 00:39:52.251789  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-29: (4.355826ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53970]
I0516 00:39:52.252139  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-25: (3.142605ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53974]
I0516 00:39:52.252637  108888 wrap.go:47] PATCH /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events/ppod-10.159f02e7c1a9d932: (4.624676ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53952]
I0516 00:39:52.265287  108888 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0516 00:39:52.265610  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-30
I0516 00:39:52.265628  108888 scheduler.go:452] Attempting to schedule pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-30
I0516 00:39:52.265715  108888 factory.go:649] Unable to schedule preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-30: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0516 00:39:52.265757  108888 factory.go:720] Updating pod condition for preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-30 to (PodScheduled==False, Reason=Unschedulable)
I0516 00:39:52.268996  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-26: (15.995054ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53974]
I0516 00:39:52.269391  108888 wrap.go:47] PATCH /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events/ppod-29.159f02e7ce48e1aa: (3.238064ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53966]
I0516 00:39:52.269437  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-30: (3.409293ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53970]
I0516 00:39:52.269770  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-30: (3.160871ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53976]
I0516 00:39:52.270067  108888 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0516 00:39:52.271345  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-27: (1.789032ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53974]
I0516 00:39:52.271879  108888 wrap.go:47] PATCH /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events/ppod-30.159f02e7cf1320d2: (1.906333ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53966]
I0516 00:39:52.272888  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-31
I0516 00:39:52.272932  108888 scheduler.go:452] Attempting to schedule pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-31
I0516 00:39:52.273086  108888 factory.go:649] Unable to schedule preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-31: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0516 00:39:52.273133  108888 factory.go:720] Updating pod condition for preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-31 to (PodScheduled==False, Reason=Unschedulable)
I0516 00:39:52.273165  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-28: (1.349592ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53976]
I0516 00:39:52.275501  108888 wrap.go:47] PATCH /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events/ppod-31.159f02e7cfbb06ef: (1.630702ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53976]
I0516 00:39:52.276383  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-31: (2.974392ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53966]
I0516 00:39:52.276517  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-31: (3.138476ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53970]
I0516 00:39:52.276668  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-29: (2.701554ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53978]
I0516 00:39:52.277204  108888 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0516 00:39:52.278205  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-30: (1.088081ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53970]
I0516 00:39:52.278505  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-32
I0516 00:39:52.278715  108888 scheduler.go:452] Attempting to schedule pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-32
I0516 00:39:52.279816  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-31: (1.204286ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53970]
I0516 00:39:52.281178  108888 factory.go:649] Unable to schedule preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-32: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0516 00:39:52.281279  108888 factory.go:720] Updating pod condition for preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-32 to (PodScheduled==False, Reason=Unschedulable)
I0516 00:39:52.281795  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-32: (1.561552ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53970]
I0516 00:39:52.285027  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-32: (3.031399ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53976]
I0516 00:39:52.285323  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-33: (1.835327ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53970]
I0516 00:39:52.285329  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-32: (2.75552ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53980]
I0516 00:39:52.285758  108888 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0516 00:39:52.286045  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-33
I0516 00:39:52.286060  108888 scheduler.go:452] Attempting to schedule pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-33
I0516 00:39:52.286162  108888 factory.go:649] Unable to schedule preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-33: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0516 00:39:52.286198  108888 factory.go:720] Updating pod condition for preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-33 to (PodScheduled==False, Reason=Unschedulable)
I0516 00:39:52.288049  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-33: (1.669956ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53980]
I0516 00:39:52.288292  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-34: (2.617398ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53976]
I0516 00:39:52.288482  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-33: (2.031438ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53970]
I0516 00:39:52.289150  108888 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0516 00:39:52.289834  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-34
I0516 00:39:52.289886  108888 scheduler.go:452] Attempting to schedule pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-34
I0516 00:39:52.290021  108888 factory.go:649] Unable to schedule preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-34: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0516 00:39:52.290087  108888 factory.go:720] Updating pod condition for preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-34 to (PodScheduled==False, Reason=Unschedulable)
I0516 00:39:52.293785  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-34: (3.403611ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53970]
I0516 00:39:52.294106  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-35: (4.728811ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53976]
I0516 00:39:52.294488  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-34: (3.794077ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53980]
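Every wrap.go:47 entry above follows the same shape: verb, request path, latency, HTTP status, then user agent and remote address in brackets. The short sketch below pulls those fields out so per-request latencies can be compared; the pattern is inferred only from these lines, not taken from the apiserver source.

package main

import (
	"fmt"
	"regexp"
)

// requestLine is inferred from the wrap.go:47 entries above:
//   VERB PATH: (LATENCY) STATUS [USER-AGENT REMOTE-ADDR]
var requestLine = regexp.MustCompile(`wrap\.go:47\] (\S+) (\S+): \(([\d.]+)(µs|ms|s)\) (\d+) \[(.+) ([\d.]+:\d+)\]$`)

func main() {
	// The sample is the GET for ppod-34 a few lines up.
	line := `I0516 00:39:52.293785  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-34: (3.403611ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53970]`
	if m := requestLine.FindStringSubmatch(line); m != nil {
		fmt.Printf("verb=%s path=%s latency=%s%s status=%s from=%s\n",
			m[1], m[2], m[3], m[4], m[5], m[7])
	}
}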
I0516 00:39:52.294986  108888 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0516 00:39:52.295250  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-35
I0516 00:39:52.295273  108888 scheduler.go:452] Attempting to schedule pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-35
I0516 00:39:52.295395  108888 factory.go:649] Unable to schedule preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-35: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0516 00:39:52.295439  108888 factory.go:720] Updating pod condition for preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-35 to (PodScheduled==False, Reason=Unschedulable)
I0516 00:39:52.297706  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-36: (1.928581ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53976]
I0516 00:39:52.297748  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-35: (2.064613ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53970]
I0516 00:39:52.298025  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-35: (1.812074ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53980]
I0516 00:39:52.298861  108888 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0516 00:39:52.299027  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-36
I0516 00:39:52.299040  108888 scheduler.go:452] Attempting to schedule pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-36
I0516 00:39:52.299135  108888 factory.go:649] Unable to schedule preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-36: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0516 00:39:52.299170  108888 factory.go:720] Updating pod condition for preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-36 to (PodScheduled==False, Reason=Unschedulable)
I0516 00:39:52.299542  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-37: (1.321364ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53976]
I0516 00:39:52.301623  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-36: (2.11148ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53980]
I0516 00:39:52.302652  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-36: (2.988447ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53970]
I0516 00:39:52.302948  108888 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0516 00:39:52.303154  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-37
I0516 00:39:52.303193  108888 scheduler.go:452] Attempting to schedule pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-37
I0516 00:39:52.303366  108888 factory.go:649] Unable to schedule preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-37: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0516 00:39:52.303447  108888 factory.go:720] Updating pod condition for preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-37 to (PodScheduled==False, Reason=Unschedulable)
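Each rejection above is summarized as "no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.", i.e. a count of nodes per failed predicate over the single test node. A minimal sketch follows, assuming the summary is just a per-reason tally, of how a string in that shape can be produced; summarizeNoFit is a hypothetical helper, not the scheduler's implementation.

package main

import (
	"fmt"
	"sort"
	"strings"
)

// summarizeNoFit renders a per-reason failure tally in the same shape as the
// "no fit" messages in the log above (hypothetical helper).
func summarizeNoFit(totalNodes int, reasons map[string]int) string {
	parts := make([]string, 0, len(reasons))
	for reason, count := range reasons {
		parts = append(parts, fmt.Sprintf("%d %s", count, reason))
	}
	sort.Strings(parts) // stable, readable ordering
	return fmt.Sprintf("0/%d nodes are available: %s.", totalNodes, strings.Join(parts, ", "))
}

func main() {
	fmt.Println(summarizeNoFit(1, map[string]int{
		"Insufficient cpu":    1,
		"Insufficient memory": 1,
	}))
	// Output: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.
}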
I0516 00:39:52.303504  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-38: (3.292762ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53976]
I0516 00:39:52.305123  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-37: (1.357398ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53970]
I0516 00:39:52.305225  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-37: (1.444997ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53980]
I0516 00:39:52.305482  108888 generic_scheduler.go:1152] Node node1 is a potential node for preemption.
I0516 00:39:52.305741  108888 wrap.go:47] PATCH /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/events/ppod-32.159f02e7d0cf3e3d: (14.422568ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53982]
I0516 00:39:52.305774  108888 scheduling_queue.go:795] About to try and schedule pod preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-15
I0516 00:39:52.305797  108888 scheduler.go:452] Attempting to schedule pod: preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-15
I0516 00:39:52.305944  108888 factory.go:649] Unable to schedule preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-15: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0516 00:39:52.305990  108888 factory.go:720] Updating pod condition for preemption-race6285e400-528e-4217-8497-a7dfe793aee9/ppod-15 to (PodScheduled==False, Reason=Unschedulable)
I0516 00:39:52.307464  108888 wrap.go:47] GET /api/v1/namespaces/preemption-race6285e400-528e-4217-8497-a7dfe793aee9/pods/ppod-15: (1.319556ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53980]
I0516 00:39:52.307
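Note that ppod-15 shows up again after ppod-30 through ppod-37, suggesting it was requeued and retried after an earlier failed attempt; unschedulable pods cycle back through the queue rather than being dropped, which is the churn this preemption race test exercises. The small sketch below counts "Attempting to schedule pod:" lines per pod in a saved copy of this log so such retries stand out; the log file path comes from the command line and is not part of the job output.

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

func main() {
	if len(os.Args) < 2 {
		fmt.Fprintln(os.Stderr, "usage: attempts <saved-build-log>")
		os.Exit(1)
	}
	f, err := os.Open(os.Args[1])
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer f.Close()

	attempts := map[string]int{}
	sc := bufio.NewScanner(f)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // apiserver lines can be long
	const marker = "Attempting to schedule pod: "
	for sc.Scan() {
		line := sc.Text()
		if i := strings.Index(line, marker); i >= 0 {
			attempts[line[i+len(marker):]]++
		}
	}
	for pod, n := range attempts {
		if n > 1 {
			fmt.Printf("%s attempted %d times\n", pod, n)
		}
	}
}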