PR: wojtek-t: [WIP][DO NOT REVIEW] Deprecate SelfLink field
Result: FAILURE
Tests: 5 failed / 2466 succeeded
Started: 2019-08-14 08:18
Elapsed: 26m8s
Builder: gke-prow-ssd-pool-1a225945-4sp7
Refs: master:1f6cb3cb, 80640:8d56c832
pod: 0807a07f-be6c-11e9-8926-5e2f786d826e
infra-commit: 89e6e9743
repo: k8s.io/kubernetes
repo-commit: 053f654c8c65d88d984afaee1b9b63038846f53a
repos: k8s.io/kubernetes: master:1f6cb3cb9def97320a5412dcbea1661edd95c29e, 80640:8d56c8324d783f2172ff1acf35d4df340218010f

Test Failures


k8s.io/kubernetes/test/integration/apiserver/apply TestApplyManagedFields 5.35s

go test -v k8s.io/kubernetes/test/integration/apiserver/apply -run TestApplyManagedFields$
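The first lines of the run below show the harness forcing the ServerSideApply feature gate on and standing up a test apiserver against a local etcd at 127.0.0.1:2379. As a minimal sketch of that setup step (assuming the standard k8s.io/component-base feature-gate test helper; the test name and body here are illustrative, not taken from this PR), an integration test typically pins the gate like this:

package apply

import (
	"testing"

	genericfeatures "k8s.io/apiserver/pkg/features"
	utilfeature "k8s.io/apiserver/pkg/util/feature"
	featuregatetesting "k8s.io/component-base/featuregate/testing"
)

// Hypothetical sketch of the gate setup; the real test then starts a
// test apiserver and exercises metadata.managedFields.
func TestApplyManagedFieldsSketch(t *testing.T) {
	// Pin ServerSideApply=true for the duration of the test; the returned
	// func restores the previous gate value when the test finishes.
	defer featuregatetesting.SetFeatureGateDuringTest(t, utilfeature.DefaultFeatureGate, genericfeatures.ServerSideApply, true)()
}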
=== RUN   TestApplyManagedFields
I0814 08:34:46.696089  104742 feature_gate.go:216] feature gates: &{map[ServerSideApply:true]}
I0814 08:34:46.699059  104742 services.go:33] Network range for service cluster IPs is unspecified. Defaulting to {10.0.0.0 ffffff00}.
I0814 08:34:46.699090  104742 services.go:45] Setting service IP to "10.0.0.1" (read-write).
I0814 08:34:46.699102  104742 master.go:278] Node port range unspecified. Defaulting to 30000-32767.
I0814 08:34:46.699123  104742 master.go:234] Using reconciler: 
I0814 08:34:46.701172  104742 storage_factory.go:285] storing podtemplates in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"798a340f-8a55-47e0-af9e-5db80cc1e063", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:34:46.701288  104742 client.go:354] parsed scheme: ""
I0814 08:34:46.701305  104742 client.go:354] scheme "" not registered, fallback to default scheme
I0814 08:34:46.701346  104742 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 08:34:46.701408  104742 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:34:46.702214  104742 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 08:34:46.702526  104742 store.go:1342] Monitoring podtemplates count at <storage-prefix>//podtemplates
I0814 08:34:46.702653  104742 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"798a340f-8a55-47e0-af9e-5db80cc1e063", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:34:46.702964  104742 client.go:354] parsed scheme: ""
I0814 08:34:46.703754  104742 client.go:354] scheme "" not registered, fallback to default scheme
I0814 08:34:46.703952  104742 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 08:34:46.704166  104742 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:34:46.704346  104742 reflector.go:160] Listing and watching *core.PodTemplate from storage/cacher.go:/podtemplates
I0814 08:34:46.704678  104742 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:34:46.706292  104742 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 08:34:46.707190  104742 store.go:1342] Monitoring events count at <storage-prefix>//events
I0814 08:34:46.707511  104742 storage_factory.go:285] storing limitranges in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"798a340f-8a55-47e0-af9e-5db80cc1e063", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:34:46.707733  104742 client.go:354] parsed scheme: ""
I0814 08:34:46.707837  104742 client.go:354] scheme "" not registered, fallback to default scheme
I0814 08:34:46.707963  104742 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 08:34:46.708155  104742 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:34:46.708320  104742 reflector.go:160] Listing and watching *core.Event from storage/cacher.go:/events
I0814 08:34:46.708798  104742 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:34:46.709171  104742 watch_cache.go:405] Replace watchCache (rev: 1037) 
I0814 08:34:46.709679  104742 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 08:34:46.710033  104742 store.go:1342] Monitoring limitranges count at <storage-prefix>//limitranges
I0814 08:34:46.710191  104742 storage_factory.go:285] storing resourcequotas in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"798a340f-8a55-47e0-af9e-5db80cc1e063", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:34:46.710379  104742 client.go:354] parsed scheme: ""
I0814 08:34:46.710474  104742 client.go:354] scheme "" not registered, fallback to default scheme
I0814 08:34:46.710596  104742 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 08:34:46.710781  104742 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:34:46.710908  104742 reflector.go:160] Listing and watching *core.LimitRange from storage/cacher.go:/limitranges
I0814 08:34:46.711331  104742 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:34:46.711611  104742 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 08:34:46.711708  104742 store.go:1342] Monitoring resourcequotas count at <storage-prefix>//resourcequotas
I0814 08:34:46.711746  104742 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:34:46.711784  104742 reflector.go:160] Listing and watching *core.ResourceQuota from storage/cacher.go:/resourcequotas
I0814 08:34:46.711853  104742 storage_factory.go:285] storing secrets in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"798a340f-8a55-47e0-af9e-5db80cc1e063", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:34:46.711911  104742 client.go:354] parsed scheme: ""
I0814 08:34:46.711921  104742 client.go:354] scheme "" not registered, fallback to default scheme
I0814 08:34:46.711952  104742 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 08:34:46.711998  104742 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:34:46.712576  104742 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 08:34:46.712666  104742 store.go:1342] Monitoring secrets count at <storage-prefix>//secrets
I0814 08:34:46.712818  104742 storage_factory.go:285] storing persistentvolumes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"798a340f-8a55-47e0-af9e-5db80cc1e063", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:34:46.712875  104742 client.go:354] parsed scheme: ""
I0814 08:34:46.712884  104742 client.go:354] scheme "" not registered, fallback to default scheme
I0814 08:34:46.712909  104742 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 08:34:46.712949  104742 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:34:46.712975  104742 reflector.go:160] Listing and watching *core.Secret from storage/cacher.go:/secrets
I0814 08:34:46.713204  104742 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:34:46.713519  104742 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 08:34:46.713628  104742 store.go:1342] Monitoring persistentvolumes count at <storage-prefix>//persistentvolumes
I0814 08:34:46.713655  104742 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:34:46.713690  104742 reflector.go:160] Listing and watching *core.PersistentVolume from storage/cacher.go:/persistentvolumes
I0814 08:34:46.713759  104742 storage_factory.go:285] storing persistentvolumeclaims in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"798a340f-8a55-47e0-af9e-5db80cc1e063", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:34:46.713815  104742 client.go:354] parsed scheme: ""
I0814 08:34:46.713826  104742 client.go:354] scheme "" not registered, fallback to default scheme
I0814 08:34:46.713854  104742 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 08:34:46.713896  104742 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:34:46.716830  104742 watch_cache.go:405] Replace watchCache (rev: 1039) 
I0814 08:34:46.716891  104742 watch_cache.go:405] Replace watchCache (rev: 1039) 
I0814 08:34:46.717129  104742 watch_cache.go:405] Replace watchCache (rev: 1039) 
I0814 08:34:46.717251  104742 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 08:34:46.717346  104742 store.go:1342] Monitoring persistentvolumeclaims count at <storage-prefix>//persistentvolumeclaims
I0814 08:34:46.717363  104742 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:34:46.717413  104742 reflector.go:160] Listing and watching *core.PersistentVolumeClaim from storage/cacher.go:/persistentvolumeclaims
I0814 08:34:46.717530  104742 storage_factory.go:285] storing configmaps in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"798a340f-8a55-47e0-af9e-5db80cc1e063", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:34:46.717619  104742 client.go:354] parsed scheme: ""
I0814 08:34:46.717629  104742 client.go:354] scheme "" not registered, fallback to default scheme
I0814 08:34:46.717661  104742 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 08:34:46.717719  104742 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:34:46.717969  104742 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 08:34:46.718167  104742 store.go:1342] Monitoring configmaps count at <storage-prefix>//configmaps
I0814 08:34:46.718305  104742 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"798a340f-8a55-47e0-af9e-5db80cc1e063", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:34:46.718369  104742 client.go:354] parsed scheme: ""
I0814 08:34:46.718380  104742 client.go:354] scheme "" not registered, fallback to default scheme
I0814 08:34:46.718412  104742 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 08:34:46.718464  104742 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:34:46.718492  104742 reflector.go:160] Listing and watching *core.ConfigMap from storage/cacher.go:/configmaps
I0814 08:34:46.718693  104742 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:34:46.718918  104742 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 08:34:46.719009  104742 store.go:1342] Monitoring namespaces count at <storage-prefix>//namespaces
I0814 08:34:46.719145  104742 storage_factory.go:285] storing endpoints in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"798a340f-8a55-47e0-af9e-5db80cc1e063", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:34:46.719197  104742 client.go:354] parsed scheme: ""
I0814 08:34:46.719208  104742 client.go:354] scheme "" not registered, fallback to default scheme
I0814 08:34:46.719243  104742 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 08:34:46.719287  104742 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:34:46.719312  104742 reflector.go:160] Listing and watching *core.Namespace from storage/cacher.go:/namespaces
I0814 08:34:46.719489  104742 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:34:46.719723  104742 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 08:34:46.719806  104742 store.go:1342] Monitoring endpoints count at <storage-prefix>//services/endpoints
I0814 08:34:46.719904  104742 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:34:46.719947  104742 reflector.go:160] Listing and watching *core.Endpoints from storage/cacher.go:/services/endpoints
I0814 08:34:46.720100  104742 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"798a340f-8a55-47e0-af9e-5db80cc1e063", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:34:46.720181  104742 client.go:354] parsed scheme: ""
I0814 08:34:46.720191  104742 client.go:354] scheme "" not registered, fallback to default scheme
I0814 08:34:46.720229  104742 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 08:34:46.720272  104742 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:34:46.720570  104742 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 08:34:46.720701  104742 store.go:1342] Monitoring nodes count at <storage-prefix>//minions
I0814 08:34:46.720945  104742 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"798a340f-8a55-47e0-af9e-5db80cc1e063", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:34:46.721183  104742 client.go:354] parsed scheme: ""
I0814 08:34:46.721380  104742 client.go:354] scheme "" not registered, fallback to default scheme
I0814 08:34:46.721650  104742 watch_cache.go:405] Replace watchCache (rev: 1039) 
I0814 08:34:46.721899  104742 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:34:46.721972  104742 reflector.go:160] Listing and watching *core.Node from storage/cacher.go:/minions
I0814 08:34:46.722316  104742 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 08:34:46.722623  104742 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:34:46.725625  104742 watch_cache.go:405] Replace watchCache (rev: 1039) 
I0814 08:34:46.725700  104742 watch_cache.go:405] Replace watchCache (rev: 1039) 
I0814 08:34:46.725704  104742 watch_cache.go:405] Replace watchCache (rev: 1039) 
I0814 08:34:46.725882  104742 watch_cache.go:405] Replace watchCache (rev: 1039) 
I0814 08:34:46.726125  104742 watch_cache.go:405] Replace watchCache (rev: 1039) 
I0814 08:34:46.726139  104742 watch_cache.go:405] Replace watchCache (rev: 1039) 
I0814 08:34:46.726342  104742 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 08:34:46.726452  104742 store.go:1342] Monitoring pods count at <storage-prefix>//pods
I0814 08:34:46.726578  104742 storage_factory.go:285] storing serviceaccounts in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"798a340f-8a55-47e0-af9e-5db80cc1e063", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:34:46.726653  104742 client.go:354] parsed scheme: ""
I0814 08:34:46.726663  104742 client.go:354] scheme "" not registered, fallback to default scheme
I0814 08:34:46.726691  104742 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 08:34:46.726716  104742 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:34:46.726766  104742 reflector.go:160] Listing and watching *core.Pod from storage/cacher.go:/pods
I0814 08:34:46.726912  104742 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:34:46.727205  104742 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 08:34:46.727298  104742 store.go:1342] Monitoring serviceaccounts count at <storage-prefix>//serviceaccounts
I0814 08:34:46.727304  104742 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:34:46.727343  104742 reflector.go:160] Listing and watching *core.ServiceAccount from storage/cacher.go:/serviceaccounts
I0814 08:34:46.727445  104742 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"798a340f-8a55-47e0-af9e-5db80cc1e063", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:34:46.727511  104742 client.go:354] parsed scheme: ""
I0814 08:34:46.727521  104742 client.go:354] scheme "" not registered, fallback to default scheme
I0814 08:34:46.727550  104742 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 08:34:46.727596  104742 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:34:46.727898  104742 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 08:34:46.728001  104742 store.go:1342] Monitoring services count at <storage-prefix>//services/specs
I0814 08:34:46.728043  104742 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"798a340f-8a55-47e0-af9e-5db80cc1e063", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:34:46.728141  104742 client.go:354] parsed scheme: ""
I0814 08:34:46.728152  104742 client.go:354] scheme "" not registered, fallback to default scheme
I0814 08:34:46.728180  104742 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 08:34:46.728233  104742 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:34:46.728261  104742 reflector.go:160] Listing and watching *core.Service from storage/cacher.go:/services/specs
I0814 08:34:46.728422  104742 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:34:46.728826  104742 watch_cache.go:405] Replace watchCache (rev: 1039) 
I0814 08:34:46.729248  104742 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 08:34:46.729307  104742 watch_cache.go:405] Replace watchCache (rev: 1039) 
I0814 08:34:46.729417  104742 client.go:354] parsed scheme: ""
I0814 08:34:46.729430  104742 client.go:354] scheme "" not registered, fallback to default scheme
I0814 08:34:46.729485  104742 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 08:34:46.729537  104742 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:34:46.729697  104742 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:34:46.730121  104742 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 08:34:46.730207  104742 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:34:46.730374  104742 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"798a340f-8a55-47e0-af9e-5db80cc1e063", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:34:46.730439  104742 watch_cache.go:405] Replace watchCache (rev: 1039) 
I0814 08:34:46.730481  104742 client.go:354] parsed scheme: ""
I0814 08:34:46.730493  104742 client.go:354] scheme "" not registered, fallback to default scheme
I0814 08:34:46.730553  104742 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 08:34:46.730645  104742 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:34:46.731721  104742 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 08:34:46.731828  104742 store.go:1342] Monitoring replicationcontrollers count at <storage-prefix>//controllers
I0814 08:34:46.732239  104742 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:34:46.732293  104742 reflector.go:160] Listing and watching *core.ReplicationController from storage/cacher.go:/controllers
I0814 08:34:46.734127  104742 watch_cache.go:405] Replace watchCache (rev: 1039) 
I0814 08:34:46.854603  104742 storage_factory.go:285] storing bindings in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"798a340f-8a55-47e0-af9e-5db80cc1e063", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:34:46.855913  104742 storage_factory.go:285] storing componentstatuses in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"798a340f-8a55-47e0-af9e-5db80cc1e063", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:34:46.857574  104742 storage_factory.go:285] storing configmaps in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"798a340f-8a55-47e0-af9e-5db80cc1e063", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:34:46.859226  104742 storage_factory.go:285] storing endpoints in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"798a340f-8a55-47e0-af9e-5db80cc1e063", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:34:46.860632  104742 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"798a340f-8a55-47e0-af9e-5db80cc1e063", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:34:46.862127  104742 storage_factory.go:285] storing limitranges in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"798a340f-8a55-47e0-af9e-5db80cc1e063", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:34:46.863460  104742 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"798a340f-8a55-47e0-af9e-5db80cc1e063", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:34:46.864687  104742 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"798a340f-8a55-47e0-af9e-5db80cc1e063", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:34:46.865847  104742 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"798a340f-8a55-47e0-af9e-5db80cc1e063", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:34:46.867285  104742 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"798a340f-8a55-47e0-af9e-5db80cc1e063", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:34:46.869268  104742 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"798a340f-8a55-47e0-af9e-5db80cc1e063", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:34:46.870567  104742 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"798a340f-8a55-47e0-af9e-5db80cc1e063", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:34:46.872907  104742 storage_factory.go:285] storing persistentvolumeclaims in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"798a340f-8a55-47e0-af9e-5db80cc1e063", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:34:46.874495  104742 storage_factory.go:285] storing persistentvolumeclaims in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"798a340f-8a55-47e0-af9e-5db80cc1e063", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:34:46.908595  104742 storage_factory.go:285] storing persistentvolumes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"798a340f-8a55-47e0-af9e-5db80cc1e063", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:34:46.914070  104742 storage_factory.go:285] storing persistentvolumes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"798a340f-8a55-47e0-af9e-5db80cc1e063", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:34:46.921864  104742 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"798a340f-8a55-47e0-af9e-5db80cc1e063", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:34:46.926414  104742 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"798a340f-8a55-47e0-af9e-5db80cc1e063", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:34:46.932988  104742 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"798a340f-8a55-47e0-af9e-5db80cc1e063", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:34:46.934594  104742 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"798a340f-8a55-47e0-af9e-5db80cc1e063", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:34:46.946740  104742 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"798a340f-8a55-47e0-af9e-5db80cc1e063", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:34:46.951692  104742 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"798a340f-8a55-47e0-af9e-5db80cc1e063", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:34:46.953358  104742 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"798a340f-8a55-47e0-af9e-5db80cc1e063", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:34:46.958951  104742 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"798a340f-8a55-47e0-af9e-5db80cc1e063", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:34:46.960804  104742 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"798a340f-8a55-47e0-af9e-5db80cc1e063", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:34:46.962996  104742 storage_factory.go:285] storing podtemplates in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"798a340f-8a55-47e0-af9e-5db80cc1e063", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:34:46.965437  104742 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"798a340f-8a55-47e0-af9e-5db80cc1e063", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:34:46.967085  104742 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"798a340f-8a55-47e0-af9e-5db80cc1e063", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:34:46.968720  104742 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"798a340f-8a55-47e0-af9e-5db80cc1e063", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:34:46.970863  104742 storage_factory.go:285] storing resourcequotas in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"798a340f-8a55-47e0-af9e-5db80cc1e063", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:34:46.972773  104742 storage_factory.go:285] storing resourcequotas in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"798a340f-8a55-47e0-af9e-5db80cc1e063", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:34:46.975234  104742 storage_factory.go:285] storing secrets in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"798a340f-8a55-47e0-af9e-5db80cc1e063", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:34:46.977461  104742 storage_factory.go:285] storing serviceaccounts in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"798a340f-8a55-47e0-af9e-5db80cc1e063", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:34:46.979430  104742 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"798a340f-8a55-47e0-af9e-5db80cc1e063", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:34:46.981520  104742 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"798a340f-8a55-47e0-af9e-5db80cc1e063", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:34:46.983322  104742 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"798a340f-8a55-47e0-af9e-5db80cc1e063", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:34:46.983619  104742 master.go:423] Skipping disabled API group "auditregistration.k8s.io".
I0814 08:34:46.983789  104742 master.go:434] Enabling API group "authentication.k8s.io".
I0814 08:34:46.983948  104742 master.go:434] Enabling API group "authorization.k8s.io".
I0814 08:34:46.984244  104742 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"798a340f-8a55-47e0-af9e-5db80cc1e063", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:34:46.984516  104742 client.go:354] parsed scheme: ""
I0814 08:34:46.984642  104742 client.go:354] scheme "" not registered, fallback to default scheme
I0814 08:34:46.984830  104742 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 08:34:46.985074  104742 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:34:46.985608  104742 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 08:34:46.985935  104742 store.go:1342] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I0814 08:34:46.986118  104742 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:34:46.986198  104742 reflector.go:160] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I0814 08:34:46.986235  104742 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"798a340f-8a55-47e0-af9e-5db80cc1e063", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:34:46.986300  104742 client.go:354] parsed scheme: ""
I0814 08:34:46.986309  104742 client.go:354] scheme "" not registered, fallback to default scheme
I0814 08:34:46.986339  104742 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 08:34:46.986438  104742 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:34:46.986759  104742 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 08:34:46.986902  104742 store.go:1342] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I0814 08:34:46.987073  104742 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"798a340f-8a55-47e0-af9e-5db80cc1e063", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:34:46.987149  104742 client.go:354] parsed scheme: ""
I0814 08:34:46.987161  104742 client.go:354] scheme "" not registered, fallback to default scheme
I0814 08:34:46.987195  104742 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 08:34:46.987238  104742 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:34:46.987276  104742 reflector.go:160] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I0814 08:34:46.987525  104742 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:34:46.987902  104742 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 08:34:46.987980  104742 store.go:1342] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I0814 08:34:46.987990  104742 master.go:434] Enabling API group "autoscaling".
I0814 08:34:46.988250  104742 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:34:46.988318  104742 reflector.go:160] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I0814 08:34:46.989538  104742 storage_factory.go:285] storing jobs.batch in batch/v1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"798a340f-8a55-47e0-af9e-5db80cc1e063", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:34:46.989616  104742 client.go:354] parsed scheme: ""
I0814 08:34:46.989627  104742 client.go:354] scheme "" not registered, fallback to default scheme
I0814 08:34:46.989667  104742 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 08:34:46.989732  104742 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:34:46.990064  104742 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 08:34:46.990203  104742 store.go:1342] Monitoring jobs.batch count at <storage-prefix>//jobs
I0814 08:34:46.990334  104742 storage_factory.go:285] storing cronjobs.batch in batch/v1beta1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"798a340f-8a55-47e0-af9e-5db80cc1e063", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:34:46.990405  104742 client.go:354] parsed scheme: ""
I0814 08:34:46.990415  104742 client.go:354] scheme "" not registered, fallback to default scheme
I0814 08:34:46.990445  104742 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 08:34:46.990494  104742 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:34:46.990550  104742 reflector.go:160] Listing and watching *batch.Job from storage/cacher.go:/jobs
I0814 08:34:46.990790  104742 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:34:46.991097  104742 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 08:34:46.991212  104742 store.go:1342] Monitoring cronjobs.batch count at <storage-prefix>//cronjobs
I0814 08:34:46.991230  104742 master.go:434] Enabling API group "batch".
I0814 08:34:46.991339  104742 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"798a340f-8a55-47e0-af9e-5db80cc1e063", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:34:46.991404  104742 client.go:354] parsed scheme: ""
I0814 08:34:46.991414  104742 client.go:354] scheme "" not registered, fallback to default scheme
I0814 08:34:46.991444  104742 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 08:34:46.991494  104742 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:34:46.991525  104742 reflector.go:160] Listing and watching *batch.CronJob from storage/cacher.go:/cronjobs
I0814 08:34:46.991684  104742 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:34:46.992143  104742 watch_cache.go:405] Replace watchCache (rev: 1065) 
I0814 08:34:46.992277  104742 watch_cache.go:405] Replace watchCache (rev: 1064) 
I0814 08:34:46.992527  104742 watch_cache.go:405] Replace watchCache (rev: 1065) 
I0814 08:34:46.992595  104742 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 08:34:46.992682  104742 watch_cache.go:405] Replace watchCache (rev: 1065) 
I0814 08:34:46.992697  104742 store.go:1342] Monitoring certificatesigningrequests.certificates.k8s.io count at <storage-prefix>//certificatesigningrequests
I0814 08:34:46.992721  104742 master.go:434] Enabling API group "certificates.k8s.io".
I0814 08:34:46.992752  104742 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:34:46.992787  104742 reflector.go:160] Listing and watching *certificates.CertificateSigningRequest from storage/cacher.go:/certificatesigningrequests
I0814 08:34:46.992848  104742 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"798a340f-8a55-47e0-af9e-5db80cc1e063", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:34:46.992920  104742 client.go:354] parsed scheme: ""
I0814 08:34:46.992941  104742 client.go:354] scheme "" not registered, fallback to default scheme
I0814 08:34:46.992974  104742 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 08:34:46.994271  104742 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:34:46.994274  104742 watch_cache.go:405] Replace watchCache (rev: 1065) 
I0814 08:34:46.994436  104742 watch_cache.go:405] Replace watchCache (rev: 1065) 
I0814 08:34:46.994538  104742 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 08:34:46.994595  104742 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:34:46.994627  104742 store.go:1342] Monitoring leases.coordination.k8s.io count at <storage-prefix>//leases
I0814 08:34:46.994700  104742 reflector.go:160] Listing and watching *coordination.Lease from storage/cacher.go:/leases
I0814 08:34:46.994740  104742 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"798a340f-8a55-47e0-af9e-5db80cc1e063", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:34:46.994804  104742 client.go:354] parsed scheme: ""
I0814 08:34:46.994814  104742 client.go:354] scheme "" not registered, fallback to default scheme
I0814 08:34:46.994848  104742 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 08:34:46.994962  104742 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:34:46.995246  104742 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 08:34:46.995337  104742 store.go:1342] Monitoring leases.coordination.k8s.io count at <storage-prefix>//leases
I0814 08:34:46.995350  104742 master.go:434] Enabling API group "coordination.k8s.io".
I0814 08:34:46.995475  104742 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"798a340f-8a55-47e0-af9e-5db80cc1e063", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:34:46.995540  104742 client.go:354] parsed scheme: ""
I0814 08:34:46.995551  104742 client.go:354] scheme "" not registered, fallback to default scheme
I0814 08:34:46.995585  104742 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 08:34:46.995632  104742 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:34:46.995669  104742 reflector.go:160] Listing and watching *coordination.Lease from storage/cacher.go:/leases
I0814 08:34:46.995818  104742 watch_cache.go:405] Replace watchCache (rev: 1066) 
I0814 08:34:46.995869  104742 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:34:46.996203  104742 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 08:34:46.996309  104742 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:34:46.996333  104742 store.go:1342] Monitoring ingresses.networking.k8s.io count at <storage-prefix>//ingress
I0814 08:34:46.996363  104742 master.go:434] Enabling API group "extensions".
I0814 08:34:46.996489  104742 storage_factory.go:285] storing networkpolicies.networking.k8s.io in networking.k8s.io/v1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"798a340f-8a55-47e0-af9e-5db80cc1e063", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:34:46.996562  104742 client.go:354] parsed scheme: ""
I0814 08:34:46.996571  104742 client.go:354] scheme "" not registered, fallback to default scheme
I0814 08:34:46.996606  104742 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 08:34:46.996650  104742 reflector.go:160] Listing and watching *networking.Ingress from storage/cacher.go:/ingress
I0814 08:34:46.996855  104742 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:34:46.997235  104742 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 08:34:46.997347  104742 store.go:1342] Monitoring networkpolicies.networking.k8s.io count at <storage-prefix>//networkpolicies
I0814 08:34:46.997480  104742 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:34:46.997468  104742 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"798a340f-8a55-47e0-af9e-5db80cc1e063", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:34:46.997499  104742 reflector.go:160] Listing and watching *networking.NetworkPolicy from storage/cacher.go:/networkpolicies
I0814 08:34:46.997533  104742 client.go:354] parsed scheme: ""
I0814 08:34:46.997543  104742 client.go:354] scheme "" not registered, fallback to default scheme
I0814 08:34:46.997570  104742 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 08:34:46.997619  104742 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:34:46.997888  104742 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 08:34:46.997991  104742 store.go:1342] Monitoring ingresses.networking.k8s.io count at <storage-prefix>//ingress
I0814 08:34:46.998007  104742 master.go:434] Enabling API group "networking.k8s.io".
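
Note: the reflector.go:160 "Listing and watching" lines are each cacher starting client-go's reflector cycle for its resource: one LIST to seed a local store, then a WATCH to keep it current. A minimal sketch of the same pattern from the client side, assuming an already-built clientset cs (runReflector is a hypothetical helper):

package sketch

import (
	v1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/fields"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/cache"
)

func runReflector(cs *kubernetes.Clientset, stop <-chan struct{}) {
	// LIST pods once to seed the store, then WATCH to keep it current,
	// the same cycle each cacher starts per resource in the log above.
	lw := cache.NewListWatchFromClient(cs.CoreV1().RESTClient(), "pods", "", fields.Everything())
	store := cache.NewStore(cache.MetaNamespaceKeyFunc)
	r := cache.NewReflector(lw, &v1.Pod{}, store, 0)
	r.Run(stop)
}
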
I0814 08:34:46.998052  104742 storage_factory.go:285] storing runtimeclasses.node.k8s.io in node.k8s.io/v1beta1, reading as node.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"798a340f-8a55-47e0-af9e-5db80cc1e063", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:34:46.998106  104742 client.go:354] parsed scheme: ""
I0814 08:34:46.998116  104742 client.go:354] scheme "" not registered, fallback to default scheme
I0814 08:34:46.998159  104742 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 08:34:46.998215  104742 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:34:46.998266  104742 reflector.go:160] Listing and watching *networking.Ingress from storage/cacher.go:/ingress
I0814 08:34:46.998429  104742 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:34:46.999551  104742 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 08:34:46.999572  104742 watch_cache.go:405] Replace watchCache (rev: 1066) 
I0814 08:34:46.999642  104742 store.go:1342] Monitoring runtimeclasses.node.k8s.io count at <storage-prefix>//runtimeclasses
I0814 08:34:46.999659  104742 master.go:434] Enabling API group "node.k8s.io".
I0814 08:34:46.999660  104742 watch_cache.go:405] Replace watchCache (rev: 1066) 
I0814 08:34:46.999792  104742 storage_factory.go:285] storing poddisruptionbudgets.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"798a340f-8a55-47e0-af9e-5db80cc1e063", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:34:46.999856  104742 client.go:354] parsed scheme: ""
I0814 08:34:46.999864  104742 watch_cache.go:405] Replace watchCache (rev: 1066) 
I0814 08:34:46.999866  104742 client.go:354] scheme "" not registered, fallback to default scheme
I0814 08:34:46.999898  104742 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 08:34:46.999944  104742 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:34:46.999977  104742 reflector.go:160] Listing and watching *node.RuntimeClass from storage/cacher.go:/runtimeclasses
I0814 08:34:47.000001  104742 watch_cache.go:405] Replace watchCache (rev: 1066) 
I0814 08:34:47.000195  104742 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:34:47.000957  104742 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 08:34:47.001078  104742 store.go:1342] Monitoring poddisruptionbudgets.policy count at <storage-prefix>//poddisruptionbudgets
I0814 08:34:47.001185  104742 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:34:47.001188  104742 storage_factory.go:285] storing podsecuritypolicies.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"798a340f-8a55-47e0-af9e-5db80cc1e063", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:34:47.001232  104742 client.go:354] parsed scheme: ""
I0814 08:34:47.001239  104742 client.go:354] scheme "" not registered, fallback to default scheme
I0814 08:34:47.001248  104742 reflector.go:160] Listing and watching *policy.PodDisruptionBudget from storage/cacher.go:/poddisruptionbudgets
I0814 08:34:47.001259  104742 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 08:34:47.001361  104742 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:34:47.002466  104742 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 08:34:47.002562  104742 store.go:1342] Monitoring podsecuritypolicies.policy count at <storage-prefix>//podsecuritypolicy
I0814 08:34:47.002573  104742 master.go:434] Enabling API group "policy".
I0814 08:34:47.002601  104742 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"798a340f-8a55-47e0-af9e-5db80cc1e063", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:34:47.002657  104742 client.go:354] parsed scheme: ""
I0814 08:34:47.002667  104742 client.go:354] scheme "" not registered, fallback to default scheme
I0814 08:34:47.002696  104742 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 08:34:47.002737  104742 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:34:47.002766  104742 reflector.go:160] Listing and watching *policy.PodSecurityPolicy from storage/cacher.go:/podsecuritypolicy
I0814 08:34:47.002875  104742 watch_cache.go:405] Replace watchCache (rev: 1066) 
I0814 08:34:47.002961  104742 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:34:47.002998  104742 watch_cache.go:405] Replace watchCache (rev: 1066) 
I0814 08:34:47.003196  104742 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 08:34:47.003285  104742 store.go:1342] Monitoring roles.rbac.authorization.k8s.io count at <storage-prefix>//roles
I0814 08:34:47.003310  104742 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:34:47.003366  104742 reflector.go:160] Listing and watching *rbac.Role from storage/cacher.go:/roles
I0814 08:34:47.003402  104742 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"798a340f-8a55-47e0-af9e-5db80cc1e063", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:34:47.003463  104742 client.go:354] parsed scheme: ""
I0814 08:34:47.003473  104742 client.go:354] scheme "" not registered, fallback to default scheme
I0814 08:34:47.003499  104742 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 08:34:47.003539  104742 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:34:47.004129  104742 watch_cache.go:405] Replace watchCache (rev: 1066) 
I0814 08:34:47.004140  104742 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 08:34:47.004252  104742 store.go:1342] Monitoring rolebindings.rbac.authorization.k8s.io count at <storage-prefix>//rolebindings
I0814 08:34:47.004284  104742 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"798a340f-8a55-47e0-af9e-5db80cc1e063", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:34:47.004367  104742 client.go:354] parsed scheme: ""
I0814 08:34:47.004377  104742 client.go:354] scheme "" not registered, fallback to default scheme
I0814 08:34:47.004410  104742 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 08:34:47.004451  104742 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:34:47.004486  104742 reflector.go:160] Listing and watching *rbac.RoleBinding from storage/cacher.go:/rolebindings
I0814 08:34:47.004572  104742 watch_cache.go:405] Replace watchCache (rev: 1066) 
I0814 08:34:47.004680  104742 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:34:47.004951  104742 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 08:34:47.005052  104742 store.go:1342] Monitoring clusterroles.rbac.authorization.k8s.io count at <storage-prefix>//clusterroles
I0814 08:34:47.005150  104742 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:34:47.005183  104742 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"798a340f-8a55-47e0-af9e-5db80cc1e063", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:34:47.005205  104742 reflector.go:160] Listing and watching *rbac.ClusterRole from storage/cacher.go:/clusterroles
I0814 08:34:47.005250  104742 client.go:354] parsed scheme: ""
I0814 08:34:47.005260  104742 client.go:354] scheme "" not registered, fallback to default scheme
I0814 08:34:47.005289  104742 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 08:34:47.005347  104742 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:34:47.006084  104742 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 08:34:47.006175  104742 store.go:1342] Monitoring clusterrolebindings.rbac.authorization.k8s.io count at <storage-prefix>//clusterrolebindings
I0814 08:34:47.006207  104742 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"798a340f-8a55-47e0-af9e-5db80cc1e063", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:34:47.006250  104742 client.go:354] parsed scheme: ""
I0814 08:34:47.006257  104742 client.go:354] scheme "" not registered, fallback to default scheme
I0814 08:34:47.006276  104742 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 08:34:47.006309  104742 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:34:47.006327  104742 reflector.go:160] Listing and watching *rbac.ClusterRoleBinding from storage/cacher.go:/clusterrolebindings
I0814 08:34:47.006436  104742 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:34:47.006547  104742 watch_cache.go:405] Replace watchCache (rev: 1066) 
I0814 08:34:47.006591  104742 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 08:34:47.006610  104742 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:34:47.006639  104742 store.go:1342] Monitoring roles.rbac.authorization.k8s.io count at <storage-prefix>//roles
I0814 08:34:47.006740  104742 reflector.go:160] Listing and watching *rbac.Role from storage/cacher.go:/roles
I0814 08:34:47.006734  104742 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"798a340f-8a55-47e0-af9e-5db80cc1e063", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:34:47.006863  104742 client.go:354] parsed scheme: ""
I0814 08:34:47.006874  104742 client.go:354] scheme "" not registered, fallback to default scheme
I0814 08:34:47.006902  104742 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 08:34:47.006941  104742 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:34:47.008419  104742 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 08:34:47.008434  104742 watch_cache.go:405] Replace watchCache (rev: 1066) 
I0814 08:34:47.008458  104742 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:34:47.008461  104742 watch_cache.go:405] Replace watchCache (rev: 1066) 
I0814 08:34:47.008607  104742 store.go:1342] Monitoring rolebindings.rbac.authorization.k8s.io count at <storage-prefix>//rolebindings
I0814 08:34:47.008619  104742 watch_cache.go:405] Replace watchCache (rev: 1066) 
I0814 08:34:47.008635  104742 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"798a340f-8a55-47e0-af9e-5db80cc1e063", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:34:47.008691  104742 client.go:354] parsed scheme: ""
I0814 08:34:47.008702  104742 client.go:354] scheme "" not registered, fallback to default scheme
I0814 08:34:47.008730  104742 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 08:34:47.008784  104742 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:34:47.008827  104742 reflector.go:160] Listing and watching *rbac.RoleBinding from storage/cacher.go:/rolebindings
I0814 08:34:47.009051  104742 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 08:34:47.009078  104742 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:34:47.009134  104742 store.go:1342] Monitoring clusterroles.rbac.authorization.k8s.io count at <storage-prefix>//clusterroles
I0814 08:34:47.009991  104742 reflector.go:160] Listing and watching *rbac.ClusterRole from storage/cacher.go:/clusterroles
I0814 08:34:47.010136  104742 watch_cache.go:405] Replace watchCache (rev: 1066) 
I0814 08:34:47.010161  104742 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"798a340f-8a55-47e0-af9e-5db80cc1e063", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:34:47.010237  104742 client.go:354] parsed scheme: ""
I0814 08:34:47.010247  104742 client.go:354] scheme "" not registered, fallback to default scheme
I0814 08:34:47.010277  104742 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 08:34:47.010315  104742 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:34:47.010535  104742 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 08:34:47.010605  104742 store.go:1342] Monitoring clusterrolebindings.rbac.authorization.k8s.io count at <storage-prefix>//clusterrolebindings
I0814 08:34:47.010682  104742 master.go:434] Enabling API group "rbac.authorization.k8s.io".
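
Note: the watch_cache.go:405 "Replace watchCache (rev: N)" lines record each cacher swapping in the snapshot produced by that initial LIST, keyed by the etcd revision / resource version it was taken at; watches can then resume from that version. A minimal sketch of resuming from such a version, assuming a built clientset cs and the pre-context Watch signature of client-go releases contemporary with this log (newer releases take a leading context.Context); watchRoles is a hypothetical helper:

package sketch

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

func watchRoles(cs *kubernetes.Clientset) error {
	// Resume a watch on roles in all namespaces from the resource version
	// at which the cache snapshot was taken ("rev: 1066" in the log above).
	w, err := cs.RbacV1().Roles("").Watch(metav1.ListOptions{ResourceVersion: "1066"})
	if err != nil {
		return err
	}
	defer w.Stop()
	for ev := range w.ResultChan() {
		_ = ev // ADDED / MODIFIED / DELETED events arrive here
	}
	return nil
}
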
I0814 08:34:47.010904  104742 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:34:47.011039  104742 reflector.go:160] Listing and watching *rbac.ClusterRoleBinding from storage/cacher.go:/clusterrolebindings
I0814 08:34:47.011065  104742 watch_cache.go:405] Replace watchCache (rev: 1066) 
I0814 08:34:47.012342  104742 watch_cache.go:405] Replace watchCache (rev: 1066) 
I0814 08:34:47.014358  104742 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"798a340f-8a55-47e0-af9e-5db80cc1e063", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:34:47.014454  104742 client.go:354] parsed scheme: ""
I0814 08:34:47.014555  104742 client.go:354] scheme "" not registered, fallback to default scheme
I0814 08:34:47.014607  104742 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 08:34:47.014689  104742 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:34:47.017214  104742 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 08:34:47.017260  104742 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:34:47.017323  104742 store.go:1342] Monitoring priorityclasses.scheduling.k8s.io count at <storage-prefix>//priorityclasses
I0814 08:34:47.017391  104742 reflector.go:160] Listing and watching *scheduling.PriorityClass from storage/cacher.go:/priorityclasses
I0814 08:34:47.017449  104742 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"798a340f-8a55-47e0-af9e-5db80cc1e063", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:34:47.017514  104742 client.go:354] parsed scheme: ""
I0814 08:34:47.017524  104742 client.go:354] scheme "" not registered, fallback to default scheme
I0814 08:34:47.017554  104742 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 08:34:47.017604  104742 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:34:47.017984  104742 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 08:34:47.018116  104742 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:34:47.018165  104742 store.go:1342] Monitoring priorityclasses.scheduling.k8s.io count at <storage-prefix>//priorityclasses
I0814 08:34:47.018183  104742 master.go:434] Enabling API group "scheduling.k8s.io".
I0814 08:34:47.018249  104742 reflector.go:160] Listing and watching *scheduling.PriorityClass from storage/cacher.go:/priorityclasses
I0814 08:34:47.018312  104742 master.go:423] Skipping disabled API group "settings.k8s.io".
I0814 08:34:47.018448  104742 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"798a340f-8a55-47e0-af9e-5db80cc1e063", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:34:47.018508  104742 client.go:354] parsed scheme: ""
I0814 08:34:47.018517  104742 client.go:354] scheme "" not registered, fallback to default scheme
I0814 08:34:47.018546  104742 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 08:34:47.018593  104742 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:34:47.018851  104742 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 08:34:47.018949  104742 store.go:1342] Monitoring storageclasses.storage.k8s.io count at <storage-prefix>//storageclasses
I0814 08:34:47.019132  104742 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"798a340f-8a55-47e0-af9e-5db80cc1e063", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:34:47.019210  104742 client.go:354] parsed scheme: ""
I0814 08:34:47.019221  104742 client.go:354] scheme "" not registered, fallback to default scheme
I0814 08:34:47.019251  104742 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 08:34:47.019336  104742 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:34:47.019368  104742 reflector.go:160] Listing and watching *storage.StorageClass from storage/cacher.go:/storageclasses
I0814 08:34:47.019582  104742 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:34:47.019820  104742 watch_cache.go:405] Replace watchCache (rev: 1066) 
I0814 08:34:47.019914  104742 watch_cache.go:405] Replace watchCache (rev: 1066) 
I0814 08:34:47.020177  104742 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 08:34:47.020280  104742 store.go:1342] Monitoring volumeattachments.storage.k8s.io count at <storage-prefix>//volumeattachments
I0814 08:34:47.020315  104742 storage_factory.go:285] storing csinodes.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"798a340f-8a55-47e0-af9e-5db80cc1e063", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:34:47.020369  104742 client.go:354] parsed scheme: ""
I0814 08:34:47.020378  104742 client.go:354] scheme "" not registered, fallback to default scheme
I0814 08:34:47.020410  104742 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 08:34:47.020460  104742 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:34:47.020487  104742 reflector.go:160] Listing and watching *storage.VolumeAttachment from storage/cacher.go:/volumeattachments
I0814 08:34:47.020667  104742 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:34:47.020910  104742 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 08:34:47.020977  104742 store.go:1342] Monitoring csinodes.storage.k8s.io count at <storage-prefix>//csinodes
I0814 08:34:47.020996  104742 storage_factory.go:285] storing csidrivers.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"798a340f-8a55-47e0-af9e-5db80cc1e063", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:34:47.021114  104742 client.go:354] parsed scheme: ""
I0814 08:34:47.021124  104742 client.go:354] scheme "" not registered, fallback to default scheme
I0814 08:34:47.021150  104742 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 08:34:47.021179  104742 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:34:47.021200  104742 reflector.go:160] Listing and watching *storage.CSINode from storage/cacher.go:/csinodes
I0814 08:34:47.021366  104742 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:34:47.022679  104742 watch_cache.go:405] Replace watchCache (rev: 1067) 
I0814 08:34:47.022682  104742 watch_cache.go:405] Replace watchCache (rev: 1067) 
I0814 08:34:47.022814  104742 watch_cache.go:405] Replace watchCache (rev: 1067) 
I0814 08:34:47.022964  104742 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 08:34:47.023082  104742 store.go:1342] Monitoring csidrivers.storage.k8s.io count at <storage-prefix>//csidrivers
I0814 08:34:47.023175  104742 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:34:47.023228  104742 reflector.go:160] Listing and watching *storage.CSIDriver from storage/cacher.go:/csidrivers
I0814 08:34:47.023210  104742 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"798a340f-8a55-47e0-af9e-5db80cc1e063", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:34:47.023289  104742 client.go:354] parsed scheme: ""
I0814 08:34:47.023304  104742 client.go:354] scheme "" not registered, fallback to default scheme
I0814 08:34:47.023332  104742 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 08:34:47.023403  104742 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:34:47.023717  104742 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 08:34:47.023835  104742 store.go:1342] Monitoring storageclasses.storage.k8s.io count at <storage-prefix>//storageclasses
I0814 08:34:47.023967  104742 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"798a340f-8a55-47e0-af9e-5db80cc1e063", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:34:47.024060  104742 client.go:354] parsed scheme: ""
I0814 08:34:47.024071  104742 client.go:354] scheme "" not registered, fallback to default scheme
I0814 08:34:47.024102  104742 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 08:34:47.024165  104742 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:34:47.024211  104742 reflector.go:160] Listing and watching *storage.StorageClass from storage/cacher.go:/storageclasses
I0814 08:34:47.024445  104742 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:34:47.025736  104742 watch_cache.go:405] Replace watchCache (rev: 1067) 
I0814 08:34:47.025776  104742 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 08:34:47.025876  104742 store.go:1342] Monitoring volumeattachments.storage.k8s.io count at <storage-prefix>//volumeattachments
I0814 08:34:47.025892  104742 master.go:434] Enabling API group "storage.k8s.io".
I0814 08:34:47.025980  104742 watch_cache.go:405] Replace watchCache (rev: 1067) 
I0814 08:34:47.026034  104742 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"798a340f-8a55-47e0-af9e-5db80cc1e063", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:34:47.026104  104742 client.go:354] parsed scheme: ""
I0814 08:34:47.026115  104742 client.go:354] scheme "" not registered, fallback to default scheme
I0814 08:34:47.026153  104742 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 08:34:47.026202  104742 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:34:47.026239  104742 reflector.go:160] Listing and watching *storage.VolumeAttachment from storage/cacher.go:/volumeattachments
I0814 08:34:47.026254  104742 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:34:47.027341  104742 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 08:34:47.027396  104742 watch_cache.go:405] Replace watchCache (rev: 1067) 
I0814 08:34:47.027470  104742 store.go:1342] Monitoring deployments.apps count at <storage-prefix>//deployments
I0814 08:34:47.027563  104742 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:34:47.027592  104742 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"798a340f-8a55-47e0-af9e-5db80cc1e063", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:34:47.028109  104742 client.go:354] parsed scheme: ""
I0814 08:34:47.028284  104742 client.go:354] scheme "" not registered, fallback to default scheme
I0814 08:34:47.028530  104742 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 08:34:47.027602  104742 reflector.go:160] Listing and watching *apps.Deployment from storage/cacher.go:/deployments
I0814 08:34:47.028931  104742 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:34:47.029446  104742 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 08:34:47.029569  104742 store.go:1342] Monitoring statefulsets.apps count at <storage-prefix>//statefulsets
I0814 08:34:47.029705  104742 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"798a340f-8a55-47e0-af9e-5db80cc1e063", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:34:47.029768  104742 client.go:354] parsed scheme: ""
I0814 08:34:47.029778  104742 client.go:354] scheme "" not registered, fallback to default scheme
I0814 08:34:47.029998  104742 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 08:34:47.030105  104742 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:34:47.030107  104742 watch_cache.go:405] Replace watchCache (rev: 1067) 
I0814 08:34:47.030137  104742 reflector.go:160] Listing and watching *apps.StatefulSet from storage/cacher.go:/statefulsets
I0814 08:34:47.030320  104742 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:34:47.030561  104742 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 08:34:47.030684  104742 store.go:1342] Monitoring daemonsets.apps count at <storage-prefix>//daemonsets
I0814 08:34:47.030795  104742 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"798a340f-8a55-47e0-af9e-5db80cc1e063", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:34:47.030852  104742 client.go:354] parsed scheme: ""
I0814 08:34:47.030861  104742 client.go:354] scheme "" not registered, fallback to default scheme
I0814 08:34:47.031373  104742 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:34:47.031521  104742 reflector.go:160] Listing and watching *apps.DaemonSet from storage/cacher.go:/daemonsets
I0814 08:34:47.031532  104742 watch_cache.go:405] Replace watchCache (rev: 1067) 
I0814 08:34:47.032335  104742 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 08:34:47.032413  104742 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:34:47.033905  104742 watch_cache.go:405] Replace watchCache (rev: 1067) 
I0814 08:34:47.034259  104742 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 08:34:47.034374  104742 store.go:1342] Monitoring replicasets.apps count at <storage-prefix>//replicasets
I0814 08:34:47.034479  104742 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:34:47.034513  104742 storage_factory.go:285] storing controllerrevisions.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"798a340f-8a55-47e0-af9e-5db80cc1e063", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:34:47.034576  104742 client.go:354] parsed scheme: ""
I0814 08:34:47.034588  104742 client.go:354] scheme "" not registered, fallback to default scheme
I0814 08:34:47.034618  104742 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 08:34:47.034623  104742 reflector.go:160] Listing and watching *apps.ReplicaSet from storage/cacher.go:/replicasets
I0814 08:34:47.034674  104742 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:34:47.034957  104742 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 08:34:47.035105  104742 store.go:1342] Monitoring controllerrevisions.apps count at <storage-prefix>//controllerrevisions
I0814 08:34:47.035123  104742 master.go:434] Enabling API group "apps".
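
Note: "storing deployments.apps in apps/v1, reading as apps/__internal" means the codec pins apps/v1 as the wire format written to etcd while the server works with the internal type in memory. A minimal sketch of the encode half using the client-go scheme for brevity (the apiserver uses its internal scheme, so this is an approximation):

package main

import (
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	"k8s.io/apimachinery/pkg/runtime"
	"k8s.io/client-go/kubernetes/scheme"
)

func main() {
	// A codec pinned to apps/v1: whatever form the object takes in memory,
	// it is serialized with apiVersion apps/v1, the storage version above.
	codec := scheme.Codecs.LegacyCodec(appsv1.SchemeGroupVersion)
	d := &appsv1.Deployment{}
	d.Name = "demo"
	out, err := runtime.Encode(codec, d)
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out))
}
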
I0814 08:34:47.035154  104742 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"798a340f-8a55-47e0-af9e-5db80cc1e063", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:34:47.035225  104742 client.go:354] parsed scheme: ""
I0814 08:34:47.035236  104742 client.go:354] scheme "" not registered, fallback to default scheme
I0814 08:34:47.035269  104742 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 08:34:47.035302  104742 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:34:47.035333  104742 reflector.go:160] Listing and watching *apps.ControllerRevision from storage/cacher.go:/controllerrevisions
I0814 08:34:47.035556  104742 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:34:47.036151  104742 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 08:34:47.036257  104742 watch_cache.go:405] Replace watchCache (rev: 1068) 
I0814 08:34:47.036265  104742 watch_cache.go:405] Replace watchCache (rev: 1068) 
I0814 08:34:47.036458  104742 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:34:47.037151  104742 store.go:1342] Monitoring validatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//validatingwebhookconfigurations
I0814 08:34:47.037177  104742 reflector.go:160] Listing and watching *admissionregistration.ValidatingWebhookConfiguration from storage/cacher.go:/validatingwebhookconfigurations
I0814 08:34:47.037183  104742 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"798a340f-8a55-47e0-af9e-5db80cc1e063", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:34:47.037235  104742 client.go:354] parsed scheme: ""
I0814 08:34:47.037242  104742 client.go:354] scheme "" not registered, fallback to default scheme
I0814 08:34:47.037262  104742 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 08:34:47.037348  104742 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:34:47.037711  104742 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 08:34:47.037800  104742 store.go:1342] Monitoring mutatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//mutatingwebhookconfigurations
I0814 08:34:47.037828  104742 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"798a340f-8a55-47e0-af9e-5db80cc1e063", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:34:47.037889  104742 client.go:354] parsed scheme: ""
I0814 08:34:47.037898  104742 client.go:354] scheme "" not registered, fallback to default scheme
I0814 08:34:47.037930  104742 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 08:34:47.037988  104742 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:34:47.038041  104742 reflector.go:160] Listing and watching *admissionregistration.MutatingWebhookConfiguration from storage/cacher.go:/mutatingwebhookconfigurations
I0814 08:34:47.038366  104742 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:34:47.038648  104742 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 08:34:47.038872  104742 watch_cache.go:405] Replace watchCache (rev: 1068) 
I0814 08:34:47.039189  104742 watch_cache.go:405] Replace watchCache (rev: 1068) 
I0814 08:34:47.039533  104742 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:34:47.040088  104742 store.go:1342] Monitoring validatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//validatingwebhookconfigurations
I0814 08:34:47.040122  104742 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"798a340f-8a55-47e0-af9e-5db80cc1e063", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:34:47.040181  104742 client.go:354] parsed scheme: ""
I0814 08:34:47.040192  104742 client.go:354] scheme "" not registered, fallback to default scheme
I0814 08:34:47.040188  104742 reflector.go:160] Listing and watching *admissionregistration.ValidatingWebhookConfiguration from storage/cacher.go:/validatingwebhookconfigurations
I0814 08:34:47.040223  104742 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 08:34:47.040293  104742 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:34:47.040547  104742 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 08:34:47.040615  104742 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:34:47.040649  104742 store.go:1342] Monitoring mutatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//mutatingwebhookconfigurations
I0814 08:34:47.040663  104742 master.go:434] Enabling API group "admissionregistration.k8s.io".
I0814 08:34:47.040691  104742 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"798a340f-8a55-47e0-af9e-5db80cc1e063", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:34:47.040712  104742 reflector.go:160] Listing and watching *admissionregistration.MutatingWebhookConfiguration from storage/cacher.go:/mutatingwebhookconfigurations
I0814 08:34:47.040848  104742 client.go:354] parsed scheme: ""
I0814 08:34:47.040859  104742 client.go:354] scheme "" not registered, fallback to default scheme
I0814 08:34:47.040887  104742 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 08:34:47.040925  104742 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:34:47.041191  104742 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 08:34:47.041395  104742 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:34:47.041523  104742 store.go:1342] Monitoring events count at <storage-prefix>//events
I0814 08:34:47.041537  104742 master.go:434] Enabling API group "events.k8s.io".
I0814 08:34:47.042563  104742 watch_cache.go:405] Replace watchCache (rev: 1068) 
I0814 08:34:47.042581  104742 watch_cache.go:405] Replace watchCache (rev: 1068) 
I0814 08:34:47.042760  104742 reflector.go:160] Listing and watching *core.Event from storage/cacher.go:/events
I0814 08:34:47.043466  104742 watch_cache.go:405] Replace watchCache (rev: 1068) 
I0814 08:34:47.242696  104742 storage_factory.go:285] storing tokenreviews.authentication.k8s.io in authentication.k8s.io/v1, reading as authentication.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"798a340f-8a55-47e0-af9e-5db80cc1e063", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:34:47.245381  104742 storage_factory.go:285] storing tokenreviews.authentication.k8s.io in authentication.k8s.io/v1, reading as authentication.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"798a340f-8a55-47e0-af9e-5db80cc1e063", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:34:47.248256  104742 storage_factory.go:285] storing localsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"798a340f-8a55-47e0-af9e-5db80cc1e063", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:34:47.251505  104742 storage_factory.go:285] storing selfsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"798a340f-8a55-47e0-af9e-5db80cc1e063", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:34:47.253963  104742 storage_factory.go:285] storing selfsubjectrulesreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"798a340f-8a55-47e0-af9e-5db80cc1e063", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:34:47.256551  104742 storage_factory.go:285] storing subjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"798a340f-8a55-47e0-af9e-5db80cc1e063", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:34:47.259201  104742 storage_factory.go:285] storing localsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"798a340f-8a55-47e0-af9e-5db80cc1e063", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:34:47.261713  104742 storage_factory.go:285] storing selfsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"798a340f-8a55-47e0-af9e-5db80cc1e063", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:34:47.269183  104742 storage_factory.go:285] storing selfsubjectrulesreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"798a340f-8a55-47e0-af9e-5db80cc1e063", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:34:47.272786  104742 storage_factory.go:285] storing subjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"798a340f-8a55-47e0-af9e-5db80cc1e063", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
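
Note: unlike the resources above, the tokenreviews and *accessreviews storage configured here backs request/response ("virtual") resources: nothing is persisted, listed, or watched, so there are no matching reflector or watch-cache lines. Clients exercise them with a single create call. A minimal sketch, assuming a built clientset cs and the pre-context Create signature contemporary with this log (canIGetPods is a hypothetical helper):

package sketch

import (
	authorizationv1 "k8s.io/api/authorization/v1"
	"k8s.io/client-go/kubernetes"
)

func canIGetPods(cs *kubernetes.Clientset) (bool, error) {
	// Ask the apiserver whether the current identity may "get pods";
	// the answer comes back in the created object's status.
	sar := &authorizationv1.SelfSubjectAccessReview{
		Spec: authorizationv1.SelfSubjectAccessReviewSpec{
			ResourceAttributes: &authorizationv1.ResourceAttributes{
				Verb:     "get",
				Resource: "pods",
			},
		},
	}
	resp, err := cs.AuthorizationV1().SelfSubjectAccessReviews().Create(sar)
	if err != nil {
		return false, err
	}
	return resp.Status.Allowed, nil
}
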
I0814 08:34:47.275976  104742 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"798a340f-8a55-47e0-af9e-5db80cc1e063", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:34:47.278581  104742 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"798a340f-8a55-47e0-af9e-5db80cc1e063", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:34:47.281383  104742 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"798a340f-8a55-47e0-af9e-5db80cc1e063", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:34:47.284166  104742 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"798a340f-8a55-47e0-af9e-5db80cc1e063", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:34:47.287149  104742 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"798a340f-8a55-47e0-af9e-5db80cc1e063", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:34:47.290756  104742 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"798a340f-8a55-47e0-af9e-5db80cc1e063", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
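
Note: the six identical horizontalpodautoscalers.autoscaling lines are not a loop gone wrong: the resource appears to be registered once per served API version and subresource, and every registration encodes to the same storage version, autoscaling/v1, which is why the printed line never varies.
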
I0814 08:34:47.294097  104742 storage_factory.go:285] storing jobs.batch in batch/v1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"798a340f-8a55-47e0-af9e-5db80cc1e063", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:34:47.296152  104742 storage_factory.go:285] storing jobs.batch in batch/v1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"798a340f-8a55-47e0-af9e-5db80cc1e063", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:34:47.298128  104742 storage_factory.go:285] storing cronjobs.batch in batch/v1beta1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"798a340f-8a55-47e0-af9e-5db80cc1e063", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:34:47.300695  104742 storage_factory.go:285] storing cronjobs.batch in batch/v1beta1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"798a340f-8a55-47e0-af9e-5db80cc1e063", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0814 08:34:47.300851  104742 genericapiserver.go:390] Skipping API batch/v2alpha1 because it has no resources.
I0814 08:34:47.303697  104742 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"798a340f-8a55-47e0-af9e-5db80cc1e063", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:34:47.306127  104742 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"798a340f-8a55-47e0-af9e-5db80cc1e063", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:34:47.308795  104742 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"798a340f-8a55-47e0-af9e-5db80cc1e063", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:34:47.313737  104742 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"798a340f-8a55-47e0-af9e-5db80cc1e063", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:34:47.316549  104742 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"798a340f-8a55-47e0-af9e-5db80cc1e063", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:34:47.319437  104742 storage_factory.go:285] storing ingresses.extensions in extensions/v1beta1, reading as extensions/__internal from storagebackend.Config{Type:"", Prefix:"798a340f-8a55-47e0-af9e-5db80cc1e063", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:34:47.321724  104742 storage_factory.go:285] storing ingresses.extensions in extensions/v1beta1, reading as extensions/__internal from storagebackend.Config{Type:"", Prefix:"798a340f-8a55-47e0-af9e-5db80cc1e063", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:34:47.325672  104742 storage_factory.go:285] storing networkpolicies.networking.k8s.io in networking.k8s.io/v1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"798a340f-8a55-47e0-af9e-5db80cc1e063", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:34:47.328498  104742 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"798a340f-8a55-47e0-af9e-5db80cc1e063", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:34:47.331314  104742 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"798a340f-8a55-47e0-af9e-5db80cc1e063", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:34:47.333989  104742 storage_factory.go:285] storing runtimeclasses.node.k8s.io in node.k8s.io/v1beta1, reading as node.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"798a340f-8a55-47e0-af9e-5db80cc1e063", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0814 08:34:47.334207  104742 genericapiserver.go:390] Skipping API node.k8s.io/v1alpha1 because it has no resources.
I0814 08:34:47.336758  104742 storage_factory.go:285] storing poddisruptionbudgets.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"798a340f-8a55-47e0-af9e-5db80cc1e063", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:34:47.339047  104742 storage_factory.go:285] storing poddisruptionbudgets.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"798a340f-8a55-47e0-af9e-5db80cc1e063", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:34:47.341725  104742 storage_factory.go:285] storing podsecuritypolicies.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"798a340f-8a55-47e0-af9e-5db80cc1e063", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:34:47.345696  104742 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"798a340f-8a55-47e0-af9e-5db80cc1e063", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:34:47.348313  104742 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"798a340f-8a55-47e0-af9e-5db80cc1e063", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:34:47.351887  104742 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"798a340f-8a55-47e0-af9e-5db80cc1e063", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:34:47.354592  104742 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"798a340f-8a55-47e0-af9e-5db80cc1e063", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:34:47.357278  104742 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"798a340f-8a55-47e0-af9e-5db80cc1e063", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:34:47.360010  104742 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"798a340f-8a55-47e0-af9e-5db80cc1e063", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:34:47.362597  104742 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"798a340f-8a55-47e0-af9e-5db80cc1e063", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:34:47.365300  104742 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"798a340f-8a55-47e0-af9e-5db80cc1e063", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0814 08:34:47.365496  104742 genericapiserver.go:390] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
I0814 08:34:47.368359  104742 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"798a340f-8a55-47e0-af9e-5db80cc1e063", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:34:47.372103  104742 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"798a340f-8a55-47e0-af9e-5db80cc1e063", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0814 08:34:47.372290  104742 genericapiserver.go:390] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
I0814 08:34:47.374329  104742 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"798a340f-8a55-47e0-af9e-5db80cc1e063", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:34:47.376284  104742 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"798a340f-8a55-47e0-af9e-5db80cc1e063", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:34:47.378152  104742 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"798a340f-8a55-47e0-af9e-5db80cc1e063", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:34:47.380602  104742 storage_factory.go:285] storing csidrivers.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"798a340f-8a55-47e0-af9e-5db80cc1e063", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:34:47.383602  104742 storage_factory.go:285] storing csinodes.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"798a340f-8a55-47e0-af9e-5db80cc1e063", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:34:47.385790  104742 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"798a340f-8a55-47e0-af9e-5db80cc1e063", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:34:47.387954  104742 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"798a340f-8a55-47e0-af9e-5db80cc1e063", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0814 08:34:47.388181  104742 genericapiserver.go:390] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
I0814 08:34:47.391729  104742 storage_factory.go:285] storing controllerrevisions.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"798a340f-8a55-47e0-af9e-5db80cc1e063", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:34:47.394480  104742 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"798a340f-8a55-47e0-af9e-5db80cc1e063", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:34:47.396953  104742 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"798a340f-8a55-47e0-af9e-5db80cc1e063", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:34:47.399161  104742 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"798a340f-8a55-47e0-af9e-5db80cc1e063", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:34:47.401258  104742 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"798a340f-8a55-47e0-af9e-5db80cc1e063", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:34:47.403187  104742 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"798a340f-8a55-47e0-af9e-5db80cc1e063", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:34:47.405616  104742 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"798a340f-8a55-47e0-af9e-5db80cc1e063", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:34:47.407885  104742 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"798a340f-8a55-47e0-af9e-5db80cc1e063", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:34:47.411200  104742 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"798a340f-8a55-47e0-af9e-5db80cc1e063", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:34:47.414923  104742 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"798a340f-8a55-47e0-af9e-5db80cc1e063", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:34:47.417295  104742 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"798a340f-8a55-47e0-af9e-5db80cc1e063", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:34:47.419841  104742 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"798a340f-8a55-47e0-af9e-5db80cc1e063", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0814 08:34:47.420076  104742 genericapiserver.go:390] Skipping API apps/v1beta2 because it has no resources.
W0814 08:34:47.420202  104742 genericapiserver.go:390] Skipping API apps/v1beta1 because it has no resources.
I0814 08:34:47.424064  104742 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"798a340f-8a55-47e0-af9e-5db80cc1e063", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:34:47.426629  104742 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"798a340f-8a55-47e0-af9e-5db80cc1e063", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:34:47.429290  104742 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"798a340f-8a55-47e0-af9e-5db80cc1e063", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:34:47.431656  104742 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"798a340f-8a55-47e0-af9e-5db80cc1e063", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:34:47.434214  104742 storage_factory.go:285] storing events.events.k8s.io in events.k8s.io/v1beta1, reading as events.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"798a340f-8a55-47e0-af9e-5db80cc1e063", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
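
Each "storing …" line above dumps the same storagebackend.Config, varying only the resource and group/version; CompactionInterval and CountMetricPollPeriod are Go time.Duration values printed as raw nanoseconds, which is easy to misread. A minimal, self-contained sketch to decode them (plain Go, no Kubernetes dependencies):

package main

import (
	"fmt"
	"time"
)

func main() {
	// Durations as they appear in the storagebackend.Config dumps above.
	compactionInterval := time.Duration(300000000000) // CompactionInterval
	countMetricPoll := time.Duration(60000000000)     // CountMetricPollPeriod
	fmt.Println(compactionInterval)                   // prints 5m0s
	fmt.Println(countMetricPoll)                      // prints 1m0s
}

So the test apiserver compacts etcd every 5 minutes and polls object counts every minute.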
I0814 08:34:47.697048  104742 client.go:354] parsed scheme: ""
I0814 08:34:47.697081  104742 client.go:354] scheme "" not registered, fallback to default scheme
I0814 08:34:47.697134  104742 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 08:34:47.697196  104742 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:34:47.697642  104742 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 08:34:47.697705  104742 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
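
The client.go and balancer lines above come from etcd's clientv3 dialing the single configured endpoint and pinning it ("clientv3/balancer: pin"). A hedged sketch of the equivalent client setup, assuming the go.etcd.io/etcd/clientv3 package (import path and options may differ by etcd vintage):

package main

import (
	"log"
	"time"

	clientv3 "go.etcd.io/etcd/clientv3"
)

func main() {
	// Dial the same single endpoint the test server uses; with only one
	// address the balancer simply pins it, as the log above shows.
	cli, err := clientv3.New(clientv3.Config{
		Endpoints:   []string{"http://127.0.0.1:2379"},
		DialTimeout: 5 * time.Second,
	})
	if err != nil {
		log.Fatal(err)
	}
	defer cli.Close()
}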
I0814 08:34:48.629479  104742 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.069606ms) 404 [apply.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38590]
I0814 08:34:48.629740  104742 healthz.go:169] healthz check poststarthook/bootstrap-controller failed: not finished
I0814 08:34:48.629771  104742 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 08:34:48.629786  104742 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0814 08:34:48.629796  104742 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0814 08:34:48.629804  104742 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[-]poststarthook/bootstrap-controller failed: reason withheld
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0814 08:34:48.629834  104742 httplog.go:90] GET /healthz: (1.444282ms) 0 [Go-http-client/1.1 127.0.0.1:38588]
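
The block above is the verbose /healthz report: [+] marks a passing check, [-] a failing one, and each poststarthook check reports "not finished" until its hook completes. A small illustrative sketch of that report format (hypothetical helper, not the apiserver's actual healthz code):

package main

import (
	"fmt"
	"strings"
)

// healthzReport renders named checks in the [+]/[-] style seen above,
// withholding failure reasons exactly as the real endpoint does.
func healthzReport(order []string, ok map[string]bool) string {
	var b strings.Builder
	failed := false
	for _, name := range order {
		if ok[name] {
			fmt.Fprintf(&b, "[+]%s ok\n", name)
		} else {
			fmt.Fprintf(&b, "[-]%s failed: reason withheld\n", name)
			failed = true
		}
	}
	if failed {
		b.WriteString("healthz check failed\n")
	}
	return b.String()
}

func main() {
	order := []string{"ping", "etcd", "poststarthook/bootstrap-controller"}
	fmt.Print(healthzReport(order, map[string]bool{"ping": true, "etcd": true}))
}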
I0814 08:34:48.631902  104742 httplog.go:90] GET /api/v1/services: (987.682µs) 200 [apply.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38588]
I0814 08:34:48.635442  104742 httplog.go:90] GET /api/v1/services: (1.001901ms) 200 [apply.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38588]
I0814 08:34:48.639557  104742 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.596682ms) 404 [apply.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38590]
I0814 08:34:48.641152  104742 httplog.go:90] GET /api/v1/services: (950.368µs) 200 [apply.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38592]
I0814 08:34:48.641276  104742 httplog.go:90] GET /api/v1/services: (1.061227ms) 200 [apply.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38590]
I0814 08:34:48.642972  104742 httplog.go:90] POST /api/v1/namespaces: (2.821748ms) 201 [apply.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38594]
I0814 08:34:48.644114  104742 httplog.go:90] GET /api/v1/namespaces/kube-public: (775.79µs) 404 [apply.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38592]
I0814 08:34:48.644317  104742 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 08:34:48.644335  104742 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0814 08:34:48.644345  104742 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0814 08:34:48.644354  104742 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0814 08:34:48.644380  104742 httplog.go:90] GET /healthz: (6.475124ms) 0 [apply.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38588]
I0814 08:34:48.646738  104742 httplog.go:90] POST /api/v1/namespaces: (1.67463ms) 201 [apply.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38588]
I0814 08:34:48.648265  104742 httplog.go:90] GET /api/v1/namespaces/kube-node-lease: (1.053028ms) 404 [apply.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38588]
I0814 08:34:48.650258  104742 httplog.go:90] POST /api/v1/namespaces: (1.721191ms) 201 [apply.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38588]
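
The GET-404/POST-201 pairs above are the bootstrap controller ensuring the system namespaces (kube-system, kube-public, kube-node-lease) exist: look each one up, create it only on NotFound. A hedged client-go sketch of that pattern (hypothetical helper; recent client-go signatures with context, whereas the 2019-era methods took none):

package main

import (
	"context"
	"log"

	corev1 "k8s.io/api/core/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// ensureNamespace mirrors the GET-404 / POST-201 sequence in the httplog
// lines above: create the namespace only if it does not exist yet.
func ensureNamespace(ctx context.Context, cs kubernetes.Interface, name string) error {
	if _, err := cs.CoreV1().Namespaces().Get(ctx, name, metav1.GetOptions{}); err == nil {
		return nil // already exists
	} else if !apierrors.IsNotFound(err) {
		return err
	}
	_, err := cs.CoreV1().Namespaces().Create(ctx, &corev1.Namespace{
		ObjectMeta: metav1.ObjectMeta{Name: name},
	}, metav1.CreateOptions{})
	return err
}

func main() {
	_ = ensureNamespace // wiring a real clientset is out of scope here
	log.Println("sketch only")
}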
I0814 08:34:48.731366  104742 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 08:34:48.731405  104742 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0814 08:34:48.731416  104742 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0814 08:34:48.731424  104742 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0814 08:34:48.731460  104742 httplog.go:90] GET /healthz: (1.067625ms) 0 [Go-http-client/1.1 127.0.0.1:38588]
I0814 08:34:48.745734  104742 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 08:34:48.745771  104742 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0814 08:34:48.745782  104742 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0814 08:34:48.745790  104742 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0814 08:34:48.745834  104742 httplog.go:90] GET /healthz: (957.516µs) 0 [apply.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38588]
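
From here until the remaining poststarthooks finish, the harness simply polls GET /healthz roughly every 100 ms (the alternating Go-http-client and apply.test user agents are two pollers), so the same report repeats with only timestamps changing. The pattern, as a stdlib-only sketch:

package main

import (
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls /healthz until it returns 200 OK or the timeout
// elapses, mirroring the repeated GET /healthz lines in this log.
func waitForHealthz(base string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if resp, err := http.Get(base + "/healthz"); err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(100 * time.Millisecond)
	}
	return fmt.Errorf("healthz did not become ready within %v", timeout)
}

func main() {
	_ = waitForHealthz // illustrative only
}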
I0814 08:34:48.831434  104742 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 08:34:48.831469  104742 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0814 08:34:48.831481  104742 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0814 08:34:48.831489  104742 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0814 08:34:48.831531  104742 httplog.go:90] GET /healthz: (1.118517ms) 0 [Go-http-client/1.1 127.0.0.1:38588]
I0814 08:34:48.846268  104742 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 08:34:48.846291  104742 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0814 08:34:48.846299  104742 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0814 08:34:48.846304  104742 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0814 08:34:48.846339  104742 httplog.go:90] GET /healthz: (1.504394ms) 0 [apply.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38588]
I0814 08:34:48.931604  104742 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 08:34:48.931636  104742 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0814 08:34:48.931647  104742 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0814 08:34:48.931658  104742 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0814 08:34:48.931698  104742 httplog.go:90] GET /healthz: (1.09648ms) 0 [Go-http-client/1.1 127.0.0.1:38588]
I0814 08:34:48.945829  104742 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 08:34:48.945858  104742 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0814 08:34:48.945869  104742 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0814 08:34:48.945887  104742 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0814 08:34:48.945922  104742 httplog.go:90] GET /healthz: (1.069198ms) 0 [apply.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38588]
I0814 08:34:49.031748  104742 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 08:34:49.031780  104742 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0814 08:34:49.031791  104742 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0814 08:34:49.031800  104742 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0814 08:34:49.031834  104742 httplog.go:90] GET /healthz: (1.459431ms) 0 [Go-http-client/1.1 127.0.0.1:38588]
I0814 08:34:49.045979  104742 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 08:34:49.046009  104742 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0814 08:34:49.046037  104742 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0814 08:34:49.046045  104742 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0814 08:34:49.046107  104742 httplog.go:90] GET /healthz: (1.005834ms) 0 [apply.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38588]
I0814 08:34:49.131580  104742 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 08:34:49.131610  104742 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0814 08:34:49.131620  104742 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0814 08:34:49.131653  104742 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0814 08:34:49.131693  104742 httplog.go:90] GET /healthz: (1.277504ms) 0 [Go-http-client/1.1 127.0.0.1:38588]
I0814 08:34:49.145980  104742 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 08:34:49.146011  104742 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0814 08:34:49.146042  104742 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0814 08:34:49.146051  104742 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0814 08:34:49.146086  104742 httplog.go:90] GET /healthz: (1.217508ms) 0 [apply.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38588]
I0814 08:34:49.232057  104742 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 08:34:49.232088  104742 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0814 08:34:49.232098  104742 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0814 08:34:49.232107  104742 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0814 08:34:49.232143  104742 httplog.go:90] GET /healthz: (1.516114ms) 0 [Go-http-client/1.1 127.0.0.1:38588]
I0814 08:34:49.245796  104742 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 08:34:49.245823  104742 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0814 08:34:49.245835  104742 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0814 08:34:49.245843  104742 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0814 08:34:49.245890  104742 httplog.go:90] GET /healthz: (1.010011ms) 0 [apply.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38588]
I0814 08:34:49.331474  104742 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 08:34:49.331506  104742 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0814 08:34:49.331518  104742 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0814 08:34:49.331527  104742 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0814 08:34:49.331567  104742 httplog.go:90] GET /healthz: (1.134413ms) 0 [Go-http-client/1.1 127.0.0.1:38588]
I0814 08:34:49.359268  104742 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 08:34:49.359302  104742 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0814 08:34:49.359313  104742 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0814 08:34:49.359322  104742 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0814 08:34:49.359364  104742 httplog.go:90] GET /healthz: (4.203567ms) 0 [apply.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38588]
I0814 08:34:49.433357  104742 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 08:34:49.433388  104742 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0814 08:34:49.433399  104742 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0814 08:34:49.433407  104742 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0814 08:34:49.433446  104742 httplog.go:90] GET /healthz: (1.066039ms) 0 [Go-http-client/1.1 127.0.0.1:38588]
I0814 08:34:49.445675  104742 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 08:34:49.445702  104742 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0814 08:34:49.445712  104742 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0814 08:34:49.445719  104742 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0814 08:34:49.445753  104742 httplog.go:90] GET /healthz: (873.178µs) 0 [apply.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38588]
I0814 08:34:49.531446  104742 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 08:34:49.531479  104742 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0814 08:34:49.531490  104742 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0814 08:34:49.531499  104742 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0814 08:34:49.531548  104742 httplog.go:90] GET /healthz: (1.16195ms) 0 [Go-http-client/1.1 127.0.0.1:38588]
I0814 08:34:49.546202  104742 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 08:34:49.546241  104742 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0814 08:34:49.546252  104742 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0814 08:34:49.546261  104742 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0814 08:34:49.546306  104742 httplog.go:90] GET /healthz: (1.416417ms) 0 [apply.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38588]
I0814 08:34:49.631704  104742 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 08:34:49.631739  104742 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0814 08:34:49.631750  104742 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0814 08:34:49.631759  104742 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0814 08:34:49.631791  104742 httplog.go:90] GET /healthz: (1.028961ms) 0 [Go-http-client/1.1 127.0.0.1:38604]
I0814 08:34:49.632862  104742 httplog.go:90] GET /api/v1/namespaces/kube-system: (2.714769ms) 200 [apply.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38596]
I0814 08:34:49.633100  104742 httplog.go:90] GET /apis/scheduling.k8s.io/v1beta1/priorityclasses/system-node-critical: (2.677061ms) 404 [apply.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38588]
I0814 08:34:49.635333  104742 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles: (5.578878ms) 200 [apply.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38602]
I0814 08:34:49.637547  104742 httplog.go:90] GET /api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication: (3.805316ms) 404 [apply.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38596]
I0814 08:34:49.637764  104742 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.983637ms) 200 [apply.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38602]
I0814 08:34:49.637948  104742 httplog.go:90] POST /apis/scheduling.k8s.io/v1beta1/priorityclasses: (3.823165ms) 201 [apply.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38588]
I0814 08:34:49.638753  104742 storage_scheduling.go:139] created PriorityClass system-node-critical with value 2000001000
I0814 08:34:49.640594  104742 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-admin: (2.222181ms) 404 [apply.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38602]
I0814 08:34:49.641198  104742 httplog.go:90] POST /api/v1/namespaces/kube-system/configmaps: (2.928253ms) 201 [apply.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38596]
I0814 08:34:49.641363  104742 httplog.go:90] GET /apis/scheduling.k8s.io/v1beta1/priorityclasses/system-cluster-critical: (2.471939ms) 404 [apply.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38588]
I0814 08:34:49.642340  104742 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/admin: (683.3µs) 404 [apply.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38602]
I0814 08:34:49.643586  104742 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-edit: (958.891µs) 404 [apply.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38602]
I0814 08:34:49.644123  104742 httplog.go:90] POST /apis/scheduling.k8s.io/v1beta1/priorityclasses: (2.437214ms) 201 [apply.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38588]
I0814 08:34:49.644296  104742 storage_scheduling.go:139] created PriorityClass system-cluster-critical with value 2000000000
I0814 08:34:49.644310  104742 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
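
storage_scheduling has now seeded the two built-in priority classes with the values shown above; system-node-critical outranks system-cluster-critical by exactly 1000. A small sketch constructing the same objects with the scheduling.k8s.io/v1beta1 types the log POSTs to:

package main

import (
	"fmt"

	schedulingv1beta1 "k8s.io/api/scheduling/v1beta1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Values taken verbatim from the storage_scheduling.go lines above.
	for _, pc := range []schedulingv1beta1.PriorityClass{
		{ObjectMeta: metav1.ObjectMeta{Name: "system-node-critical"}, Value: 2000001000},
		{ObjectMeta: metav1.ObjectMeta{Name: "system-cluster-critical"}, Value: 2000000000},
	} {
		fmt.Printf("%s -> %d\n", pc.Name, pc.Value)
	}
}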
I0814 08:34:49.645126  104742 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/edit: (1.107007ms) 404 [apply.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38602]
I0814 08:34:49.645470  104742 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 08:34:49.645490  104742 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 08:34:49.645516  104742 httplog.go:90] GET /healthz: (768.879µs) 0 [apply.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38588]
I0814 08:34:49.646322  104742 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-view: (736.125µs) 404 [apply.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38602]
I0814 08:34:49.647377  104742 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/view: (747.485µs) 404 [apply.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38588]
I0814 08:34:49.648814  104742 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:discovery: (1.066042ms) 404 [apply.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38588]
I0814 08:34:49.649923  104742 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/cluster-admin: (699.24µs) 404 [apply.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38588]
I0814 08:34:49.652302  104742 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.041244ms) 201 [apply.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38588]
I0814 08:34:49.652492  104742 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/cluster-admin
I0814 08:34:49.653484  104742 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:discovery: (730.987µs) 404 [apply.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38588]
I0814 08:34:49.655778  104742 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.989094ms) 201 [apply.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38588]
I0814 08:34:49.656379  104742 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:discovery
I0814 08:34:49.657382  104742 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:basic-user: (865.872µs) 404 [apply.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38588]
I0814 08:34:49.661228  104742 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (3.549046ms) 201 [apply.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38588]
I0814 08:34:49.661408  104742 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:basic-user
I0814 08:34:49.662422  104742 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:public-info-viewer: (887.33µs) 404 [apply.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38588]
I0814 08:34:49.664758  104742 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.034524ms) 201 [apply.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38588]
I0814 08:34:49.665290  104742 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:public-info-viewer
I0814 08:34:49.666276  104742 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/admin: (645.173µs) 404 [apply.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38588]
I0814 08:34:49.668414  104742 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.848762ms) 201 [apply.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38588]
I0814 08:34:49.668974  104742 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/admin
I0814 08:34:49.669957  104742 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/edit: (822.133µs) 404 [apply.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38588]
I0814 08:34:49.672207  104742 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.935369ms) 201 [apply.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38588]
I0814 08:34:49.672404  104742 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/edit
I0814 08:34:49.673497  104742 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/view: (978.338µs) 404 [apply.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38588]
I0814 08:34:49.676136  104742 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.313561ms) 201 [apply.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38588]
I0814 08:34:49.676349  104742 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/view
I0814 08:34:49.677561  104742 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-admin: (1.090141ms) 404 [apply.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38588]
I0814 08:34:49.679732  104742 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.873362ms) 201 [apply.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38588]
I0814 08:34:49.679930  104742 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-admin
I0814 08:34:49.682234  104742 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-edit: (1.862833ms) 404 [apply.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38588]
I0814 08:34:49.684969  104742 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.251568ms) 201 [apply.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38588]
I0814 08:34:49.685324  104742 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-edit
I0814 08:34:49.686933  104742 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-view: (999.429µs) 404 [apply.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38588]
I0814 08:34:49.692884  104742 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (5.598545ms) 201 [apply.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38588]
I0814 08:34:49.693205  104742 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-view
I0814 08:34:49.694829  104742 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:heapster: (1.482839ms) 404 [apply.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38588]
I0814 08:34:49.703241  104742 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (8.106697ms) 201 [apply.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38588]
I0814 08:34:49.703632  104742 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:heapster
I0814 08:34:49.704885  104742 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node: (780.54µs) 404 [apply.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38588]
I0814 08:34:49.714088  104742 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (8.494359ms) 201 [apply.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38588]
I0814 08:34:49.714585  104742 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:node
I0814 08:34:49.715756  104742 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-problem-detector: (820.431µs) 404 [apply.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38588]
I0814 08:34:49.718234  104742 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.792283ms) 201 [apply.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38588]
I0814 08:34:49.718590  104742 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:node-problem-detector
I0814 08:34:49.719724  104742 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-proxier: (825.416µs) 404 [apply.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38588]
I0814 08:34:49.722305  104742 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.999759ms) 201 [apply.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38588]
I0814 08:34:49.723175  104742 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:node-proxier
I0814 08:34:49.724262  104742 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kubelet-api-admin: (770.959µs) 404 [apply.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38588]
I0814 08:34:49.727338  104742 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.879489ms) 201 [apply.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38588]
I0814 08:34:49.727684  104742 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kubelet-api-admin
I0814 08:34:49.728932  104742 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-bootstrapper: (1.07191ms) 404 [apply.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38588]
I0814 08:34:49.731352  104742 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.052258ms) 201 [apply.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38588]
I0814 08:34:49.731375  104742 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 08:34:49.731588  104742 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 08:34:49.731619  104742 httplog.go:90] GET /healthz: (1.193265ms) 0 [Go-http-client/1.1 127.0.0.1:38604]
I0814 08:34:49.731666  104742 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:node-bootstrapper
I0814 08:34:49.732861  104742 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:auth-delegator: (717.691µs) 404 [apply.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38604]
I0814 08:34:49.735139  104742 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.52082ms) 201 [apply.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38604]
I0814 08:34:49.735457  104742 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:auth-delegator
I0814 08:34:49.737152  104742 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-aggregator: (1.415764ms) 404 [apply.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38604]
I0814 08:34:49.739353  104742 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.748255ms) 201 [apply.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38604]
I0814 08:34:49.739513  104742 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kube-aggregator
I0814 08:34:49.740559  104742 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-controller-manager: (919.044µs) 404 [apply.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38604]
I0814 08:34:49.742820  104742 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.910703ms) 201 [apply.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38604]
I0814 08:34:49.743218  104742 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kube-controller-manager
I0814 08:34:49.744007  104742 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-dns: (620.814µs) 404 [apply.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38604]
I0814 08:34:49.745977  104742 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.659742ms) 201 [apply.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38604]
I0814 08:34:49.746138  104742 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kube-dns
I0814 08:34:49.746985  104742 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:persistent-volume-provisioner: (674.83µs) 404 [apply.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38604]
I0814 08:34:49.747348  104742 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 08:34:49.747372  104742 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 08:34:49.747399  104742 httplog.go:90] GET /healthz: (1.994767ms) 0 [apply.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38588]
I0814 08:34:49.749413  104742 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.031532ms) 201 [apply.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38604]
I0814 08:34:49.749754  104742 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:persistent-volume-provisioner
I0814 08:34:49.750669  104742 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:csi-external-attacher: (735.322µs) 404 [apply.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38604]
I0814 08:34:49.753119  104742 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.898809ms) 201 [apply.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38604]
I0814 08:34:49.753310  104742 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:csi-external-attacher
I0814 08:34:49.754445  104742 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:certificates.k8s.io:certificatesigningrequests:nodeclient: (998.058µs) 404 [apply.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38604]
I0814 08:34:49.757265  104742 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.274951ms) 201 [apply.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38604]
I0814 08:34:49.763433  104742 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:certificates.k8s.io:certificatesigningrequests:nodeclient
I0814 08:34:49.765335  104742 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:certificates.k8s.io:certificatesigningrequests:selfnodeclient: (1.644695ms) 404 [apply.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38604]
I0814 08:34:49.769852  104742 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (4.191699ms) 201 [apply.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38604]
I0814 08:34:49.770459  104742 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:certificates.k8s.io:certificatesigningrequests:selfnodeclient
I0814 08:34:49.771569  104742 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:volume-scheduler: (797.43µs) 404 [apply.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38604]
I0814 08:34:49.779577  104742 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.307889ms) 201 [apply.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38604]
I0814 08:34:49.779801  104742 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:volume-scheduler
I0814 08:34:49.781558  104742 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-scheduler: (1.307328ms) 404 [apply.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38604]
I0814 08:34:49.784628  104742 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.724616ms) 201 [apply.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38604]
I0814 08:34:49.784840  104742 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kube-scheduler
I0814 08:34:49.786232  104742 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:csi-external-provisioner: (1.214146ms) 404 [apply.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38604]
I0814 08:34:49.788692  104742 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.116836ms) 201 [apply.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38604]
I0814 08:34:49.789216  104742 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:csi-external-provisioner
I0814 08:34:49.790249  104742 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:attachdetach-controller: (853.249µs) 404 [apply.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38604]
I0814 08:34:49.793074  104742 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.505355ms) 201 [apply.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38604]
I0814 08:34:49.793303  104742 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:attachdetach-controller
I0814 08:34:49.796561  104742 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:clusterrole-aggregation-controller: (3.133897ms) 404 [apply.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38604]
I0814 08:34:49.801008  104742 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (3.808656ms) 201 [apply.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38604]
I0814 08:34:49.801204  104742 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:clusterrole-aggregation-controller
I0814 08:34:49.802294  104742 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:cronjob-controller: (961.708µs) 404 [apply.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38604]
I0814 08:34:49.804617  104742 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.058193ms) 201 [apply.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38604]
I0814 08:34:49.804800  104742 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:cronjob-controller
I0814 08:34:49.805826  104742 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:daemon-set-controller: (902.376µs) 404 [apply.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38604]
I0814 08:34:49.808288  104742 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.147087ms) 201 [apply.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38604]
I0814 08:34:49.808483  104742 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:daemon-set-controller
I0814 08:34:49.809392  104742 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:deployment-controller: (742.727µs) 404 [apply.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38604]
I0814 08:34:49.811569  104742 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.901984ms) 201 [apply.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38604]
I0814 08:34:49.811973  104742 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:deployment-controller
I0814 08:34:49.812961  104742 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:disruption-controller: (836.695µs) 404 [apply.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38604]
I0814 08:34:49.815470  104742 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.068337ms) 201 [apply.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38604]
I0814 08:34:49.815669  104742 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:disruption-controller
I0814 08:34:49.816592  104742 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:endpoint-controller: (756.177µs) 404 [apply.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38604]
I0814 08:34:49.818724  104742 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.835892ms) 201 [apply.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38604]
I0814 08:34:49.818912  104742 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:endpoint-controller
I0814 08:34:49.819738  104742 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:expand-controller: (678.679µs) 404 [apply.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38604]
I0814 08:34:49.822701  104742 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.378279ms) 201 [apply.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38604]
I0814 08:34:49.822894  104742 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:expand-controller
I0814 08:34:49.824604  104742 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:generic-garbage-collector: (1.376595ms) 404 [apply.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38604]
I0814 08:34:49.826977  104742 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.100748ms) 201 [apply.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38604]
I0814 08:34:49.827280  104742 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:generic-garbage-collector
I0814 08:34:49.828252  104742 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:horizontal-pod-autoscaler: (750.907µs) 404 [apply.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38604]
I0814 08:34:49.830597  104742 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.856436ms) 201 [apply.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38604]
I0814 08:34:49.831237  104742 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 08:34:49.831318  104742 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:horizontal-pod-autoscaler
I0814 08:34:49.831360  104742 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 08:34:49.831490  104742 httplog.go:90] GET /healthz: (1.015088ms) 0 [Go-http-client/1.1 127.0.0.1:38588]
I0814 08:34:49.832811  104742 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:job-controller: (1.347822ms) 404 [apply.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38604]
I0814 08:34:49.836620  104742 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (3.405517ms) 201 [apply.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38604]
I0814 08:34:49.836814  104742 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:job-controller
I0814 08:34:49.837865  104742 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:namespace-controller: (933.136µs) 404 [apply.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38604]
I0814 08:34:49.840306  104742 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.127325ms) 201 [apply.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38604]
I0814 08:34:49.840483  104742 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:namespace-controller
I0814 08:34:49.841709  104742 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:node-controller: (1.111417ms) 404 [apply.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38604]
I0814 08:34:49.844399  104742 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.404218ms) 201 [apply.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38604]
I0814 08:34:49.844624  104742 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:node-controller
I0814 08:34:49.845816  104742 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 08:34:49.845841  104742 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 08:34:49.845867  104742 httplog.go:90] GET /healthz: (1.106417ms) 0 [apply.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38604]
I0814 08:34:49.845943  104742 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:persistent-volume-binder: (1.167189ms) 404 [apply.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38588]
I0814 08:34:49.848769  104742 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.18667ms) 201 [apply.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38588]
I0814 08:34:49.849009  104742 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:persistent-volume-binder
I0814 08:34:49.850047  104742 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pod-garbage-collector: (736.049µs) 404 [apply.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38588]
I0814 08:34:49.852299  104742 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.936433ms) 201 [apply.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38588]
I0814 08:34:49.852529  104742 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:pod-garbage-collector
I0814 08:34:49.853443  104742 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:replicaset-controller: (749.583µs) 404 [apply.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38588]
I0814 08:34:49.855870  104742 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.777821ms) 201 [apply.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38588]
I0814 08:34:49.856314  104742 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:replicaset-controller
I0814 08:34:49.857212  104742 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:replication-controller: (707.455µs) 404 [apply.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38588]
I0814 08:34:49.859616  104742 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.104457ms) 201 [apply.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38588]
I0814 08:34:49.860053  104742 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:replication-controller
I0814 08:34:49.861057  104742 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:resourcequota-controller: (829.382µs) 404 [apply.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38588]
I0814 08:34:49.863129  104742 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.777016ms) 201 [apply.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38588]
I0814 08:34:49.863395  104742 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:resourcequota-controller
I0814 08:34:49.864520  104742 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:route-controller: (991.794µs) 404 [apply.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38588]
I0814 08:34:49.866889  104742 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.066968ms) 201 [apply.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38588]
I0814 08:34:49.867256  104742 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:route-controller
I0814 08:34:49.868819  104742 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:service-account-controller: (1.33521ms) 404 [apply.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38588]
I0814 08:34:49.871476  104742 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.903518ms) 201 [apply.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38588]
I0814 08:34:49.871623  104742 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:service-account-controller
I0814 08:34:49.872611  104742 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:service-controller: (834.554µs) 404 [apply.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38588]
I0814 08:34:49.876583  104742 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (3.368223ms) 201 [apply.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38588]
I0814 08:34:49.877089  104742 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:service-controller
I0814 08:34:49.878796  104742 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:statefulset-controller: (951.794µs) 404 [apply.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38588]
I0814 08:34:49.881322  104742 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.093235ms) 201 [apply.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38588]
I0814 08:34:49.881663  104742 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:statefulset-controller
I0814 08:34:49.882663  104742 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:ttl-controller: (826.831µs) 404 [apply.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38588]
I0814 08:34:49.884835  104742 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.839962ms) 201 [apply.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38588]
I0814 08:34:49.885157  104742 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:ttl-controller
I0814 08:34:49.887186  104742 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:certificate-controller: (1.627993ms) 404 [apply.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38588]
I0814 08:34:49.889389  104742 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.868817ms) 201 [apply.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38588]
I0814 08:34:49.889579  104742 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:certificate-controller
I0814 08:34:49.890522  104742 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pvc-protection-controller: (744.537µs) 404 [apply.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38588]
I0814 08:34:49.892886  104742 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.009635ms) 201 [apply.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38588]
I0814 08:34:49.893194  104742 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:pvc-protection-controller
I0814 08:34:49.914260  104742 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pv-protection-controller: (4.239796ms) 404 [apply.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38588]
I0814 08:34:49.932028  104742 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 08:34:49.932057  104742 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 08:34:49.932091  104742 httplog.go:90] GET /healthz: (1.746045ms) 0 [Go-http-client/1.1 127.0.0.1:38604]
I0814 08:34:49.932128  104742 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.500787ms) 201 [apply.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38588]
I0814 08:34:49.932336  104742 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:pv-protection-controller
I0814 08:34:49.946316  104742 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 08:34:49.946347  104742 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 08:34:49.946385  104742 httplog.go:90] GET /healthz: (1.130146ms) 0 [apply.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38604]
I0814 08:34:49.950663  104742 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/cluster-admin: (1.042494ms) 404 [apply.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38604]
I0814 08:34:49.972413  104742 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.711781ms) 201 [apply.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38604]
I0814 08:34:49.972853  104742 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/cluster-admin
I0814 08:34:49.990859  104742 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:discovery: (1.121334ms) 404 [apply.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38604]
I0814 08:34:50.014932  104742 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (5.259756ms) 201 [apply.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38604]
I0814 08:34:50.015179  104742 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:discovery
I0814 08:34:50.031481  104742 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 08:34:50.031515  104742 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 08:34:50.031558  104742 httplog.go:90] GET /healthz: (1.267108ms) 0 [Go-http-client/1.1 127.0.0.1:38588]
I0814 08:34:50.031792  104742 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:basic-user: (2.128998ms) 404 [apply.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38604]
I0814 08:34:50.046379  104742 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 08:34:50.046408  104742 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 08:34:50.046444  104742 httplog.go:90] GET /healthz: (1.55579ms) 0 [apply.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38588]
I0814 08:34:50.051989  104742 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.348009ms) 201 [apply.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38588]
I0814 08:34:50.052322  104742 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:basic-user
I0814 08:34:50.070919  104742 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:public-info-viewer: (1.251225ms) 404 [apply.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38588]
I0814 08:34:50.092172  104742 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.459289ms) 201 [apply.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38588]
I0814 08:34:50.092490  104742 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:public-info-viewer
I0814 08:34:50.110796  104742 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:node-proxier: (1.126429ms) 404 [apply.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38588]
I0814 08:34:50.132087  104742 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 08:34:50.132125  104742 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 08:34:50.132159  104742 httplog.go:90] GET /healthz: (914.285µs) 0 [Go-http-client/1.1 127.0.0.1:38604]
I0814 08:34:50.132635  104742 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.953897ms) 201 [apply.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38588]
I0814 08:34:50.132829  104742 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:node-proxier
I0814 08:34:50.145622  104742 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 08:34:50.145648  104742 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 08:34:50.145679  104742 httplog.go:90] GET /healthz: (815.755µs) 0 [apply.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38588]
I0814 08:34:50.150577  104742 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-controller-manager: (987.364µs) 404 [apply.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38588]
I0814 08:34:50.172510  104742 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.692766ms) 201 [apply.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38588]
I0814 08:34:50.172785  104742 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-controller-manager
I0814 08:34:50.190649  104742 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-dns: (962.931µs) 404 [apply.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38588]
I0814 08:34:50.212335  104742 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.703104ms) 201 [apply.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38588]
I0814 08:34:50.212597  104742 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-dns
I0814 08:34:50.232216  104742 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 08:34:50.232232  104742 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-scheduler: (1.657351ms) 404 [apply.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38604]
I0814 08:34:50.232255  104742 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 08:34:50.232291  104742 httplog.go:90] GET /healthz: (1.304187ms) 0 [Go-http-client/1.1 127.0.0.1:38588]
I0814 08:34:50.246106  104742 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 08:34:50.246145  104742 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 08:34:50.246180  104742 httplog.go:90] GET /healthz: (1.30469ms) 0 [apply.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38588]
I0814 08:34:50.251559  104742 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.930541ms) 201 [apply.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38588]
I0814 08:34:50.251845  104742 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-scheduler
I0814 08:34:50.270678  104742 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:volume-scheduler: (1.054264ms) 404 [apply.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38588]
I0814 08:34:50.291899  104742 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.234395ms) 201 [apply.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38588]
I0814 08:34:50.292159  104742 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:volume-scheduler
I0814 08:34:50.310796  104742 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:node: (1.15496ms) 404 [apply.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38588]
I0814 08:34:50.332485  104742 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.792583ms) 201 [apply.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38588]
I0814 08:34:50.332640  104742 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 08:34:50.332659  104742 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 08:34:50.332690  104742 httplog.go:90] GET /healthz: (2.449983ms) 0 [Go-http-client/1.1 127.0.0.1:38604]
I0814 08:34:50.332965  104742 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:node
I0814 08:34:50.345621  104742 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 08:34:50.345648  104742 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 08:34:50.345677  104742 httplog.go:90] GET /healthz: (822.388µs) 0 [apply.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38588]
I0814 08:34:50.350455  104742 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:attachdetach-controller: (876.694µs) 404 [apply.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38588]
I0814 08:34:50.371635  104742 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.053809ms) 201 [apply.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38588]
I0814 08:34:50.371821  104742 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:attachdetach-controller
I0814 08:34:50.390864  104742 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:clusterrole-aggregation-controller: (1.145164ms) 404 [apply.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38588]
I0814 08:34:50.412247  104742 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.581212ms) 201 [apply.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38588]
I0814 08:34:50.412511  104742 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:clusterrole-aggregation-controller
I0814 08:34:50.431276  104742 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:cronjob-controller: (1.572818ms) 404 [apply.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38588]
I0814 08:34:50.431419  104742 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 08:34:50.431438  104742 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 08:34:50.431463  104742 httplog.go:90] GET /healthz: (1.199667ms) 0 [Go-http-client/1.1 127.0.0.1:38604]
I0814 08:34:50.445671  104742 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 08:34:50.445704  104742 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 08:34:50.445754  104742 httplog.go:90] GET /healthz: (812.329µs) 0 [apply.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38588]
I0814 08:34:50.452284  104742 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.686771ms) 201 [apply.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38588]
I0814 08:34:50.452526  104742 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:cronjob-controller
I0814 08:34:50.470505  104742 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:daemon-set-controller: (882.008µs) 404 [apply.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38588]
I0814 08:34:50.491908  104742 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.256169ms) 201 [apply.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38588]
I0814 08:34:50.492211  104742 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:daemon-set-controller
I0814 08:34:50.510710  104742 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:deployment-controller: (1.071095ms) 404 [apply.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38588]
I0814 08:34:50.532996  104742 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.309329ms) 201 [apply.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38588]
I0814 08:34:50.533195  104742 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 08:34:50.533214  104742 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 08:34:50.533251  104742 httplog.go:90] GET /healthz: (2.326054ms) 0 [Go-http-client/1.1 127.0.0.1:38604]
I0814 08:34:50.533547  104742 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:deployment-controller
I0814 08:34:50.546290  104742 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 08:34:50.546325  104742 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 08:34:50.546368  104742 httplog.go:90] GET /healthz: (1.504204ms) 0 [apply.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38588]
I0814 08:34:50.550685  104742 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:disruption-controller: (1.075828ms) 404 [apply.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38588]
I0814 08:34:50.572296  104742 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.619692ms) 201 [apply.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38588]
I0814 08:34:50.572536  104742 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:disruption-controller
I0814 08:34:50.590755  104742 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:endpoint-controller: (1.102227ms) 404 [apply.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38588]
I0814 08:34:50.612204  104742 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.551856ms) 201 [apply.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38588]
I0814 08:34:50.612876  104742 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:endpoint-controller
I0814 08:34:50.631468  104742 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:expand-controller: (1.802671ms) 404 [apply.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38588]
I0814 08:34:50.631602  104742 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 08:34:50.631619  104742 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 08:34:50.631645  104742 httplog.go:90] GET /healthz: (1.377164ms) 0 [Go-http-client/1.1 127.0.0.1:38604]
I0814 08:34:50.645847  104742 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 08:34:50.645878  104742 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 08:34:50.645919  104742 httplog.go:90] GET /healthz: (1.082514ms) 0 [apply.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38588]
I0814 08:34:50.657873  104742 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.579364ms) 201 [apply.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38588]
I0814 08:34:50.658119  104742 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:expand-controller
I0814 08:34:50.670574  104742 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:generic-garbage-collector: (965.196µs) 404 [apply.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38588]
I0814 08:34:50.692293  104742 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.632082ms) 201 [apply.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38588]
I0814 08:34:50.692847  104742 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:generic-garbage-collector
I0814 08:34:50.710663  104742 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:horizontal-pod-autoscaler: (999.12µs) 404 [apply.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38588]
I0814 08:34:50.732658  104742 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 08:34:50.732690  104742 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 08:34:50.732721  104742 httplog.go:90] GET /healthz: (2.348996ms) 0 [Go-http-client/1.1 127.0.0.1:38588]
I0814 08:34:50.734397  104742 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.012019ms) 201 [apply.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38604]
I0814 08:34:50.734616  104742 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:horizontal-pod-autoscaler
I0814 08:34:50.745621  104742 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 08:34:50.745648  104742 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 08:34:50.745681  104742 httplog.go:90] GET /healthz: (823.118µs) 0 [apply.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38604]
I0814 08:34:50.752304  104742 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:job-controller: (953.489µs) 404 [apply.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38604]
I0814 08:34:50.772116  104742 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.462362ms) 201 [apply.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38604]
I0814 08:34:50.772480  104742 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:job-controller
I0814 08:34:50.791292  104742 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:namespace-controller: (1.56848ms) 404 [apply.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38604]
I0814 08:34:50.813418  104742 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.564253ms) 201 [apply.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38604]
I0814 08:34:50.813677  104742 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:namespace-controller
I0814 08:34:50.830808  104742 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:node-controller: (1.129106ms) 404 [apply.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38604]
I0814 08:34:50.831204  104742 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 08:34:50.831226  104742 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 08:34:50.831288  104742 httplog.go:90] GET /healthz: (913.494µs) 0 [Go-http-client/1.1 127.0.0.1:38588]
I0814 08:34:50.845919  104742 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 08:34:50.845948  104742 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 08:34:50.845982  104742 httplog.go:90] GET /healthz: (1.122798ms) 0 [apply.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38588]
I0814 08:34:50.851958  104742 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.328723ms) 201 [apply.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38588]
I0814 08:34:50.852180  104742 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:node-controller
I0814 08:34:50.870700  104742 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:persistent-volume-binder: (1.090782ms) 404 [apply.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38588]
I0814 08:34:50.892229  104742 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.455446ms) 201 [apply.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38588]
I0814 08:34:50.893744  104742 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:persistent-volume-binder
I0814 08:34:50.910827  104742 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pod-garbage-collector: (1.126187ms) 404 [apply.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38588]
I0814 08:34:50.931559  104742 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.916384ms) 201 [apply.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38588]
I0814 08:34:50.931768  104742 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 08:34:50.931792  104742 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 08:34:50.931820  104742 httplog.go:90] GET /healthz: (825.237µs) 0 [Go-http-client/1.1 127.0.0.1:38604]
I0814 08:34:50.931823  104742 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pod-garbage-collector
I0814 08:34:50.945622  104742 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 08:34:50.945652  104742 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 08:34:50.945698  104742 httplog.go:90] GET /healthz: (848.726µs) 0 [apply.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38604]
I0814 08:34:50.950924  104742 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:replicaset-controller: (1.335863ms) 404 [apply.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38604]
I0814 08:34:50.971940  104742 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.265214ms) 201 [apply.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38604]
I0814 08:34:50.972286  104742 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:replicaset-controller
I0814 08:34:50.990600  104742 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:replication-controller: (911.51µs) 404 [apply.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38604]
I0814 08:34:51.012237  104742 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.618509ms) 201 [apply.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38604]
I0814 08:34:51.012458  104742 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:replication-controller
I0814 08:34:51.030869  104742 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:resourcequota-controller: (945.189µs) 404 [apply.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38604]
I0814 08:34:51.031250  104742 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 08:34:51.031273  104742 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 08:34:51.031302  104742 httplog.go:90] GET /healthz: (783.435µs) 0 [Go-http-client/1.1 127.0.0.1:38588]
I0814 08:34:51.045996  104742 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 08:34:51.046042  104742 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 08:34:51.046086  104742 httplog.go:90] GET /healthz: (765.394µs) 0 [apply.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38588]
I0814 08:34:51.052849  104742 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.258646ms) 201 [apply.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38588]
I0814 08:34:51.053087  104742 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:resourcequota-controller
I0814 08:34:51.070626  104742 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:route-controller: (962.624µs) 404 [apply.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38588]
I0814 08:34:51.091841  104742 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.192011ms) 201 [apply.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38588]
I0814 08:34:51.092298  104742 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:route-controller
I0814 08:34:51.110648  104742 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:service-account-controller: (1.012464ms) 404 [apply.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38588]
I0814 08:34:51.137553  104742 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 08:34:51.137585  104742 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 08:34:51.137626  104742 httplog.go:90] GET /healthz: (6.556277ms) 0 [Go-http-client/1.1 127.0.0.1:38604]
I0814 08:34:51.137700  104742 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (8.051286ms) 201 [apply.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38588]
I0814 08:34:51.137942  104742 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:service-account-controller
I0814 08:34:51.145606  104742 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 08:34:51.145630  104742 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 08:34:51.145664  104742 httplog.go:90] GET /healthz: (826.968µs) 0 [apply.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38588]
I0814 08:34:51.150532  104742 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:service-controller: (939.029µs) 404 [apply.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38588]
I0814 08:34:51.172297  104742 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.639996ms) 201 [apply.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38588]
I0814 08:34:51.172565  104742 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:service-controller
I0814 08:34:51.190954  104742 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:statefulset-controller: (1.098183ms) 404 [apply.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38588]
I0814 08:34:51.212652  104742 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.008304ms) 201 [apply.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38588]
I0814 08:34:51.212911  104742 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:statefulset-controller
I0814 08:34:51.231191  104742 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 08:34:51.231231  104742 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 08:34:51.231265  104742 httplog.go:90] GET /healthz: (940.063µs) 0 [Go-http-client/1.1 127.0.0.1:38604]
I0814 08:34:51.231272  104742 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:ttl-controller: (1.584716ms) 404 [apply.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38588]
I0814 08:34:51.246773  104742 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 08:34:51.246806  104742 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 08:34:51.246839  104742 httplog.go:90] GET /healthz: (1.727067ms) 0 [apply.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38604]
I0814 08:34:51.251886  104742 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.279411ms) 201 [apply.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38604]
I0814 08:34:51.252332  104742 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:ttl-controller
I0814 08:34:51.277045  104742 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:certificate-controller: (2.479486ms) 404 [apply.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38604]
I0814 08:34:51.292153  104742 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.537326ms) 201 [apply.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38604]
I0814 08:34:51.292557  104742 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:certificate-controller
I0814 08:34:51.311183  104742 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pvc-protection-controller: (1.040102ms) 404 [apply.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38604]
I0814 08:34:51.331842  104742 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 08:34:51.331873  104742 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 08:34:51.331910  104742 httplog.go:90] GET /healthz: (1.631923ms) 0 [Go-http-client/1.1 127.0.0.1:38588]
I0814 08:34:51.331966  104742 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.334675ms) 201 [apply.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38604]
I0814 08:34:51.332177  104742 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pvc-protection-controller
I0814 08:34:51.345706  104742 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 08:34:51.345731  104742 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 08:34:51.345763  104742 httplog.go:90] GET /healthz: (883.028µs) 0 [apply.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38604]
I0814 08:34:51.350582  104742 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pv-protection-controller: (967.279µs) 404 [apply.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38604]
I0814 08:34:51.372378  104742 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.669726ms) 201 [apply.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38604]
I0814 08:34:51.372723  104742 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pv-protection-controller
I0814 08:34:51.390634  104742 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/extension-apiserver-authentication-reader: (968.508µs) 404 [apply.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38604]
I0814 08:34:51.392272  104742 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.276664ms) 200 [apply.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38604]
I0814 08:34:51.412339  104742 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.677503ms) 201 [apply.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38604]
I0814 08:34:51.412591  104742 storage_rbac.go:278] created role.rbac.authorization.k8s.io/extension-apiserver-authentication-reader in kube-system
I0814 08:34:51.430842  104742 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:bootstrap-signer: (1.185233ms) 404 [apply.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38604]
I0814 08:34:51.432436  104742 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 08:34:51.432468  104742 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 08:34:51.432505  104742 httplog.go:90] GET /healthz: (2.104626ms) 0 [Go-http-client/1.1 127.0.0.1:38588]
I0814 08:34:51.432930  104742 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.690035ms) 200 [apply.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38604]
I0814 08:34:51.445715  104742 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 08:34:51.445743  104742 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 08:34:51.445775  104742 httplog.go:90] GET /healthz: (883.619µs) 0 [apply.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38604]
I0814 08:34:51.451984  104742 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.348175ms) 201 [apply.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38604]
I0814 08:34:51.452293  104742 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-system
I0814 08:34:51.470689  104742 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:cloud-provider: (1.071847ms) 404 [apply.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38604]
I0814 08:34:51.472307  104742 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.156306ms) 200 [apply.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38604]
I0814 08:34:51.491791  104742 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.138774ms) 201 [apply.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38604]
I0814 08:34:51.492186  104742 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system:controller:cloud-provider in kube-system
I0814 08:34:51.510647  104742 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:token-cleaner: (1.023588ms) 404 [apply.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38604]
I0814 08:34:51.512281  104742 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.216665ms) 200 [apply.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38604]
I0814 08:34:51.531286  104742 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 08:34:51.531317  104742 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 08:34:51.531348  104742 httplog.go:90] GET /healthz: (1.068806ms) 0 [Go-http-client/1.1 127.0.0.1:38588]
I0814 08:34:51.531942  104742 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.2948ms) 201 [apply.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38604]
I0814 08:34:51.532241  104742 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system:controller:token-cleaner in kube-system
I0814 08:34:51.545812  104742 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 08:34:51.545840  104742 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 08:34:51.545884  104742 httplog.go:90] GET /healthz: (996.492µs) 0 [apply.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38604]
I0814 08:34:51.550615  104742 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system::leader-locking-kube-controller-manager: (1.019495ms) 404 [apply.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38604]
I0814 08:34:51.552389  104742 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.252673ms) 200 [apply.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38604]
I0814 08:34:51.572269  104742 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.601063ms) 201 [apply.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38604]
I0814 08:34:51.572547  104742 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system::leader-locking-kube-controller-manager in kube-system
I0814 08:34:51.590803  104742 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system::leader-locking-kube-scheduler: (1.09559ms) 404 [apply.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38604]
I0814 08:34:51.592719  104742 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.210521ms) 200 [apply.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38604]
I0814 08:34:51.611955  104742 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.319944ms) 201 [apply.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38604]
I0814 08:34:51.612445  104742 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system::leader-locking-kube-scheduler in kube-system
I0814 08:34:51.631548  104742 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/roles/system:controller:bootstrap-signer: (1.842082ms) 404 [apply.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38604]
I0814 08:34:51.631692  104742 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 08:34:51.631709  104742 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 08:34:51.631739  104742 httplog.go:90] GET /healthz: (1.481223ms) 0 [Go-http-client/1.1 127.0.0.1:38588]
I0814 08:34:51.634661  104742 httplog.go:90] GET /api/v1/namespaces/kube-public: (2.035971ms) 200 [apply.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38604]
I0814 08:34:51.645653  104742 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 08:34:51.645683  104742 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 08:34:51.645723  104742 httplog.go:90] GET /healthz: (846.93µs) 0 [apply.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38604]
I0814 08:34:51.652641  104742 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/roles: (3.013541ms) 201 [apply.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38604]
I0814 08:34:51.652909  104742 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-public
I0814 08:34:51.671311  104742 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::extension-apiserver-authentication-reader: (1.578994ms) 404 [apply.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38604]
I0814 08:34:51.673202  104742 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.227913ms) 200 [apply.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38604]
I0814 08:34:51.692371  104742 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.648249ms) 201 [apply.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38604]
I0814 08:34:51.692729  104742 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system::extension-apiserver-authentication-reader in kube-system
I0814 08:34:51.710783  104742 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::leader-locking-kube-controller-manager: (1.152864ms) 404 [apply.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38604]
I0814 08:34:51.712335  104742 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.151625ms) 200 [apply.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38604]
I0814 08:34:51.732814  104742 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 08:34:51.732845  104742 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 08:34:51.732875  104742 httplog.go:90] GET /healthz: (2.496199ms) 0 [Go-http-client/1.1 127.0.0.1:38604]
I0814 08:34:51.736010  104742 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (4.569437ms) 201 [apply.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38588]
I0814 08:34:51.736300  104742 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system::leader-locking-kube-controller-manager in kube-system
I0814 08:34:51.756141  104742 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::leader-locking-kube-scheduler: (3.690966ms) 404 [apply.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38604]
I0814 08:34:51.756311  104742 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 08:34:51.756338  104742 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 08:34:51.756373  104742 httplog.go:90] GET /healthz: (3.551608ms) 0 [apply.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38588]
I0814 08:34:51.758616  104742 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.793723ms) 200 [apply.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38604]
I0814 08:34:51.773308  104742 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (3.650327ms) 201 [apply.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38604]
I0814 08:34:51.773564  104742 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system::leader-locking-kube-scheduler in kube-system
I0814 08:34:51.790758  104742 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:bootstrap-signer: (1.111776ms) 404 [apply.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38604]
I0814 08:34:51.794777  104742 httplog.go:90] GET /api/v1/namespaces/kube-system: (3.453755ms) 200 [apply.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38604]
I0814 08:34:51.812456  104742 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.885935ms) 201 [apply.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38604]
I0814 08:34:51.812698  104742 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-system
I0814 08:34:51.830754  104742 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:cloud-provider: (1.112073ms) 404 [apply.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38604]
I0814 08:34:51.831284  104742 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 08:34:51.831315  104742 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 08:34:51.831346  104742 httplog.go:90] GET /healthz: (786.133µs) 0 [Go-http-client/1.1 127.0.0.1:38588]
I0814 08:34:51.832745  104742 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.6004ms) 200 [apply.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38604]
I0814 08:34:51.845897  104742 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 08:34:51.845930  104742 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 08:34:51.845967  104742 httplog.go:90] GET /healthz: (1.06989ms) 0 [apply.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38604]
I0814 08:34:51.851890  104742 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.284052ms) 201 [apply.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38604]
I0814 08:34:51.852175  104742 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system:controller:cloud-provider in kube-system
I0814 08:34:51.870880  104742 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:token-cleaner: (1.201231ms) 404 [apply.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38604]
I0814 08:34:51.872610  104742 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.307003ms) 200 [apply.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38604]
I0814 08:34:51.892036  104742 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.332763ms) 201 [apply.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38604]
I0814 08:34:51.892343  104742 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system:controller:token-cleaner in kube-system
I0814 08:34:51.914627  104742 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/rolebindings/system:controller:bootstrap-signer: (1.750606ms) 404 [apply.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38604]
I0814 08:34:51.916292  104742 httplog.go:90] GET /api/v1/namespaces/kube-public: (1.207271ms) 200 [apply.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38604]
I0814 08:34:51.931458  104742 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 08:34:51.931485  104742 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 08:34:51.931517  104742 httplog.go:90] GET /healthz: (815.693µs) 0 [Go-http-client/1.1 127.0.0.1:38588]
I0814 08:34:51.932454  104742 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/rolebindings: (2.429811ms) 201 [apply.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38604]
I0814 08:34:51.932782  104742 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-public
I0814 08:34:51.945616  104742 httplog.go:90] GET /healthz: (791.722µs) 200 [apply.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38604]
I0814 08:34:51.946938  104742 httplog.go:90] GET /api/v1/namespaces/default: (1.032514ms) 404 [apply.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38604]
I0814 08:34:51.949379  104742 httplog.go:90] POST /api/v1/namespaces: (2.00393ms) 201 [apply.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38604]
I0814 08:34:51.950709  104742 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (929.875µs) 404 [apply.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38604]
I0814 08:34:51.955081  104742 httplog.go:90] POST /api/v1/namespaces/default/services: (3.962431ms) 201 [apply.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38604]
I0814 08:34:51.956649  104742 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.095805ms) 404 [apply.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38604]
I0814 08:34:51.958872  104742 httplog.go:90] POST /api/v1/namespaces/default/endpoints: (1.938788ms) 201 [apply.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38604]
I0814 08:34:52.031145  104742 httplog.go:90] GET /healthz: (796.985µs) 200 [Go-http-client/1.1 127.0.0.1:38604]
I0814 08:34:52.037794  104742 httplog.go:90] PATCH /api/v1/namespaces/default/configmaps/test-cm?fieldManager=apply_test: (5.786918ms) 201 [apply.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38604]
I0814 08:34:52.040906  104742 httplog.go:90] PATCH /api/v1/namespaces/default/configmaps/test-cm?fieldManager=updater: (2.732513ms) 200 [apply.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38604]
I0814 08:34:52.042722  104742 httplog.go:90] GET /api/v1/namespaces/default/configmaps/test-cm: (1.088606ms) 200 [apply.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38604]
I0814 08:34:52.043070  104742 controller.go:176] Shutting down kubernetes service endpoint reconciler
I0814 08:34:52.044540  104742 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.25543ms) 200 [apply.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38604]
I0814 08:34:52.046868  104742 httplog.go:90] PUT /api/v1/namespaces/default/endpoints/kubernetes: (1.940713ms) 200 [apply.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:38604]
I0814 08:34:52.047373  104742 feature_gate.go:216] feature gates: &{map[ServerSideApply:false]}
--- FAIL: TestApplyManagedFields (5.35s)
    apply_test.go:362: Expected:
        {
        		"metadata": {
        			"name": "test-cm",
        			"namespace": "default",
        			"selfLink": "",
        			"uid": "01543447-0d8a-4c8d-8413-74d50f14c5b4",
        			"resourceVersion": "1638",
        			"creationTimestamp": "2019-08-14T08:34:52Z",
        			"labels": {
        				"test-label": "test"
        			},
        			"managedFields": [
        				{
        					"manager": "apply_test",
        					"operation": "Apply",
        					"apiVersion": "v1",
        					"fields": {
        						"f:metadata": {
        							"f:labels": {
        								"f:test-label": {}
        							}
        						}
        					}
        				},
        				{
        					"manager": "updater",
        					"operation": "Update",
        					"apiVersion": "v1",
        					"time": "2019-08-14T08:34:52Z",
        					"fields": {
        						"f:data": {
        							"f:key": {}
        						}
        					}
        				}
        			]
        		},
        		"data": {
        			"key": "new value"
        		}
        	}
        Got:
        {
        		"metadata": {
        			"name": "test-cm",
        			"namespace": "default",
        			"uid": "01543447-0d8a-4c8d-8413-74d50f14c5b4",
        			"resourceVersion": "1638",
        			"creationTimestamp": "2019-08-14T08:34:52Z",
        			"labels": {
        				"test-label": "test"
        			},
        			"managedFields": [
        				{
        					"manager": "apply_test",
        					"operation": "Apply",
        					"apiVersion": "v1",
        					"fields": {
        						"f:metadata": {
        							"f:labels": {
        								"f:test-label": {}
        							}
        						}
        					}
        				},
        				{
        					"manager": "updater",
        					"operation": "Update",
        					"apiVersion": "v1",
        					"time": "2019-08-14T08:34:52Z",
        					"fields": {
        						"f:data": {
        							"f:key": {}
        						}
        					}
        				}
        			]
        		},
        		"data": {
        			"key": "new value"
        		}
        	}

				from junit_eb089aee80105aff5db0557ae4449d31f19359f2_20190814-083326.xml
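The diff above differs in exactly one place: the test's expected fixture still contains "selfLink": "" under metadata, while the server response omits the field entirely, which is the intended effect of this PR deprecating SelfLink. A minimal, self-contained Go sketch of that mismatch (the trimmed fixtures and names below are illustrative, not the actual code from apply_test.go):

package main

import (
	"encoding/json"
	"fmt"
	"reflect"
)

func main() {
	// Trimmed-down stand-ins for the Expected/Got payloads above.
	expected := []byte(`{"metadata":{"name":"test-cm","selfLink":""}}`)
	got := []byte(`{"metadata":{"name":"test-cm"}}`)

	var e, g map[string]interface{}
	if err := json.Unmarshal(expected, &e); err != nil {
		panic(err)
	}
	if err := json.Unmarshal(got, &g); err != nil {
		panic(err)
	}

	// One map carries an explicit empty selfLink key, the other lacks the
	// key entirely, so a deep comparison fails even though no meaningful
	// value differs.
	fmt.Println("equal as-is:", reflect.DeepEqual(e, g)) // false

	// Dropping the deprecated field from the fixture makes the two match.
	delete(e["metadata"].(map[string]interface{}), "selfLink")
	fmt.Println("equal without selfLink:", reflect.DeepEqual(e, g)) // true
}

Updating the expected fixture (or stripping selfLink before comparing) is presumably the kind of follow-up this WIP PR still needs in apply_test.go.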



k8s.io/kubernetes/test/integration/master TestEmptyList 3.95s

go test -v k8s.io/kubernetes/test/integration/master -run TestEmptyList$
=== RUN   TestEmptyList
I0814 08:41:21.318857  108965 services.go:33] Network range for service cluster IPs is unspecified. Defaulting to {10.0.0.0 ffffff00}.
I0814 08:41:21.318920  108965 services.go:45] Setting service IP to "10.0.0.1" (read-write).
I0814 08:41:21.318944  108965 master.go:278] Node port range unspecified. Defaulting to 30000-32767.
I0814 08:41:21.318958  108965 master.go:234] Using reconciler: 
I0814 08:41:21.323145  108965 storage_factory.go:285] storing podtemplates in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"0f438f92-8e89-42c8-bfd5-0dbe392c698c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:41:21.323434  108965 client.go:354] parsed scheme: ""
I0814 08:41:21.323474  108965 client.go:354] scheme "" not registered, fallback to default scheme
I0814 08:41:21.323584  108965 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 08:41:21.323712  108965 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:41:21.326683  108965 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 08:41:21.327119  108965 store.go:1342] Monitoring podtemplates count at <storage-prefix>//podtemplates
I0814 08:41:21.327218  108965 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:41:21.327207  108965 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"0f438f92-8e89-42c8-bfd5-0dbe392c698c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:41:21.327499  108965 reflector.go:160] Listing and watching *core.PodTemplate from storage/cacher.go:/podtemplates
I0814 08:41:21.330837  108965 watch_cache.go:405] Replace watchCache (rev: 35184) 
I0814 08:41:21.331450  108965 client.go:354] parsed scheme: ""
I0814 08:41:21.331596  108965 client.go:354] scheme "" not registered, fallback to default scheme
I0814 08:41:21.331902  108965 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 08:41:21.332196  108965 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:41:21.333100  108965 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 08:41:21.333361  108965 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:41:21.333764  108965 store.go:1342] Monitoring events count at <storage-prefix>//events
I0814 08:41:21.333835  108965 reflector.go:160] Listing and watching *core.Event from storage/cacher.go:/events
I0814 08:41:21.333915  108965 storage_factory.go:285] storing limitranges in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"0f438f92-8e89-42c8-bfd5-0dbe392c698c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:41:21.334300  108965 client.go:354] parsed scheme: ""
I0814 08:41:21.334346  108965 client.go:354] scheme "" not registered, fallback to default scheme
I0814 08:41:21.334425  108965 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 08:41:21.334582  108965 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:41:21.335252  108965 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 08:41:21.335329  108965 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:41:21.335618  108965 store.go:1342] Monitoring limitranges count at <storage-prefix>//limitranges
I0814 08:41:21.335676  108965 reflector.go:160] Listing and watching *core.LimitRange from storage/cacher.go:/limitranges
I0814 08:41:21.335675  108965 storage_factory.go:285] storing resourcequotas in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"0f438f92-8e89-42c8-bfd5-0dbe392c698c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:41:21.335811  108965 client.go:354] parsed scheme: ""
I0814 08:41:21.335836  108965 client.go:354] scheme "" not registered, fallback to default scheme
I0814 08:41:21.335890  108965 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 08:41:21.335953  108965 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:41:21.336535  108965 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 08:41:21.336947  108965 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:41:21.337195  108965 store.go:1342] Monitoring resourcequotas count at <storage-prefix>//resourcequotas
I0814 08:41:21.337360  108965 reflector.go:160] Listing and watching *core.ResourceQuota from storage/cacher.go:/resourcequotas
I0814 08:41:21.338123  108965 watch_cache.go:405] Replace watchCache (rev: 35185) 
I0814 08:41:21.338690  108965 watch_cache.go:405] Replace watchCache (rev: 35185) 
I0814 08:41:21.337706  108965 storage_factory.go:285] storing secrets in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"0f438f92-8e89-42c8-bfd5-0dbe392c698c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:41:21.339427  108965 client.go:354] parsed scheme: ""
I0814 08:41:21.339458  108965 client.go:354] scheme "" not registered, fallback to default scheme
I0814 08:41:21.339515  108965 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 08:41:21.339570  108965 watch_cache.go:405] Replace watchCache (rev: 35185) 
I0814 08:41:21.339675  108965 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:41:21.340481  108965 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 08:41:21.340861  108965 store.go:1342] Monitoring secrets count at <storage-prefix>//secrets
I0814 08:41:21.341056  108965 reflector.go:160] Listing and watching *core.Secret from storage/cacher.go:/secrets
I0814 08:41:21.341158  108965 storage_factory.go:285] storing persistentvolumes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"0f438f92-8e89-42c8-bfd5-0dbe392c698c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:41:21.341251  108965 client.go:354] parsed scheme: ""
I0814 08:41:21.341283  108965 client.go:354] scheme "" not registered, fallback to default scheme
I0814 08:41:21.341359  108965 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 08:41:21.341437  108965 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:41:21.341876  108965 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 08:41:21.341935  108965 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:41:21.342116  108965 store.go:1342] Monitoring persistentvolumes count at <storage-prefix>//persistentvolumes
I0814 08:41:21.342158  108965 reflector.go:160] Listing and watching *core.PersistentVolume from storage/cacher.go:/persistentvolumes
I0814 08:41:21.342434  108965 storage_factory.go:285] storing persistentvolumeclaims in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"0f438f92-8e89-42c8-bfd5-0dbe392c698c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:41:21.342615  108965 client.go:354] parsed scheme: ""
I0814 08:41:21.342565  108965 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:41:21.342638  108965 client.go:354] scheme "" not registered, fallback to default scheme
I0814 08:41:21.342697  108965 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 08:41:21.342792  108965 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:41:21.343351  108965 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 08:41:21.343401  108965 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:41:21.343558  108965 store.go:1342] Monitoring persistentvolumeclaims count at <storage-prefix>//persistentvolumeclaims
I0814 08:41:21.343646  108965 reflector.go:160] Listing and watching *core.PersistentVolumeClaim from storage/cacher.go:/persistentvolumeclaims
I0814 08:41:21.343796  108965 watch_cache.go:405] Replace watchCache (rev: 35186) 
I0814 08:41:21.343839  108965 storage_factory.go:285] storing configmaps in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"0f438f92-8e89-42c8-bfd5-0dbe392c698c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:41:21.343994  108965 client.go:354] parsed scheme: ""
I0814 08:41:21.344053  108965 client.go:354] scheme "" not registered, fallback to default scheme
I0814 08:41:21.344301  108965 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 08:41:21.344407  108965 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:41:21.345162  108965 watch_cache.go:405] Replace watchCache (rev: 35186) 
I0814 08:41:21.345420  108965 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 08:41:21.345585  108965 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:41:21.345903  108965 store.go:1342] Monitoring configmaps count at <storage-prefix>//configmaps
I0814 08:41:21.345945  108965 reflector.go:160] Listing and watching *core.ConfigMap from storage/cacher.go:/configmaps
I0814 08:41:21.346398  108965 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"0f438f92-8e89-42c8-bfd5-0dbe392c698c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:41:21.346810  108965 watch_cache.go:405] Replace watchCache (rev: 35186) 
I0814 08:41:21.346896  108965 client.go:354] parsed scheme: ""
I0814 08:41:21.347585  108965 client.go:354] scheme "" not registered, fallback to default scheme
I0814 08:41:21.347798  108965 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 08:41:21.347842  108965 watch_cache.go:405] Replace watchCache (rev: 35186) 
I0814 08:41:21.348009  108965 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:41:21.349361  108965 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 08:41:21.349678  108965 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:41:21.350483  108965 store.go:1342] Monitoring namespaces count at <storage-prefix>//namespaces
I0814 08:41:21.350578  108965 reflector.go:160] Listing and watching *core.Namespace from storage/cacher.go:/namespaces
I0814 08:41:21.351133  108965 storage_factory.go:285] storing endpoints in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"0f438f92-8e89-42c8-bfd5-0dbe392c698c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:41:21.351424  108965 client.go:354] parsed scheme: ""
I0814 08:41:21.351738  108965 client.go:354] scheme "" not registered, fallback to default scheme
I0814 08:41:21.352062  108965 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 08:41:21.352268  108965 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:41:21.353424  108965 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 08:41:21.353620  108965 watch_cache.go:405] Replace watchCache (rev: 35188) 
I0814 08:41:21.353873  108965 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:41:21.354365  108965 store.go:1342] Monitoring endpoints count at <storage-prefix>//services/endpoints
I0814 08:41:21.354460  108965 reflector.go:160] Listing and watching *core.Endpoints from storage/cacher.go:/services/endpoints
I0814 08:41:21.354937  108965 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"0f438f92-8e89-42c8-bfd5-0dbe392c698c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:41:21.355133  108965 client.go:354] parsed scheme: ""
I0814 08:41:21.355168  108965 client.go:354] scheme "" not registered, fallback to default scheme
I0814 08:41:21.355235  108965 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 08:41:21.355355  108965 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:41:21.356744  108965 watch_cache.go:405] Replace watchCache (rev: 35188) 
I0814 08:41:21.356894  108965 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 08:41:21.357231  108965 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:41:21.357970  108965 store.go:1342] Monitoring nodes count at <storage-prefix>//minions
I0814 08:41:21.358135  108965 reflector.go:160] Listing and watching *core.Node from storage/cacher.go:/minions
I0814 08:41:21.358932  108965 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"0f438f92-8e89-42c8-bfd5-0dbe392c698c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:41:21.359162  108965 client.go:354] parsed scheme: ""
I0814 08:41:21.359189  108965 client.go:354] scheme "" not registered, fallback to default scheme
I0814 08:41:21.359307  108965 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 08:41:21.359686  108965 watch_cache.go:405] Replace watchCache (rev: 35188) 
I0814 08:41:21.360563  108965 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:41:21.361178  108965 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 08:41:21.361228  108965 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:41:21.361471  108965 store.go:1342] Monitoring pods count at <storage-prefix>//pods
I0814 08:41:21.361678  108965 reflector.go:160] Listing and watching *core.Pod from storage/cacher.go:/pods
I0814 08:41:21.361767  108965 storage_factory.go:285] storing serviceaccounts in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"0f438f92-8e89-42c8-bfd5-0dbe392c698c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:41:21.361904  108965 client.go:354] parsed scheme: ""
I0814 08:41:21.361929  108965 client.go:354] scheme "" not registered, fallback to default scheme
I0814 08:41:21.361981  108965 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 08:41:21.362168  108965 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:41:21.364082  108965 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 08:41:21.364230  108965 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:41:21.364422  108965 watch_cache.go:405] Replace watchCache (rev: 35188) 
I0814 08:41:21.365131  108965 store.go:1342] Monitoring serviceaccounts count at <storage-prefix>//serviceaccounts
I0814 08:41:21.365295  108965 reflector.go:160] Listing and watching *core.ServiceAccount from storage/cacher.go:/serviceaccounts
I0814 08:41:21.365598  108965 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"0f438f92-8e89-42c8-bfd5-0dbe392c698c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:41:21.365770  108965 client.go:354] parsed scheme: ""
I0814 08:41:21.365804  108965 client.go:354] scheme "" not registered, fallback to default scheme
I0814 08:41:21.365897  108965 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 08:41:21.366252  108965 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:41:21.366821  108965 watch_cache.go:405] Replace watchCache (rev: 35190) 
I0814 08:41:21.367266  108965 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 08:41:21.367434  108965 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:41:21.367613  108965 store.go:1342] Monitoring services count at <storage-prefix>//services/specs
I0814 08:41:21.367689  108965 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"0f438f92-8e89-42c8-bfd5-0dbe392c698c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:41:21.367765  108965 reflector.go:160] Listing and watching *core.Service from storage/cacher.go:/services/specs
I0814 08:41:21.368147  108965 client.go:354] parsed scheme: ""
I0814 08:41:21.368173  108965 client.go:354] scheme "" not registered, fallback to default scheme
I0814 08:41:21.368320  108965 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 08:41:21.368474  108965 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:41:21.369372  108965 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 08:41:21.369562  108965 client.go:354] parsed scheme: ""
I0814 08:41:21.369591  108965 client.go:354] scheme "" not registered, fallback to default scheme
I0814 08:41:21.369630  108965 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 08:41:21.369744  108965 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:41:21.369835  108965 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:41:21.370482  108965 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 08:41:21.370884  108965 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"0f438f92-8e89-42c8-bfd5-0dbe392c698c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:41:21.371009  108965 client.go:354] parsed scheme: ""
I0814 08:41:21.371064  108965 client.go:354] scheme "" not registered, fallback to default scheme
I0814 08:41:21.371135  108965 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 08:41:21.371320  108965 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:41:21.371428  108965 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:41:21.374840  108965 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 08:41:21.375193  108965 store.go:1342] Monitoring replicationcontrollers count at <storage-prefix>//controllers
I0814 08:41:21.375685  108965 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:41:21.376296  108965 reflector.go:160] Listing and watching *core.ReplicationController from storage/cacher.go:/controllers
I0814 08:41:21.377410  108965 watch_cache.go:405] Replace watchCache (rev: 35192) 
I0814 08:41:21.378677  108965 watch_cache.go:405] Replace watchCache (rev: 35192) 
I0814 08:41:21.381116  108965 storage_factory.go:285] storing bindings in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"0f438f92-8e89-42c8-bfd5-0dbe392c698c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:41:21.383270  108965 storage_factory.go:285] storing componentstatuses in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"0f438f92-8e89-42c8-bfd5-0dbe392c698c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:41:21.386278  108965 storage_factory.go:285] storing configmaps in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"0f438f92-8e89-42c8-bfd5-0dbe392c698c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:41:21.388740  108965 storage_factory.go:285] storing endpoints in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"0f438f92-8e89-42c8-bfd5-0dbe392c698c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:41:21.391386  108965 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"0f438f92-8e89-42c8-bfd5-0dbe392c698c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:41:21.393548  108965 storage_factory.go:285] storing limitranges in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"0f438f92-8e89-42c8-bfd5-0dbe392c698c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:41:21.395595  108965 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"0f438f92-8e89-42c8-bfd5-0dbe392c698c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:41:21.396364  108965 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"0f438f92-8e89-42c8-bfd5-0dbe392c698c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:41:21.400747  108965 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"0f438f92-8e89-42c8-bfd5-0dbe392c698c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:41:21.402136  108965 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"0f438f92-8e89-42c8-bfd5-0dbe392c698c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:41:21.403979  108965 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"0f438f92-8e89-42c8-bfd5-0dbe392c698c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:41:21.404664  108965 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"0f438f92-8e89-42c8-bfd5-0dbe392c698c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:41:21.406782  108965 storage_factory.go:285] storing persistentvolumeclaims in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"0f438f92-8e89-42c8-bfd5-0dbe392c698c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:41:21.407657  108965 storage_factory.go:285] storing persistentvolumeclaims in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"0f438f92-8e89-42c8-bfd5-0dbe392c698c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:41:21.408761  108965 storage_factory.go:285] storing persistentvolumes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"0f438f92-8e89-42c8-bfd5-0dbe392c698c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:41:21.409244  108965 storage_factory.go:285] storing persistentvolumes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"0f438f92-8e89-42c8-bfd5-0dbe392c698c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:41:21.410358  108965 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"0f438f92-8e89-42c8-bfd5-0dbe392c698c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:41:21.410706  108965 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"0f438f92-8e89-42c8-bfd5-0dbe392c698c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:41:21.410899  108965 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"0f438f92-8e89-42c8-bfd5-0dbe392c698c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:41:21.411087  108965 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"0f438f92-8e89-42c8-bfd5-0dbe392c698c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:41:21.411349  108965 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"0f438f92-8e89-42c8-bfd5-0dbe392c698c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:41:21.411616  108965 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"0f438f92-8e89-42c8-bfd5-0dbe392c698c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:41:21.411902  108965 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"0f438f92-8e89-42c8-bfd5-0dbe392c698c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:41:21.413124  108965 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"0f438f92-8e89-42c8-bfd5-0dbe392c698c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:41:21.413468  108965 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"0f438f92-8e89-42c8-bfd5-0dbe392c698c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:41:21.414851  108965 storage_factory.go:285] storing podtemplates in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"0f438f92-8e89-42c8-bfd5-0dbe392c698c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:41:21.416437  108965 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"0f438f92-8e89-42c8-bfd5-0dbe392c698c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:41:21.416853  108965 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"0f438f92-8e89-42c8-bfd5-0dbe392c698c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:41:21.417267  108965 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"0f438f92-8e89-42c8-bfd5-0dbe392c698c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:41:21.418447  108965 storage_factory.go:285] storing resourcequotas in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"0f438f92-8e89-42c8-bfd5-0dbe392c698c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:41:21.418789  108965 storage_factory.go:285] storing resourcequotas in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"0f438f92-8e89-42c8-bfd5-0dbe392c698c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:41:21.419933  108965 storage_factory.go:285] storing secrets in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"0f438f92-8e89-42c8-bfd5-0dbe392c698c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:41:21.420898  108965 storage_factory.go:285] storing serviceaccounts in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"0f438f92-8e89-42c8-bfd5-0dbe392c698c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:41:21.421845  108965 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"0f438f92-8e89-42c8-bfd5-0dbe392c698c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:41:21.423045  108965 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"0f438f92-8e89-42c8-bfd5-0dbe392c698c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:41:21.423448  108965 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"0f438f92-8e89-42c8-bfd5-0dbe392c698c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:41:21.423623  108965 master.go:423] Skipping disabled API group "auditregistration.k8s.io".
I0814 08:41:21.423656  108965 master.go:434] Enabling API group "authentication.k8s.io".
I0814 08:41:21.423686  108965 master.go:434] Enabling API group "authorization.k8s.io".
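[Editorial aside, illustrative only] Every "storing X in v1" line above prints the same storagebackend.Config value. A sketch reconstructing that value as a Go literal — field names and values are taken verbatim from the log, with the nanosecond durations rewritten readably:

// Reconstruction of the Config dumped by storage_factory.go:285 above.
// Fields the log shows as nil/zero (Codec, Transformer, key files) are left
// at their zero values here.
package main

import (
	"fmt"
	"time"

	"k8s.io/apiserver/pkg/storage/storagebackend"
)

func main() {
	cfg := storagebackend.Config{
		Type:   "", // empty selects the default backend
		Prefix: "0f438f92-8e89-42c8-bfd5-0dbe392c698c",
		Transport: storagebackend.TransportConfig{
			ServerList: []string{"http://127.0.0.1:2379"},
		},
		Paging:                true,
		CompactionInterval:    5 * time.Minute, // 300000000000ns in the log
		CountMetricPollPeriod: 1 * time.Minute, // 60000000000ns in the log
	}
	fmt.Printf("%+v\n", cfg)
}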
I0814 08:41:21.423878  108965 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"0f438f92-8e89-42c8-bfd5-0dbe392c698c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:41:21.424066  108965 client.go:354] parsed scheme: ""
I0814 08:41:21.424092  108965 client.go:354] scheme "" not registered, fallback to default scheme
I0814 08:41:21.424155  108965 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 08:41:21.424256  108965 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:41:21.424930  108965 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 08:41:21.425300  108965 store.go:1342] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I0814 08:41:21.425331  108965 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:41:21.425457  108965 reflector.go:160] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I0814 08:41:21.425548  108965 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"0f438f92-8e89-42c8-bfd5-0dbe392c698c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:41:21.425656  108965 client.go:354] parsed scheme: ""
I0814 08:41:21.425675  108965 client.go:354] scheme "" not registered, fallback to default scheme
I0814 08:41:21.425722  108965 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 08:41:21.425958  108965 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:41:21.426494  108965 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 08:41:21.426684  108965 store.go:1342] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I0814 08:41:21.426873  108965 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"0f438f92-8e89-42c8-bfd5-0dbe392c698c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:41:21.426967  108965 client.go:354] parsed scheme: ""
I0814 08:41:21.426979  108965 client.go:354] scheme "" not registered, fallback to default scheme
I0814 08:41:21.427173  108965 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 08:41:21.427225  108965 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:41:21.427301  108965 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:41:21.427386  108965 reflector.go:160] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I0814 08:41:21.429647  108965 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 08:41:21.429976  108965 watch_cache.go:405] Replace watchCache (rev: 35209) 
I0814 08:41:21.431056  108965 store.go:1342] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I0814 08:41:21.431092  108965 master.go:434] Enabling API group "autoscaling".
I0814 08:41:21.431164  108965 reflector.go:160] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
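[Editorial aside, illustrative only] The repeated client.go "parsed scheme" / "fallback to default scheme" lines and the balancer "pin" lines above come from etcd's clientv3 dialing its endpoint over gRPC. A minimal equivalent dial against the same endpoint the log shows:

// Hypothetical sketch: dial the etcd endpoint from the log with clientv3,
// which emits the same resolver/balancer log lines seen above.
package main

import (
	"context"
	"fmt"
	"time"

	"go.etcd.io/etcd/clientv3"
)

func main() {
	cli, err := clientv3.New(clientv3.Config{
		Endpoints:   []string{"http://127.0.0.1:2379"}, // endpoint from the log
		DialTimeout: 5 * time.Second,
	})
	if err != nil {
		panic(err)
	}
	defer cli.Close()

	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
	defer cancel()
	resp, err := cli.Get(ctx, "/", clientv3.WithPrefix(), clientv3.WithCountOnly())
	if err != nil {
		panic(err)
	}
	fmt.Printf("keys under /: %d\n", resp.Count)
}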
I0814 08:41:21.431323  108965 storage_factory.go:285] storing jobs.batch in batch/v1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"0f438f92-8e89-42c8-bfd5-0dbe392c698c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:41:21.431459  108965 client.go:354] parsed scheme: ""
I0814 08:41:21.431922  108965 client.go:354] scheme "" not registered, fallback to default scheme
I0814 08:41:21.432160  108965 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 08:41:21.432277  108965 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:41:21.432796  108965 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 08:41:21.433001  108965 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:41:21.432852  108965 watch_cache.go:405] Replace watchCache (rev: 35210) 
I0814 08:41:21.433149  108965 store.go:1342] Monitoring jobs.batch count at <storage-prefix>//jobs
I0814 08:41:21.433407  108965 storage_factory.go:285] storing cronjobs.batch in batch/v1beta1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"0f438f92-8e89-42c8-bfd5-0dbe392c698c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:41:21.433480  108965 reflector.go:160] Listing and watching *batch.Job from storage/cacher.go:/jobs
I0814 08:41:21.433502  108965 client.go:354] parsed scheme: ""
I0814 08:41:21.433511  108965 client.go:354] scheme "" not registered, fallback to default scheme
I0814 08:41:21.433546  108965 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 08:41:21.433762  108965 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:41:21.434552  108965 watch_cache.go:405] Replace watchCache (rev: 35210) 
I0814 08:41:21.434860  108965 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 08:41:21.434934  108965 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:41:21.435587  108965 store.go:1342] Monitoring cronjobs.batch count at <storage-prefix>//cronjobs
I0814 08:41:21.435624  108965 master.go:434] Enabling API group "batch".
I0814 08:41:21.435799  108965 reflector.go:160] Listing and watching *batch.CronJob from storage/cacher.go:/cronjobs
I0814 08:41:21.435885  108965 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"0f438f92-8e89-42c8-bfd5-0dbe392c698c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:41:21.436052  108965 client.go:354] parsed scheme: ""
I0814 08:41:21.436086  108965 client.go:354] scheme "" not registered, fallback to default scheme
I0814 08:41:21.436149  108965 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 08:41:21.436418  108965 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:41:21.436860  108965 watch_cache.go:405] Replace watchCache (rev: 35210) 
I0814 08:41:21.436873  108965 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:41:21.437587  108965 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 08:41:21.438321  108965 watch_cache.go:405] Replace watchCache (rev: 35210) 
I0814 08:41:21.440299  108965 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:41:21.441140  108965 store.go:1342] Monitoring certificatesigningrequests.certificates.k8s.io count at <storage-prefix>//certificatesigningrequests
I0814 08:41:21.441257  108965 reflector.go:160] Listing and watching *certificates.CertificateSigningRequest from storage/cacher.go:/certificatesigningrequests
I0814 08:41:21.442005  108965 master.go:434] Enabling API group "certificates.k8s.io".
I0814 08:41:21.442912  108965 watch_cache.go:405] Replace watchCache (rev: 35210) 
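[Editorial aside, illustrative only] Each "Replace watchCache (rev: N)" line records the cache swapping in a freshly listed snapshot at a resource version. client-go's cache.Store exposes the same Replace contract; a small sketch on a plain in-memory store (the key function and items below are hypothetical):

// Replace discards prior cache state and installs the listed items,
// tagged with the list's resource version ("35210" mirrors the log).
package main

import (
	"fmt"

	"k8s.io/client-go/tools/cache"
)

type item struct{ name string }

func key(obj interface{}) (string, error) { return obj.(item).name, nil }

func main() {
	store := cache.NewStore(key)
	_ = store.Add(item{"stale"})

	_ = store.Replace([]interface{}{item{"a"}, item{"b"}}, "35210")

	// Only the replaced items remain; "stale" is gone.
	fmt.Println(store.ListKeys())
}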
I0814 08:41:21.443491  108965 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"0f438f92-8e89-42c8-bfd5-0dbe392c698c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:41:21.443650  108965 client.go:354] parsed scheme: ""
I0814 08:41:21.443667  108965 client.go:354] scheme "" not registered, fallback to default scheme
I0814 08:41:21.443725  108965 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 08:41:21.443898  108965 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:41:21.444716  108965 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 08:41:21.444818  108965 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:41:21.448618  108965 store.go:1342] Monitoring leases.coordination.k8s.io count at <storage-prefix>//leases
I0814 08:41:21.448760  108965 reflector.go:160] Listing and watching *coordination.Lease from storage/cacher.go:/leases
I0814 08:41:21.448981  108965 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"0f438f92-8e89-42c8-bfd5-0dbe392c698c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:41:21.449273  108965 client.go:354] parsed scheme: ""
I0814 08:41:21.449322  108965 client.go:354] scheme "" not registered, fallback to default scheme
I0814 08:41:21.449436  108965 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 08:41:21.449625  108965 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:41:21.450606  108965 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 08:41:21.450766  108965 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:41:21.452631  108965 watch_cache.go:405] Replace watchCache (rev: 35214) 
I0814 08:41:21.452655  108965 reflector.go:160] Listing and watching *coordination.Lease from storage/cacher.go:/leases
I0814 08:41:21.454407  108965 watch_cache.go:405] Replace watchCache (rev: 35214) 
I0814 08:41:21.452232  108965 store.go:1342] Monitoring leases.coordination.k8s.io count at <storage-prefix>//leases
I0814 08:41:21.461906  108965 master.go:434] Enabling API group "coordination.k8s.io".
I0814 08:41:21.462666  108965 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"0f438f92-8e89-42c8-bfd5-0dbe392c698c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:41:21.464529  108965 client.go:354] parsed scheme: ""
I0814 08:41:21.464901  108965 client.go:354] scheme "" not registered, fallback to default scheme
I0814 08:41:21.465225  108965 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 08:41:21.465574  108965 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:41:21.475755  108965 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 08:41:21.475844  108965 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:41:21.476308  108965 store.go:1342] Monitoring ingresses.networking.k8s.io count at <storage-prefix>//ingress
I0814 08:41:21.476465  108965 master.go:434] Enabling API group "extensions".
I0814 08:41:21.476460  108965 reflector.go:160] Listing and watching *networking.Ingress from storage/cacher.go:/ingress
I0814 08:41:21.477187  108965 storage_factory.go:285] storing networkpolicies.networking.k8s.io in networking.k8s.io/v1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"0f438f92-8e89-42c8-bfd5-0dbe392c698c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:41:21.477673  108965 client.go:354] parsed scheme: ""
I0814 08:41:21.477730  108965 client.go:354] scheme "" not registered, fallback to default scheme
I0814 08:41:21.477843  108965 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 08:41:21.478302  108965 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:41:21.479745  108965 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 08:41:21.480190  108965 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:41:21.480279  108965 store.go:1342] Monitoring networkpolicies.networking.k8s.io count at <storage-prefix>//networkpolicies
I0814 08:41:21.480306  108965 watch_cache.go:405] Replace watchCache (rev: 35223) 
I0814 08:41:21.480359  108965 reflector.go:160] Listing and watching *networking.NetworkPolicy from storage/cacher.go:/networkpolicies
I0814 08:41:21.480594  108965 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"0f438f92-8e89-42c8-bfd5-0dbe392c698c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:41:21.480776  108965 client.go:354] parsed scheme: ""
I0814 08:41:21.480808  108965 client.go:354] scheme "" not registered, fallback to default scheme
I0814 08:41:21.480875  108965 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 08:41:21.480999  108965 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:41:21.481991  108965 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 08:41:21.482434  108965 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:41:21.485260  108965 store.go:1342] Monitoring ingresses.networking.k8s.io count at <storage-prefix>//ingress
I0814 08:41:21.485447  108965 master.go:434] Enabling API group "networking.k8s.io".
I0814 08:41:21.483969  108965 watch_cache.go:405] Replace watchCache (rev: 35223) 
I0814 08:41:21.485630  108965 reflector.go:160] Listing and watching *networking.Ingress from storage/cacher.go:/ingress
I0814 08:41:21.488010  108965 watch_cache.go:405] Replace watchCache (rev: 35223) 
I0814 08:41:21.485753  108965 storage_factory.go:285] storing runtimeclasses.node.k8s.io in node.k8s.io/v1beta1, reading as node.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"0f438f92-8e89-42c8-bfd5-0dbe392c698c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:41:21.489702  108965 client.go:354] parsed scheme: ""
I0814 08:41:21.489897  108965 client.go:354] scheme "" not registered, fallback to default scheme
I0814 08:41:21.490183  108965 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 08:41:21.490591  108965 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:41:21.491780  108965 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 08:41:21.492766  108965 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:41:21.494260  108965 store.go:1342] Monitoring runtimeclasses.node.k8s.io count at <storage-prefix>//runtimeclasses
I0814 08:41:21.497234  108965 master.go:434] Enabling API group "node.k8s.io".
I0814 08:41:21.494412  108965 reflector.go:160] Listing and watching *node.RuntimeClass from storage/cacher.go:/runtimeclasses
I0814 08:41:21.497659  108965 storage_factory.go:285] storing poddisruptionbudgets.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"0f438f92-8e89-42c8-bfd5-0dbe392c698c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:41:21.498943  108965 client.go:354] parsed scheme: ""
I0814 08:41:21.499193  108965 client.go:354] scheme "" not registered, fallback to default scheme
I0814 08:41:21.499307  108965 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 08:41:21.499564  108965 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:41:21.502295  108965 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 08:41:21.502386  108965 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:41:21.502888  108965 store.go:1342] Monitoring poddisruptionbudgets.policy count at <storage-prefix>//poddisruptionbudgets
I0814 08:41:21.503316  108965 reflector.go:160] Listing and watching *policy.PodDisruptionBudget from storage/cacher.go:/poddisruptionbudgets
I0814 08:41:21.503405  108965 storage_factory.go:285] storing podsecuritypolicies.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"0f438f92-8e89-42c8-bfd5-0dbe392c698c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:41:21.506328  108965 watch_cache.go:405] Replace watchCache (rev: 35227) 
I0814 08:41:21.507458  108965 client.go:354] parsed scheme: ""
I0814 08:41:21.507525  108965 client.go:354] scheme "" not registered, fallback to default scheme
I0814 08:41:21.507661  108965 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 08:41:21.507767  108965 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:41:21.508307  108965 watch_cache.go:405] Replace watchCache (rev: 35227) 
I0814 08:41:21.510880  108965 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 08:41:21.511170  108965 store.go:1342] Monitoring podsecuritypolicies.policy count at <storage-prefix>//podsecuritypolicy
I0814 08:41:21.511205  108965 master.go:434] Enabling API group "policy".
I0814 08:41:21.511239  108965 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:41:21.511286  108965 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"0f438f92-8e89-42c8-bfd5-0dbe392c698c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:41:21.511325  108965 reflector.go:160] Listing and watching *policy.PodSecurityPolicy from storage/cacher.go:/podsecuritypolicy
I0814 08:41:21.511400  108965 client.go:354] parsed scheme: ""
I0814 08:41:21.511411  108965 client.go:354] scheme "" not registered, fallback to default scheme
I0814 08:41:21.511456  108965 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 08:41:21.511650  108965 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:41:21.536470  108965 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 08:41:21.536580  108965 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:41:21.536752  108965 store.go:1342] Monitoring roles.rbac.authorization.k8s.io count at <storage-prefix>//roles
I0814 08:41:21.537076  108965 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"0f438f92-8e89-42c8-bfd5-0dbe392c698c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:41:21.537209  108965 reflector.go:160] Listing and watching *rbac.Role from storage/cacher.go:/roles
I0814 08:41:21.537297  108965 client.go:354] parsed scheme: ""
I0814 08:41:21.537334  108965 client.go:354] scheme "" not registered, fallback to default scheme
I0814 08:41:21.537394  108965 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 08:41:21.537595  108965 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:41:21.538316  108965 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 08:41:21.538508  108965 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:41:21.538684  108965 store.go:1342] Monitoring rolebindings.rbac.authorization.k8s.io count at <storage-prefix>//rolebindings
I0814 08:41:21.538791  108965 reflector.go:160] Listing and watching *rbac.RoleBinding from storage/cacher.go:/rolebindings
I0814 08:41:21.538766  108965 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"0f438f92-8e89-42c8-bfd5-0dbe392c698c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:41:21.538954  108965 client.go:354] parsed scheme: ""
I0814 08:41:21.538989  108965 client.go:354] scheme "" not registered, fallback to default scheme
I0814 08:41:21.539237  108965 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 08:41:21.539387  108965 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:41:21.540428  108965 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 08:41:21.540541  108965 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:41:21.540789  108965 store.go:1342] Monitoring clusterroles.rbac.authorization.k8s.io count at <storage-prefix>//clusterroles
I0814 08:41:21.541239  108965 reflector.go:160] Listing and watching *rbac.ClusterRole from storage/cacher.go:/clusterroles
I0814 08:41:21.541641  108965 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"0f438f92-8e89-42c8-bfd5-0dbe392c698c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:41:21.541913  108965 client.go:354] parsed scheme: ""
I0814 08:41:21.541990  108965 client.go:354] scheme "" not registered, fallback to default scheme
I0814 08:41:21.542159  108965 watch_cache.go:405] Replace watchCache (rev: 35237) 
I0814 08:41:21.542182  108965 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 08:41:21.542370  108965 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:41:21.543438  108965 watch_cache.go:405] Replace watchCache (rev: 35237) 
I0814 08:41:21.543740  108965 watch_cache.go:405] Replace watchCache (rev: 35237) 
I0814 08:41:21.544824  108965 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 08:41:21.544950  108965 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:41:21.545308  108965 store.go:1342] Monitoring clusterrolebindings.rbac.authorization.k8s.io count at <storage-prefix>//clusterrolebindings
I0814 08:41:21.545427  108965 reflector.go:160] Listing and watching *rbac.ClusterRoleBinding from storage/cacher.go:/clusterrolebindings
I0814 08:41:21.545495  108965 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"0f438f92-8e89-42c8-bfd5-0dbe392c698c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:41:21.546084  108965 client.go:354] parsed scheme: ""
I0814 08:41:21.546115  108965 client.go:354] scheme "" not registered, fallback to default scheme
I0814 08:41:21.546296  108965 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 08:41:21.546415  108965 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:41:21.547425  108965 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 08:41:21.547713  108965 store.go:1342] Monitoring roles.rbac.authorization.k8s.io count at <storage-prefix>//roles
I0814 08:41:21.548132  108965 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"0f438f92-8e89-42c8-bfd5-0dbe392c698c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:41:21.548293  108965 client.go:354] parsed scheme: ""
I0814 08:41:21.548317  108965 client.go:354] scheme "" not registered, fallback to default scheme
I0814 08:41:21.548387  108965 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 08:41:21.548517  108965 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:41:21.548641  108965 reflector.go:160] Listing and watching *rbac.Role from storage/cacher.go:/roles
I0814 08:41:21.549150  108965 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:41:21.549676  108965 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 08:41:21.552763  108965 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:41:21.553241  108965 watch_cache.go:405] Replace watchCache (rev: 35237) 
I0814 08:41:21.553561  108965 store.go:1342] Monitoring rolebindings.rbac.authorization.k8s.io count at <storage-prefix>//rolebindings
I0814 08:41:21.553641  108965 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"0f438f92-8e89-42c8-bfd5-0dbe392c698c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:41:21.553780  108965 reflector.go:160] Listing and watching *rbac.RoleBinding from storage/cacher.go:/rolebindings
I0814 08:41:21.553801  108965 client.go:354] parsed scheme: ""
I0814 08:41:21.554076  108965 client.go:354] scheme "" not registered, fallback to default scheme
I0814 08:41:21.554171  108965 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 08:41:21.554267  108965 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:41:21.555067  108965 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 08:41:21.555255  108965 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:41:21.555352  108965 store.go:1342] Monitoring clusterroles.rbac.authorization.k8s.io count at <storage-prefix>//clusterroles
I0814 08:41:21.555616  108965 watch_cache.go:405] Replace watchCache (rev: 35239) 
I0814 08:41:21.555762  108965 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"0f438f92-8e89-42c8-bfd5-0dbe392c698c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:41:21.555946  108965 client.go:354] parsed scheme: ""
I0814 08:41:21.555978  108965 client.go:354] scheme "" not registered, fallback to default scheme
I0814 08:41:21.556193  108965 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 08:41:21.556288  108965 reflector.go:160] Listing and watching *rbac.ClusterRole from storage/cacher.go:/clusterroles
I0814 08:41:21.556566  108965 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:41:21.557283  108965 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 08:41:21.557531  108965 store.go:1342] Monitoring clusterrolebindings.rbac.authorization.k8s.io count at <storage-prefix>//clusterrolebindings
I0814 08:41:21.557589  108965 master.go:434] Enabling API group "rbac.authorization.k8s.io".
I0814 08:41:21.557989  108965 watch_cache.go:405] Replace watchCache (rev: 35239) 
I0814 08:41:21.558344  108965 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:41:21.558706  108965 reflector.go:160] Listing and watching *rbac.ClusterRoleBinding from storage/cacher.go:/clusterrolebindings
I0814 08:41:21.560104  108965 watch_cache.go:405] Replace watchCache (rev: 35241) 
I0814 08:41:21.561159  108965 watch_cache.go:405] Replace watchCache (rev: 35239) 
I0814 08:41:21.563340  108965 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"0f438f92-8e89-42c8-bfd5-0dbe392c698c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:41:21.563585  108965 client.go:354] parsed scheme: ""
I0814 08:41:21.563624  108965 client.go:354] scheme "" not registered, fallback to default scheme
I0814 08:41:21.563643  108965 watch_cache.go:405] Replace watchCache (rev: 35230) 
I0814 08:41:21.563712  108965 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 08:41:21.564006  108965 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:41:21.565174  108965 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 08:41:21.565507  108965 store.go:1342] Monitoring priorityclasses.scheduling.k8s.io count at <storage-prefix>//priorityclasses
I0814 08:41:21.565691  108965 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:41:21.565834  108965 reflector.go:160] Listing and watching *scheduling.PriorityClass from storage/cacher.go:/priorityclasses
I0814 08:41:21.565845  108965 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"0f438f92-8e89-42c8-bfd5-0dbe392c698c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:41:21.566198  108965 client.go:354] parsed scheme: ""
I0814 08:41:21.566240  108965 client.go:354] scheme "" not registered, fallback to default scheme
I0814 08:41:21.566343  108965 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 08:41:21.566475  108965 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:41:21.567233  108965 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 08:41:21.567423  108965 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:41:21.567624  108965 store.go:1342] Monitoring priorityclasses.scheduling.k8s.io count at <storage-prefix>//priorityclasses
I0814 08:41:21.567676  108965 master.go:434] Enabling API group "scheduling.k8s.io".
I0814 08:41:21.567713  108965 reflector.go:160] Listing and watching *scheduling.PriorityClass from storage/cacher.go:/priorityclasses
I0814 08:41:21.567827  108965 master.go:423] Skipping disabled API group "settings.k8s.io".
I0814 08:41:21.568090  108965 watch_cache.go:405] Replace watchCache (rev: 35243) 
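[Editorial aside, illustrative only] The master.go "Enabling API group" / "Skipping disabled API group" decisions above are driven by an API resource config: a group is installed only if at least one of its versions is enabled. A hedged sketch of that mechanism using k8s.io/apiserver's ResourceConfig (the enabled/disabled choices below are examples mirroring the log, not the server's actual defaults):

// Sketch of the enable/skip decision recorded by the log lines above.
package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/runtime/schema"
	serverstorage "k8s.io/apiserver/pkg/server/storage"
)

func main() {
	cfg := serverstorage.NewResourceConfig()
	cfg.EnableVersions(schema.GroupVersion{Group: "scheduling.k8s.io", Version: "v1"})
	// settings.k8s.io gets no enabled versions, so its group is skipped.

	for _, g := range []string{"scheduling.k8s.io", "settings.k8s.io"} {
		if cfg.AnyVersionForGroupEnabled(g) {
			fmt.Printf("Enabling API group %q.\n", g)
		} else {
			fmt.Printf("Skipping disabled API group %q.\n", g)
		}
	}
}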
I0814 08:41:21.568227  108965 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"0f438f92-8e89-42c8-bfd5-0dbe392c698c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:41:21.568426  108965 client.go:354] parsed scheme: ""
I0814 08:41:21.568459  108965 client.go:354] scheme "" not registered, fallback to default scheme
I0814 08:41:21.568551  108965 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 08:41:21.568627  108965 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:41:21.569615  108965 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 08:41:21.570064  108965 store.go:1342] Monitoring storageclasses.storage.k8s.io count at <storage-prefix>//storageclasses
I0814 08:41:21.570360  108965 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"0f438f92-8e89-42c8-bfd5-0dbe392c698c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:41:21.570564  108965 client.go:354] parsed scheme: ""
I0814 08:41:21.570589  108965 client.go:354] scheme "" not registered, fallback to default scheme
I0814 08:41:21.570625  108965 watch_cache.go:405] Replace watchCache (rev: 35243) 
I0814 08:41:21.570679  108965 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 08:41:21.570762  108965 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:41:21.570804  108965 reflector.go:160] Listing and watching *storage.StorageClass from storage/cacher.go:/storageclasses
I0814 08:41:21.571222  108965 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:41:21.571803  108965 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 08:41:21.572139  108965 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:41:21.572257  108965 store.go:1342] Monitoring volumeattachments.storage.k8s.io count at <storage-prefix>//volumeattachments
I0814 08:41:21.572335  108965 storage_factory.go:285] storing csinodes.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"0f438f92-8e89-42c8-bfd5-0dbe392c698c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:41:21.572409  108965 reflector.go:160] Listing and watching *storage.VolumeAttachment from storage/cacher.go:/volumeattachments
I0814 08:41:21.572569  108965 client.go:354] parsed scheme: ""
I0814 08:41:21.572606  108965 client.go:354] scheme "" not registered, fallback to default scheme
I0814 08:41:21.572659  108965 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 08:41:21.572793  108965 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:41:21.574283  108965 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 08:41:21.574553  108965 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:41:21.574601  108965 store.go:1342] Monitoring csinodes.storage.k8s.io count at <storage-prefix>//csinodes
I0814 08:41:21.574672  108965 reflector.go:160] Listing and watching *storage.CSINode from storage/cacher.go:/csinodes
I0814 08:41:21.574662  108965 storage_factory.go:285] storing csidrivers.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"0f438f92-8e89-42c8-bfd5-0dbe392c698c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:41:21.574800  108965 client.go:354] parsed scheme: ""
I0814 08:41:21.574819  108965 client.go:354] scheme "" not registered, fallback to default scheme
I0814 08:41:21.574875  108965 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 08:41:21.575081  108965 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:41:21.579356  108965 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 08:41:21.579590  108965 store.go:1342] Monitoring csidrivers.storage.k8s.io count at <storage-prefix>//csidrivers
I0814 08:41:21.579768  108965 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:41:21.579903  108965 reflector.go:160] Listing and watching *storage.CSIDriver from storage/cacher.go:/csidrivers
I0814 08:41:21.579884  108965 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"0f438f92-8e89-42c8-bfd5-0dbe392c698c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:41:21.580309  108965 client.go:354] parsed scheme: ""
I0814 08:41:21.580328  108965 client.go:354] scheme "" not registered, fallback to default scheme
I0814 08:41:21.580376  108965 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 08:41:21.580476  108965 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:41:21.586373  108965 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 08:41:21.586490  108965 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:41:21.586821  108965 store.go:1342] Monitoring storageclasses.storage.k8s.io count at <storage-prefix>//storageclasses
I0814 08:41:21.587171  108965 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"0f438f92-8e89-42c8-bfd5-0dbe392c698c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:41:21.587395  108965 client.go:354] parsed scheme: ""
I0814 08:41:21.587425  108965 client.go:354] scheme "" not registered, fallback to default scheme
I0814 08:41:21.587498  108965 reflector.go:160] Listing and watching *storage.StorageClass from storage/cacher.go:/storageclasses
I0814 08:41:21.587533  108965 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 08:41:21.587802  108965 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:41:21.597135  108965 watch_cache.go:405] Replace watchCache (rev: 35246) 
I0814 08:41:21.597544  108965 watch_cache.go:405] Replace watchCache (rev: 35246) 
I0814 08:41:21.597722  108965 watch_cache.go:405] Replace watchCache (rev: 35246) 
I0814 08:41:21.599823  108965 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 08:41:21.600802  108965 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:41:21.600828  108965 watch_cache.go:405] Replace watchCache (rev: 35246) 
I0814 08:41:21.605311  108965 store.go:1342] Monitoring volumeattachments.storage.k8s.io count at <storage-prefix>//volumeattachments
I0814 08:41:21.605404  108965 master.go:434] Enabling API group "storage.k8s.io".
I0814 08:41:21.605478  108965 reflector.go:160] Listing and watching *storage.VolumeAttachment from storage/cacher.go:/volumeattachments
I0814 08:41:21.605768  108965 watch_cache.go:405] Replace watchCache (rev: 35248) 
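The segment above shows the per-resource cacher pattern that repeats throughout this startup: store.go reports "Monitoring <resource> count", a reflector performs an initial List against etcd ("Listing and watching ..."), and the cacher swaps in the fresh snapshot ("Replace watchCache (rev: N)"), after which a Watch streams increments. A minimal client-go sketch of the same list-then-watch mechanism, assuming a reachable cluster and a placeholder kubeconfig path (both hypothetical, not taken from this run):

```go
package main

import (
	"time"

	storagev1 "k8s.io/api/storage/v1"
	"k8s.io/apimachinery/pkg/fields"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/cache"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Placeholder kubeconfig path; an assumption for illustration only.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// List-then-watch StorageClasses, mirroring the
	// "Listing and watching *storage.StorageClass" reflector above.
	lw := cache.NewListWatchFromClient(
		cs.StorageV1().RESTClient(), "storageclasses", "", fields.Everything())
	store := cache.NewStore(cache.MetaNamespaceKeyFunc)
	r := cache.NewReflector(lw, &storagev1.StorageClass{}, store, 0)

	// The initial List replaces the local cache wholesale (the analogue of
	// "Replace watchCache (rev: N)"); the Watch then applies deltas.
	stop := make(chan struct{})
	go r.Run(stop)
	time.Sleep(5 * time.Second)
	close(stop)
}
```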
I0814 08:41:21.605780  108965 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"0f438f92-8e89-42c8-bfd5-0dbe392c698c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:41:21.606150  108965 client.go:354] parsed scheme: ""
I0814 08:41:21.606174  108965 client.go:354] scheme "" not registered, fallback to default scheme
I0814 08:41:21.606247  108965 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 08:41:21.606347  108965 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:41:21.607930  108965 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 08:41:21.608219  108965 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:41:21.608328  108965 store.go:1342] Monitoring deployments.apps count at <storage-prefix>//deployments
I0814 08:41:21.608415  108965 reflector.go:160] Listing and watching *apps.Deployment from storage/cacher.go:/deployments
I0814 08:41:21.608663  108965 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"0f438f92-8e89-42c8-bfd5-0dbe392c698c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:41:21.610709  108965 client.go:354] parsed scheme: ""
I0814 08:41:21.610745  108965 client.go:354] scheme "" not registered, fallback to default scheme
I0814 08:41:21.610881  108965 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 08:41:21.611003  108965 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:41:21.611798  108965 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 08:41:21.612212  108965 store.go:1342] Monitoring statefulsets.apps count at <storage-prefix>//statefulsets
I0814 08:41:21.612558  108965 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"0f438f92-8e89-42c8-bfd5-0dbe392c698c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:41:21.612708  108965 client.go:354] parsed scheme: ""
I0814 08:41:21.612729  108965 client.go:354] scheme "" not registered, fallback to default scheme
I0814 08:41:21.612803  108965 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 08:41:21.612900  108965 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:41:21.612977  108965 reflector.go:160] Listing and watching *apps.StatefulSet from storage/cacher.go:/statefulsets
I0814 08:41:21.613414  108965 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:41:21.613449  108965 watch_cache.go:405] Replace watchCache (rev: 35249) 
I0814 08:41:21.614035  108965 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 08:41:21.614085  108965 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:41:21.614393  108965 store.go:1342] Monitoring daemonsets.apps count at <storage-prefix>//daemonsets
I0814 08:41:21.614543  108965 reflector.go:160] Listing and watching *apps.DaemonSet from storage/cacher.go:/daemonsets
I0814 08:41:21.614667  108965 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"0f438f92-8e89-42c8-bfd5-0dbe392c698c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:41:21.614840  108965 client.go:354] parsed scheme: ""
I0814 08:41:21.614853  108965 client.go:354] scheme "" not registered, fallback to default scheme
I0814 08:41:21.614935  108965 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 08:41:21.615165  108965 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:41:21.615851  108965 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 08:41:21.615940  108965 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:41:21.616099  108965 store.go:1342] Monitoring replicasets.apps count at <storage-prefix>//replicasets
I0814 08:41:21.616233  108965 reflector.go:160] Listing and watching *apps.ReplicaSet from storage/cacher.go:/replicasets
I0814 08:41:21.616336  108965 storage_factory.go:285] storing controllerrevisions.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"0f438f92-8e89-42c8-bfd5-0dbe392c698c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:41:21.616525  108965 client.go:354] parsed scheme: ""
I0814 08:41:21.616537  108965 client.go:354] scheme "" not registered, fallback to default scheme
I0814 08:41:21.616599  108965 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 08:41:21.616653  108965 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:41:21.617183  108965 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 08:41:21.617239  108965 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:41:21.617542  108965 store.go:1342] Monitoring controllerrevisions.apps count at <storage-prefix>//controllerrevisions
I0814 08:41:21.617584  108965 master.go:434] Enabling API group "apps".
I0814 08:41:21.617671  108965 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"0f438f92-8e89-42c8-bfd5-0dbe392c698c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:41:21.617745  108965 reflector.go:160] Listing and watching *apps.ControllerRevision from storage/cacher.go:/controllerrevisions
I0814 08:41:21.617819  108965 client.go:354] parsed scheme: ""
I0814 08:41:21.617896  108965 client.go:354] scheme "" not registered, fallback to default scheme
I0814 08:41:21.617995  108965 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 08:41:21.618222  108965 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:41:21.619473  108965 watch_cache.go:405] Replace watchCache (rev: 35252) 
I0814 08:41:21.619680  108965 watch_cache.go:405] Replace watchCache (rev: 35252) 
I0814 08:41:21.620235  108965 watch_cache.go:405] Replace watchCache (rev: 35252) 
I0814 08:41:21.620783  108965 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 08:41:21.620845  108965 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:41:21.621283  108965 store.go:1342] Monitoring validatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//validatingwebhookconfigurations
I0814 08:41:21.621369  108965 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"0f438f92-8e89-42c8-bfd5-0dbe392c698c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:41:21.621412  108965 reflector.go:160] Listing and watching *admissionregistration.ValidatingWebhookConfiguration from storage/cacher.go:/validatingwebhookconfigurations
I0814 08:41:21.621473  108965 watch_cache.go:405] Replace watchCache (rev: 35252) 
I0814 08:41:21.621500  108965 client.go:354] parsed scheme: ""
I0814 08:41:21.621520  108965 client.go:354] scheme "" not registered, fallback to default scheme
I0814 08:41:21.621607  108965 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 08:41:21.621795  108965 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:41:21.622374  108965 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 08:41:21.622424  108965 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:41:21.622673  108965 store.go:1342] Monitoring mutatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//mutatingwebhookconfigurations
I0814 08:41:21.622750  108965 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"0f438f92-8e89-42c8-bfd5-0dbe392c698c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:41:21.622949  108965 client.go:354] parsed scheme: ""
I0814 08:41:21.622963  108965 client.go:354] scheme "" not registered, fallback to default scheme
I0814 08:41:21.623077  108965 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 08:41:21.623207  108965 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:41:21.623277  108965 reflector.go:160] Listing and watching *admissionregistration.MutatingWebhookConfiguration from storage/cacher.go:/mutatingwebhookconfigurations
I0814 08:41:21.624979  108965 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 08:41:21.625379  108965 watch_cache.go:405] Replace watchCache (rev: 35252) 
I0814 08:41:21.625503  108965 store.go:1342] Monitoring validatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//validatingwebhookconfigurations
I0814 08:41:21.626053  108965 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:41:21.626251  108965 watch_cache.go:405] Replace watchCache (rev: 35252) 
I0814 08:41:21.625965  108965 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"0f438f92-8e89-42c8-bfd5-0dbe392c698c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:41:21.626471  108965 reflector.go:160] Listing and watching *admissionregistration.ValidatingWebhookConfiguration from storage/cacher.go:/validatingwebhookconfigurations
I0814 08:41:21.626573  108965 client.go:354] parsed scheme: ""
I0814 08:41:21.626763  108965 client.go:354] scheme "" not registered, fallback to default scheme
I0814 08:41:21.627108  108965 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 08:41:21.627352  108965 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:41:21.628855  108965 watch_cache.go:405] Replace watchCache (rev: 35252) 
I0814 08:41:21.629066  108965 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 08:41:21.629513  108965 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:41:21.629643  108965 store.go:1342] Monitoring mutatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//mutatingwebhookconfigurations
I0814 08:41:21.629695  108965 master.go:434] Enabling API group "admissionregistration.k8s.io".
I0814 08:41:21.629719  108965 reflector.go:160] Listing and watching *admissionregistration.MutatingWebhookConfiguration from storage/cacher.go:/mutatingwebhookconfigurations
I0814 08:41:21.629752  108965 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"0f438f92-8e89-42c8-bfd5-0dbe392c698c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:41:21.630278  108965 client.go:354] parsed scheme: ""
I0814 08:41:21.630300  108965 client.go:354] scheme "" not registered, fallback to default scheme
I0814 08:41:21.630400  108965 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 08:41:21.630459  108965 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:41:21.631222  108965 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 08:41:21.631485  108965 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:41:21.631589  108965 store.go:1342] Monitoring events count at <storage-prefix>//events
I0814 08:41:21.631641  108965 master.go:434] Enabling API group "events.k8s.io".
I0814 08:41:21.631703  108965 reflector.go:160] Listing and watching *core.Event from storage/cacher.go:/events
I0814 08:41:21.632163  108965 storage_factory.go:285] storing tokenreviews.authentication.k8s.io in authentication.k8s.io/v1, reading as authentication.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"0f438f92-8e89-42c8-bfd5-0dbe392c698c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:41:21.632576  108965 storage_factory.go:285] storing tokenreviews.authentication.k8s.io in authentication.k8s.io/v1, reading as authentication.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"0f438f92-8e89-42c8-bfd5-0dbe392c698c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:41:21.632980  108965 storage_factory.go:285] storing localsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"0f438f92-8e89-42c8-bfd5-0dbe392c698c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:41:21.633271  108965 storage_factory.go:285] storing selfsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"0f438f92-8e89-42c8-bfd5-0dbe392c698c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:41:21.633490  108965 storage_factory.go:285] storing selfsubjectrulesreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"0f438f92-8e89-42c8-bfd5-0dbe392c698c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:41:21.633734  108965 storage_factory.go:285] storing subjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"0f438f92-8e89-42c8-bfd5-0dbe392c698c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:41:21.634000  108965 storage_factory.go:285] storing localsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"0f438f92-8e89-42c8-bfd5-0dbe392c698c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:41:21.634606  108965 storage_factory.go:285] storing selfsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"0f438f92-8e89-42c8-bfd5-0dbe392c698c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:41:21.634829  108965 storage_factory.go:285] storing selfsubjectrulesreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"0f438f92-8e89-42c8-bfd5-0dbe392c698c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:41:21.635157  108965 storage_factory.go:285] storing subjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"0f438f92-8e89-42c8-bfd5-0dbe392c698c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:41:21.636513  108965 watch_cache.go:405] Replace watchCache (rev: 35252) 
I0814 08:41:21.636983  108965 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"0f438f92-8e89-42c8-bfd5-0dbe392c698c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:41:21.637672  108965 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"0f438f92-8e89-42c8-bfd5-0dbe392c698c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:41:21.639763  108965 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"0f438f92-8e89-42c8-bfd5-0dbe392c698c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:41:21.641077  108965 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"0f438f92-8e89-42c8-bfd5-0dbe392c698c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:41:21.645688  108965 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"0f438f92-8e89-42c8-bfd5-0dbe392c698c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:41:21.646324  108965 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"0f438f92-8e89-42c8-bfd5-0dbe392c698c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:41:21.646898  108965 watch_cache.go:405] Replace watchCache (rev: 35255) 
I0814 08:41:21.649881  108965 watch_cache.go:405] Replace watchCache (rev: 35255) 
I0814 08:41:21.656798  108965 storage_factory.go:285] storing jobs.batch in batch/v1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"0f438f92-8e89-42c8-bfd5-0dbe392c698c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:41:21.657431  108965 storage_factory.go:285] storing jobs.batch in batch/v1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"0f438f92-8e89-42c8-bfd5-0dbe392c698c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:41:21.668340  108965 storage_factory.go:285] storing cronjobs.batch in batch/v1beta1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"0f438f92-8e89-42c8-bfd5-0dbe392c698c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:41:21.669879  108965 storage_factory.go:285] storing cronjobs.batch in batch/v1beta1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"0f438f92-8e89-42c8-bfd5-0dbe392c698c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0814 08:41:21.670135  108965 genericapiserver.go:390] Skipping API batch/v2alpha1 because it has no resources.
I0814 08:41:21.671976  108965 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"0f438f92-8e89-42c8-bfd5-0dbe392c698c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:41:21.672388  108965 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"0f438f92-8e89-42c8-bfd5-0dbe392c698c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:41:21.674219  108965 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"0f438f92-8e89-42c8-bfd5-0dbe392c698c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:41:21.696114  108965 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"0f438f92-8e89-42c8-bfd5-0dbe392c698c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:41:21.735617  108965 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"0f438f92-8e89-42c8-bfd5-0dbe392c698c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:41:21.738756  108965 storage_factory.go:285] storing ingresses.extensions in extensions/v1beta1, reading as extensions/__internal from storagebackend.Config{Type:"", Prefix:"0f438f92-8e89-42c8-bfd5-0dbe392c698c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:41:21.746859  108965 storage_factory.go:285] storing ingresses.extensions in extensions/v1beta1, reading as extensions/__internal from storagebackend.Config{Type:"", Prefix:"0f438f92-8e89-42c8-bfd5-0dbe392c698c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:41:21.769610  108965 storage_factory.go:285] storing networkpolicies.networking.k8s.io in networking.k8s.io/v1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"0f438f92-8e89-42c8-bfd5-0dbe392c698c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:41:21.784048  108965 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"0f438f92-8e89-42c8-bfd5-0dbe392c698c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:41:21.796777  108965 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"0f438f92-8e89-42c8-bfd5-0dbe392c698c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:41:21.798532  108965 storage_factory.go:285] storing runtimeclasses.node.k8s.io in node.k8s.io/v1beta1, reading as node.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"0f438f92-8e89-42c8-bfd5-0dbe392c698c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0814 08:41:21.798706  108965 genericapiserver.go:390] Skipping API node.k8s.io/v1alpha1 because it has no resources.
I0814 08:41:21.800654  108965 storage_factory.go:285] storing poddisruptionbudgets.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"0f438f92-8e89-42c8-bfd5-0dbe392c698c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:41:21.801805  108965 storage_factory.go:285] storing poddisruptionbudgets.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"0f438f92-8e89-42c8-bfd5-0dbe392c698c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:41:21.803052  108965 storage_factory.go:285] storing podsecuritypolicies.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"0f438f92-8e89-42c8-bfd5-0dbe392c698c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:41:21.804543  108965 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"0f438f92-8e89-42c8-bfd5-0dbe392c698c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:41:21.805627  108965 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"0f438f92-8e89-42c8-bfd5-0dbe392c698c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:41:21.806871  108965 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"0f438f92-8e89-42c8-bfd5-0dbe392c698c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:41:21.808349  108965 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"0f438f92-8e89-42c8-bfd5-0dbe392c698c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:41:21.809587  108965 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"0f438f92-8e89-42c8-bfd5-0dbe392c698c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:41:21.810632  108965 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"0f438f92-8e89-42c8-bfd5-0dbe392c698c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:41:21.811766  108965 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"0f438f92-8e89-42c8-bfd5-0dbe392c698c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:41:21.813156  108965 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"0f438f92-8e89-42c8-bfd5-0dbe392c698c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0814 08:41:21.813262  108965 genericapiserver.go:390] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
I0814 08:41:21.814469  108965 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"0f438f92-8e89-42c8-bfd5-0dbe392c698c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:41:21.815651  108965 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"0f438f92-8e89-42c8-bfd5-0dbe392c698c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0814 08:41:21.815774  108965 genericapiserver.go:390] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
I0814 08:41:21.817144  108965 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"0f438f92-8e89-42c8-bfd5-0dbe392c698c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:41:21.818278  108965 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"0f438f92-8e89-42c8-bfd5-0dbe392c698c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:41:21.818787  108965 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"0f438f92-8e89-42c8-bfd5-0dbe392c698c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:41:21.820119  108965 storage_factory.go:285] storing csidrivers.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"0f438f92-8e89-42c8-bfd5-0dbe392c698c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:41:21.821318  108965 storage_factory.go:285] storing csinodes.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"0f438f92-8e89-42c8-bfd5-0dbe392c698c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:41:21.822392  108965 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"0f438f92-8e89-42c8-bfd5-0dbe392c698c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:41:21.823684  108965 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"0f438f92-8e89-42c8-bfd5-0dbe392c698c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0814 08:41:21.823810  108965 genericapiserver.go:390] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
I0814 08:41:21.825607  108965 storage_factory.go:285] storing controllerrevisions.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"0f438f92-8e89-42c8-bfd5-0dbe392c698c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:41:21.827184  108965 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"0f438f92-8e89-42c8-bfd5-0dbe392c698c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:41:21.827902  108965 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"0f438f92-8e89-42c8-bfd5-0dbe392c698c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:41:21.829631  108965 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"0f438f92-8e89-42c8-bfd5-0dbe392c698c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:41:21.830430  108965 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"0f438f92-8e89-42c8-bfd5-0dbe392c698c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:41:21.831157  108965 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"0f438f92-8e89-42c8-bfd5-0dbe392c698c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:41:21.833008  108965 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"0f438f92-8e89-42c8-bfd5-0dbe392c698c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:41:21.833790  108965 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"0f438f92-8e89-42c8-bfd5-0dbe392c698c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:41:21.834371  108965 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"0f438f92-8e89-42c8-bfd5-0dbe392c698c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:41:21.835873  108965 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"0f438f92-8e89-42c8-bfd5-0dbe392c698c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:41:21.836351  108965 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"0f438f92-8e89-42c8-bfd5-0dbe392c698c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:41:21.836732  108965 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"0f438f92-8e89-42c8-bfd5-0dbe392c698c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0814 08:41:21.836844  108965 genericapiserver.go:390] Skipping API apps/v1beta2 because it has no resources.
W0814 08:41:21.836867  108965 genericapiserver.go:390] Skipping API apps/v1beta1 because it has no resources.
I0814 08:41:21.838002  108965 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"0f438f92-8e89-42c8-bfd5-0dbe392c698c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:41:21.838992  108965 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"0f438f92-8e89-42c8-bfd5-0dbe392c698c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:41:21.840486  108965 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"0f438f92-8e89-42c8-bfd5-0dbe392c698c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:41:21.841698  108965 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"0f438f92-8e89-42c8-bfd5-0dbe392c698c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:41:21.843657  108965 storage_factory.go:285] storing events.events.k8s.io in events.k8s.io/v1beta1, reading as events.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"0f438f92-8e89-42c8-bfd5-0dbe392c698c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
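Editor's note: the storage_factory.go lines above dump the storagebackend.Config the test apiserver builds for each resource. A minimal sketch of constructing the same config, assuming the k8s.io/apiserver storagebackend package at this commit; the raw durations 300000000000 and 60000000000 are nanoseconds, i.e. 5 minutes and 1 minute:

package main

import (
	"fmt"
	"time"

	"k8s.io/apiserver/pkg/storage/storagebackend"
)

func main() {
	// Mirrors the config printed in the log: etcd at 127.0.0.1:2379, a
	// per-test UUID prefix, paging enabled, and the default compaction
	// and count-metric poll intervals.
	cfg := storagebackend.Config{
		Prefix: "0f438f92-8e89-42c8-bfd5-0dbe392c698c",
		Transport: storagebackend.TransportConfig{
			ServerList: []string{"http://127.0.0.1:2379"},
		},
		Paging:                true,
		CompactionInterval:    5 * time.Minute, // 300000000000ns
		CountMetricPollPeriod: time.Minute,     // 60000000000ns
	}
	fmt.Printf("%#v\n", cfg)
}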
I0814 08:41:21.849729  108965 healthz.go:169] healthz check etcd failed: etcd client connection not yet established
I0814 08:41:21.849789  108965 healthz.go:169] healthz check poststarthook/bootstrap-controller failed: not finished
I0814 08:41:21.849822  108965 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 08:41:21.849847  108965 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0814 08:41:21.849870  108965 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0814 08:41:21.849891  108965 healthz.go:183] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[-]poststarthook/bootstrap-controller failed: reason withheld
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0814 08:41:21.849982  108965 httplog.go:90] GET /healthz: (431.807µs) 0 [Go-http-client/1.1 127.0.0.1:58088]
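Editor's note: each block like the one above is the body the apiserver's /healthz endpoint returns while post-start hooks are still pending: one [+] or [-] line per registered check, with failure reasons withheld from unauthorized callers. A minimal probe of that endpoint; the address is hypothetical (the integration test binds a random local port), and the verbose query parameter, which I believe forces the per-check breakdown even on success, is an assumption:

package main

import (
	"fmt"
	"io"
	"net/http"
)

func main() {
	// Hypothetical address; substitute the test server's actual port.
	resp, err := http.Get("http://127.0.0.1:8080/healthz?verbose")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	body, err := io.ReadAll(resp.Body)
	if err != nil {
		panic(err)
	}
	// Returns 200 once every check passes; until then, a non-200 status
	// with the [+]/[-] listing seen in the log above.
	fmt.Printf("status=%d\n%s", resp.StatusCode, body)
}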
I0814 08:41:21.851923  108965 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (3.638183ms) 404 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58090]
I0814 08:41:21.857989  108965 httplog.go:90] GET /api/v1/services: (3.038512ms) 200 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58090]
I0814 08:41:21.869750  108965 httplog.go:90] GET /api/v1/services: (5.488515ms) 200 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58090]
I0814 08:41:21.886129  108965 healthz.go:169] healthz check etcd failed: etcd client connection not yet established
I0814 08:41:21.886217  108965 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 08:41:21.886259  108965 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0814 08:41:21.886285  108965 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0814 08:41:21.886305  108965 healthz.go:183] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0814 08:41:21.886377  108965 httplog.go:90] GET /healthz: (527.455µs) 0 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58090]
I0814 08:41:21.887583  108965 httplog.go:90] GET /api/v1/namespaces/kube-system: (2.653587ms) 404 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58088]
I0814 08:41:21.889553  108965 httplog.go:90] GET /api/v1/services: (2.261793ms) 200 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58090]
I0814 08:41:21.889909  108965 httplog.go:90] GET /api/v1/services: (2.704685ms) 200 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58122]
I0814 08:41:21.891860  108965 httplog.go:90] POST /api/v1/namespaces: (3.375652ms) 201 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58088]
I0814 08:41:21.896965  108965 httplog.go:90] GET /api/v1/namespaces/kube-public: (3.980809ms) 404 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58122]
I0814 08:41:21.907458  108965 httplog.go:90] POST /api/v1/namespaces: (5.483446ms) 201 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58122]
I0814 08:41:21.913382  108965 httplog.go:90] GET /api/v1/namespaces/kube-node-lease: (4.557947ms) 404 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58122]
I0814 08:41:21.921686  108965 httplog.go:90] POST /api/v1/namespaces: (7.488387ms) 201 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58122]
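Editor's note: the three GET-404/POST-201 pairs above are the bootstrap controller ensuring the system namespaces (kube-system, kube-public, kube-node-lease) exist. A sketch of that get-or-create pattern with client-go; the kubeconfig wiring and the context-taking Get/Create signatures of recent client-go are assumptions, not what this exact commit ran:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// ensureNamespace mirrors the GET 404 -> POST 201 pairs in the log above:
// create the namespace only when the initial GET reports NotFound.
func ensureNamespace(ctx context.Context, client kubernetes.Interface, name string) error {
	_, err := client.CoreV1().Namespaces().Get(ctx, name, metav1.GetOptions{})
	if err == nil {
		return nil // already exists
	}
	if !apierrors.IsNotFound(err) {
		return err
	}
	ns := &corev1.Namespace{ObjectMeta: metav1.ObjectMeta{Name: name}}
	_, err = client.CoreV1().Namespaces().Create(ctx, ns, metav1.CreateOptions{})
	return err
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	for _, name := range []string{"kube-system", "kube-public", "kube-node-lease"} {
		if err := ensureNamespace(context.Background(), client, name); err != nil {
			panic(err)
		}
		fmt.Println("ensured namespace", name)
	}
}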
I0814 08:41:21.952307  108965 healthz.go:169] healthz check etcd failed: etcd client connection not yet established
I0814 08:41:21.952370  108965 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 08:41:21.952400  108965 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0814 08:41:21.952412  108965 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0814 08:41:21.952419  108965 healthz.go:183] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0814 08:41:21.952497  108965 httplog.go:90] GET /healthz: (604.52µs) 0 [Go-http-client/1.1 127.0.0.1:58122]
I0814 08:41:21.988164  108965 healthz.go:169] healthz check etcd failed: etcd client connection not yet established
I0814 08:41:21.988240  108965 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 08:41:21.988346  108965 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0814 08:41:21.988357  108965 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0814 08:41:21.988374  108965 healthz.go:183] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0814 08:41:21.988465  108965 httplog.go:90] GET /healthz: (669.311µs) 0 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58122]
I0814 08:41:22.318768  108965 client.go:354] parsed scheme: ""
I0814 08:41:22.318822  108965 client.go:354] scheme "" not registered, fallback to default scheme
I0814 08:41:22.318924  108965 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 08:41:22.319229  108965 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:41:22.320074  108965 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 08:41:22.320219  108965 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
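Editor's note: the client.go and balancer lines above are the etcd v3 client dialing 127.0.0.1:2379 over gRPC and pinning the address once the connection is up; the etcd healthz check stops failing immediately afterwards. A rough equivalent, assuming the current go.etcd.io/etcd/client/v3 module layout rather than the vendored client this commit used:

package main

import (
	"context"
	"fmt"
	"time"

	clientv3 "go.etcd.io/etcd/client/v3"
)

func main() {
	// Same endpoint the test server dials in the log above.
	cli, err := clientv3.New(clientv3.Config{
		Endpoints:   []string{"http://127.0.0.1:2379"},
		DialTimeout: 5 * time.Second,
	})
	if err != nil {
		panic(err)
	}
	defer cli.Close()

	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
	defer cancel()
	// A Status call confirms the connection is established, which is what
	// flips the apiserver's etcd healthz check from failed to ok.
	st, err := cli.Status(ctx, "http://127.0.0.1:2379")
	if err != nil {
		panic(err)
	}
	fmt.Println("etcd version:", st.Version)
}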
I0814 08:41:22.353988  108965 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 08:41:22.354083  108965 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0814 08:41:22.354121  108965 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0814 08:41:22.354135  108965 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0814 08:41:22.354242  108965 httplog.go:90] GET /healthz: (2.458313ms) 0 [Go-http-client/1.1 127.0.0.1:58122]
I0814 08:41:22.398231  108965 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 08:41:22.398294  108965 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0814 08:41:22.398316  108965 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0814 08:41:22.398329  108965 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0814 08:41:22.398421  108965 httplog.go:90] GET /healthz: (10.538406ms) 0 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58122]
I0814 08:41:22.854390  108965 httplog.go:90] GET /apis/scheduling.k8s.io/v1beta1/priorityclasses/system-node-critical: (5.633868ms) 404 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58090]
I0814 08:41:22.854433  108965 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles: (5.166147ms) 200 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58622]
I0814 08:41:22.854443  108965 httplog.go:90] GET /api/v1/namespaces/kube-system: (5.755728ms) 200 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58122]
I0814 08:41:22.862521  108965 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (7.330248ms) 200 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58090]
I0814 08:41:22.863054  108965 httplog.go:90] GET /api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication: (7.435789ms) 404 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58632]
I0814 08:41:22.863739  108965 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 08:41:22.863776  108965 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0814 08:41:22.863796  108965 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0814 08:41:22.863815  108965 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0814 08:41:22.863868  108965 httplog.go:90] GET /healthz: (9.330526ms) 0 [Go-http-client/1.1 127.0.0.1:58628]
I0814 08:41:22.867598  108965 httplog.go:90] POST /apis/scheduling.k8s.io/v1beta1/priorityclasses: (12.346641ms) 201 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58622]
I0814 08:41:22.867987  108965 storage_scheduling.go:139] created PriorityClass system-node-critical with value 2000001000
I0814 08:41:22.873058  108965 httplog.go:90] GET /apis/scheduling.k8s.io/v1beta1/priorityclasses/system-cluster-critical: (4.478477ms) 404 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58622]
I0814 08:41:22.874948  108965 httplog.go:90] POST /api/v1/namespaces/kube-system/configmaps: (9.708483ms) 201 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58632]
I0814 08:41:22.877728  108965 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-admin: (12.415168ms) 404 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58090]
I0814 08:41:22.884399  108965 httplog.go:90] POST /apis/scheduling.k8s.io/v1beta1/priorityclasses: (10.126239ms) 201 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58622]
I0814 08:41:22.884841  108965 storage_scheduling.go:139] created PriorityClass system-cluster-critical with value 2000000000
I0814 08:41:22.884921  108965 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
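Editor's note: the two POSTs above are the scheduling post-start hook creating the built-in priority classes, with the values printed by storage_scheduling.go. Expressed as API objects, assuming the k8s.io/api scheduling/v1beta1 types this group version serves:

package main

import (
	"fmt"

	schedulingv1beta1 "k8s.io/api/scheduling/v1beta1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Names and values taken directly from the log above.
	system := []schedulingv1beta1.PriorityClass{
		{
			ObjectMeta: metav1.ObjectMeta{Name: "system-node-critical"},
			Value:      2000001000,
		},
		{
			ObjectMeta: metav1.ObjectMeta{Name: "system-cluster-critical"},
			Value:      2000000000,
		},
	}
	for _, pc := range system {
		fmt.Printf("%s=%d\n", pc.Name, pc.Value)
	}
}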
I0814 08:41:22.886073  108965 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/admin: (6.846304ms) 404 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58632]
I0814 08:41:22.893432  108965 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-edit: (6.091296ms) 404 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58622]
I0814 08:41:22.893437  108965 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 08:41:22.893491  108965 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 08:41:22.893563  108965 httplog.go:90] GET /healthz: (6.177461ms) 0 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58628]
I0814 08:41:22.898101  108965 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/edit: (3.3425ms) 404 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58628]
I0814 08:41:22.901334  108965 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-view: (2.208835ms) 404 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58628]
I0814 08:41:22.904949  108965 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/view: (2.968098ms) 404 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58628]
I0814 08:41:22.908333  108965 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:discovery: (2.347694ms) 404 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58628]
I0814 08:41:22.913509  108965 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/cluster-admin: (3.66662ms) 404 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58628]
I0814 08:41:22.937396  108965 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (22.559687ms) 201 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58628]
I0814 08:41:22.941284  108965 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/cluster-admin
I0814 08:41:22.948664  108965 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:discovery: (6.314034ms) 404 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58628]
I0814 08:41:22.958521  108965 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 08:41:22.958591  108965 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 08:41:22.958687  108965 httplog.go:90] GET /healthz: (3.355719ms) 0 [Go-http-client/1.1 127.0.0.1:58622]
I0814 08:41:22.961990  108965 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (11.775306ms) 201 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58628]
I0814 08:41:22.962879  108965 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:discovery
I0814 08:41:22.983793  108965 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:basic-user: (20.209535ms) 404 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58628]
I0814 08:41:22.991564  108965 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 08:41:22.991645  108965 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 08:41:22.991784  108965 httplog.go:90] GET /healthz: (4.31066ms) 0 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58622]
I0814 08:41:23.003743  108965 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (16.052108ms) 201 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58628]
I0814 08:41:23.004228  108965 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:basic-user
I0814 08:41:23.006873  108965 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:public-info-viewer: (2.147462ms) 404 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58628]
I0814 08:41:23.024039  108965 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (16.063662ms) 201 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58628]
I0814 08:41:23.024643  108965 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:public-info-viewer
I0814 08:41:23.030565  108965 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/admin: (3.943754ms) 404 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58628]
I0814 08:41:23.043407  108965 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (7.430671ms) 201 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58628]
I0814 08:41:23.043935  108965 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/admin
I0814 08:41:23.048728  108965 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/edit: (4.179313ms) 404 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58628]
I0814 08:41:23.055639  108965 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 08:41:23.055695  108965 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 08:41:23.055745  108965 httplog.go:90] GET /healthz: (1.942765ms) 0 [Go-http-client/1.1 127.0.0.1:58622]
I0814 08:41:23.063747  108965 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (13.831959ms) 201 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58628]
I0814 08:41:23.064296  108965 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/edit
I0814 08:41:23.068400  108965 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/view: (3.535021ms) 404 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58628]
I0814 08:41:23.082863  108965 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (13.393233ms) 201 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58628]
I0814 08:41:23.083456  108965 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/view
I0814 08:41:23.088385  108965 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-admin: (4.375119ms) 404 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58628]
I0814 08:41:23.090674  108965 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 08:41:23.090772  108965 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 08:41:23.090829  108965 httplog.go:90] GET /healthz: (3.373815ms) 0 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58622]
I0814 08:41:23.100825  108965 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (10.259075ms) 201 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58628]
I0814 08:41:23.102586  108965 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-admin
I0814 08:41:23.105360  108965 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-edit: (2.28176ms) 404 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58628]
I0814 08:41:23.112639  108965 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (5.344274ms) 201 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58628]
I0814 08:41:23.113710  108965 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-edit
I0814 08:41:23.118943  108965 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-view: (4.754028ms) 404 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58628]
I0814 08:41:23.123712  108965 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.35523ms) 201 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58628]
I0814 08:41:23.124852  108965 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-view
I0814 08:41:23.127033  108965 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:heapster: (1.869834ms) 404 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58628]
I0814 08:41:23.131412  108965 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (3.506116ms) 201 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58628]
I0814 08:41:23.131913  108965 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:heapster
I0814 08:41:23.135444  108965 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node: (2.415126ms) 404 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58628]
I0814 08:41:23.143497  108965 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (5.747232ms) 201 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58628]
I0814 08:41:23.144513  108965 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:node
I0814 08:41:23.153684  108965 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-problem-detector: (8.127149ms) 404 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58628]
I0814 08:41:23.160580  108965 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 08:41:23.160658  108965 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 08:41:23.160757  108965 httplog.go:90] GET /healthz: (9.137179ms) 0 [Go-http-client/1.1 127.0.0.1:58622]
I0814 08:41:23.161166  108965 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (4.61728ms) 201 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58628]
I0814 08:41:23.161584  108965 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:node-problem-detector
I0814 08:41:23.174938  108965 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-proxier: (12.99849ms) 404 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58628]
I0814 08:41:23.181271  108965 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (5.234541ms) 201 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58628]
I0814 08:41:23.186058  108965 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:node-proxier
I0814 08:41:23.191073  108965 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 08:41:23.191151  108965 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 08:41:23.191231  108965 httplog.go:90] GET /healthz: (3.463111ms) 0 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58622]
I0814 08:41:23.191522  108965 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kubelet-api-admin: (4.741722ms) 404 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58628]
I0814 08:41:23.198178  108965 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (3.941037ms) 201 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58628]
I0814 08:41:23.198605  108965 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kubelet-api-admin
I0814 08:41:23.202348  108965 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-bootstrapper: (2.980747ms) 404 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58628]
I0814 08:41:23.217308  108965 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (13.639948ms) 201 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58628]
I0814 08:41:23.217858  108965 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:node-bootstrapper
I0814 08:41:23.233732  108965 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:auth-delegator: (15.029233ms) 404 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58628]
I0814 08:41:23.244231  108965 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (9.070794ms) 201 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58628]
I0814 08:41:23.244755  108965 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:auth-delegator
I0814 08:41:23.248384  108965 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-aggregator: (3.173508ms) 404 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58628]
I0814 08:41:23.256354  108965 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 08:41:23.256415  108965 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 08:41:23.256504  108965 httplog.go:90] GET /healthz: (2.860847ms) 0 [Go-http-client/1.1 127.0.0.1:58622]
I0814 08:41:23.257456  108965 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (8.116309ms) 201 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58628]
I0814 08:41:23.257670  108965 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kube-aggregator
I0814 08:41:23.259377  108965 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-controller-manager: (1.480545ms) 404 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58628]
I0814 08:41:23.268420  108965 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (3.595999ms) 201 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58628]
I0814 08:41:23.268719  108965 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kube-controller-manager
I0814 08:41:23.269945  108965 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-dns: (1.032946ms) 404 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58628]
I0814 08:41:23.273215  108965 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.480426ms) 201 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58628]
I0814 08:41:23.273648  108965 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kube-dns
I0814 08:41:23.275218  108965 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:persistent-volume-provisioner: (1.252827ms) 404 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58628]
I0814 08:41:23.278654  108965 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.87032ms) 201 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58628]
I0814 08:41:23.278905  108965 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:persistent-volume-provisioner
I0814 08:41:23.280042  108965 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:csi-external-attacher: (942.657µs) 404 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58628]
I0814 08:41:23.296333  108965 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 08:41:23.296385  108965 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 08:41:23.296456  108965 httplog.go:90] GET /healthz: (8.84331ms) 0 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58622]
I0814 08:41:23.298080  108965 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (16.995058ms) 201 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58628]
I0814 08:41:23.309543  108965 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:csi-external-attacher
I0814 08:41:23.311979  108965 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:certificates.k8s.io:certificatesigningrequests:nodeclient: (2.137621ms) 404 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58628]
I0814 08:41:23.319846  108965 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (6.902885ms) 201 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58628]
I0814 08:41:23.320332  108965 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:certificates.k8s.io:certificatesigningrequests:nodeclient
I0814 08:41:23.322439  108965 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:certificates.k8s.io:certificatesigningrequests:selfnodeclient: (1.72047ms) 404 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58628]
I0814 08:41:23.326761  108965 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (3.744628ms) 201 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58628]
I0814 08:41:23.327340  108965 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:certificates.k8s.io:certificatesigningrequests:selfnodeclient
I0814 08:41:23.331249  108965 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:volume-scheduler: (3.641007ms) 404 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58628]
I0814 08:41:23.334714  108965 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.87364ms) 201 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58628]
I0814 08:41:23.335100  108965 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:volume-scheduler
I0814 08:41:23.338789  108965 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-scheduler: (3.342712ms) 404 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58628]
I0814 08:41:23.341674  108965 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.274091ms) 201 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58628]
I0814 08:41:23.343144  108965 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kube-scheduler
I0814 08:41:23.345117  108965 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:csi-external-provisioner: (1.529337ms) 404 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58628]
I0814 08:41:23.348562  108965 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.756793ms) 201 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58628]
I0814 08:41:23.348965  108965 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:csi-external-provisioner
I0814 08:41:23.350876  108965 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:attachdetach-controller: (1.66708ms) 404 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58628]
I0814 08:41:23.353063  108965 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 08:41:23.353090  108965 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 08:41:23.353121  108965 httplog.go:90] GET /healthz: (1.362613ms) 0 [Go-http-client/1.1 127.0.0.1:58622]
I0814 08:41:23.354341  108965 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.975496ms) 201 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58628]
I0814 08:41:23.354702  108965 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:attachdetach-controller
I0814 08:41:23.357617  108965 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:clusterrole-aggregation-controller: (2.755564ms) 404 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58628]
I0814 08:41:23.359709  108965 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.720683ms) 201 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58628]
I0814 08:41:23.361079  108965 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:clusterrole-aggregation-controller
I0814 08:41:23.363136  108965 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:cronjob-controller: (1.481445ms) 404 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58628]
I0814 08:41:23.366028  108965 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.360301ms) 201 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58628]
I0814 08:41:23.366498  108965 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:cronjob-controller
I0814 08:41:23.368702  108965 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:daemon-set-controller: (1.9922ms) 404 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58628]
I0814 08:41:23.370922  108965 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.787835ms) 201 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58628]
I0814 08:41:23.371205  108965 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:daemon-set-controller
I0814 08:41:23.372519  108965 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:deployment-controller: (1.092743ms) 404 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58628]
I0814 08:41:23.377979  108965 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (5.007566ms) 201 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58628]
I0814 08:41:23.378303  108965 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:deployment-controller
I0814 08:41:23.388332  108965 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 08:41:23.388382  108965 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 08:41:23.388434  108965 httplog.go:90] GET /healthz: (1.163815ms) 0 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58622]
I0814 08:41:23.389147  108965 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:disruption-controller: (10.630501ms) 404 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58628]
I0814 08:41:23.394575  108965 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (3.927014ms) 201 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58628]
I0814 08:41:23.395130  108965 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:disruption-controller
I0814 08:41:23.397158  108965 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:endpoint-controller: (1.66917ms) 404 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58628]
I0814 08:41:23.400431  108965 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.316232ms) 201 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58628]
I0814 08:41:23.400635  108965 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:endpoint-controller
I0814 08:41:23.411688  108965 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:expand-controller: (10.845526ms) 404 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58628]
I0814 08:41:23.416788  108965 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.494192ms) 201 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58628]
I0814 08:41:23.417155  108965 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:expand-controller
I0814 08:41:23.418541  108965 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:generic-garbage-collector: (1.088524ms) 404 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58628]
I0814 08:41:23.424686  108965 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (5.515036ms) 201 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58628]
I0814 08:41:23.424968  108965 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:generic-garbage-collector
I0814 08:41:23.427074  108965 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:horizontal-pod-autoscaler: (1.851826ms) 404 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58628]
I0814 08:41:23.429580  108965 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.794846ms) 201 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58628]
I0814 08:41:23.430364  108965 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:horizontal-pod-autoscaler
I0814 08:41:23.431849  108965 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:job-controller: (1.299737ms) 404 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58628]
I0814 08:41:23.438361  108965 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (5.965167ms) 201 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58628]
I0814 08:41:23.438794  108965 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:job-controller
I0814 08:41:23.440370  108965 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:namespace-controller: (1.222092ms) 404 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58628]
I0814 08:41:23.445470  108965 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (4.651763ms) 201 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58628]
I0814 08:41:23.445769  108965 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:namespace-controller
I0814 08:41:23.448851  108965 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:node-controller: (2.712155ms) 404 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58628]
I0814 08:41:23.453681  108965 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 08:41:23.453721  108965 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 08:41:23.453766  108965 httplog.go:90] GET /healthz: (2.453277ms) 0 [Go-http-client/1.1 127.0.0.1:58622]
I0814 08:41:23.454081  108965 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (4.388201ms) 201 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58628]
I0814 08:41:23.454292  108965 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:node-controller
I0814 08:41:23.455988  108965 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:persistent-volume-binder: (1.459401ms) 404 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58628]
I0814 08:41:23.460616  108965 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (3.865997ms) 201 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58628]
I0814 08:41:23.460865  108965 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:persistent-volume-binder
I0814 08:41:23.462173  108965 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pod-garbage-collector: (1.004263ms) 404 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58628]
I0814 08:41:23.465616  108965 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.886034ms) 201 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58628]
I0814 08:41:23.465905  108965 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:pod-garbage-collector
I0814 08:41:23.471720  108965 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:replicaset-controller: (5.580663ms) 404 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58628]
I0814 08:41:23.480528  108965 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (3.158683ms) 201 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58628]
I0814 08:41:23.480825  108965 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:replicaset-controller
I0814 08:41:23.482411  108965 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:replication-controller: (1.315598ms) 404 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58628]
I0814 08:41:23.487240  108965 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (4.212306ms) 201 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58628]
I0814 08:41:23.487576  108965 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:replication-controller
I0814 08:41:23.488766  108965 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 08:41:23.488803  108965 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 08:41:23.489110  108965 httplog.go:90] GET /healthz: (1.917628ms) 0 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58622]
I0814 08:41:23.490890  108965 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:resourcequota-controller: (2.617854ms) 404 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58628]
I0814 08:41:23.501416  108965 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.669928ms) 201 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58628]
I0814 08:41:23.501671  108965 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:resourcequota-controller
I0814 08:41:23.503610  108965 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:route-controller: (1.686105ms) 404 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58628]
I0814 08:41:23.506771  108965 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.439557ms) 201 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58628]
I0814 08:41:23.506984  108965 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:route-controller
I0814 08:41:23.509535  108965 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:service-account-controller: (1.762184ms) 404 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58628]
I0814 08:41:23.511940  108965 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.736711ms) 201 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58628]
I0814 08:41:23.512215  108965 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:service-account-controller
I0814 08:41:23.513608  108965 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:service-controller: (1.116952ms) 404 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58628]
I0814 08:41:23.518893  108965 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (4.480184ms) 201 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58628]
I0814 08:41:23.519306  108965 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:service-controller
I0814 08:41:23.520568  108965 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:statefulset-controller: (1.023657ms) 404 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58628]
I0814 08:41:23.522541  108965 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.543314ms) 201 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58628]
I0814 08:41:23.522872  108965 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:statefulset-controller
I0814 08:41:23.524416  108965 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:ttl-controller: (1.328692ms) 404 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58628]
I0814 08:41:23.539046  108965 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (14.164154ms) 201 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58628]
I0814 08:41:23.539867  108965 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:ttl-controller
I0814 08:41:23.545407  108965 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:certificate-controller: (5.132504ms) 404 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58628]
I0814 08:41:23.552772  108965 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 08:41:23.552811  108965 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 08:41:23.552862  108965 httplog.go:90] GET /healthz: (1.429257ms) 0 [Go-http-client/1.1 127.0.0.1:58622]
I0814 08:41:23.554659  108965 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (8.483167ms) 201 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58628]
I0814 08:41:23.555075  108965 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:certificate-controller
I0814 08:41:23.557700  108965 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pvc-protection-controller: (1.128102ms) 404 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58628]
I0814 08:41:23.574208  108965 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (14.037702ms) 201 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58628]
I0814 08:41:23.575704  108965 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:pvc-protection-controller
I0814 08:41:23.577768  108965 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pv-protection-controller: (1.42984ms) 404 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58628]
I0814 08:41:23.581268  108965 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.763067ms) 201 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58628]
I0814 08:41:23.581444  108965 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:pv-protection-controller
I0814 08:41:23.582528  108965 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/cluster-admin: (899.385µs) 404 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58628]
I0814 08:41:23.587249  108965 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.973059ms) 201 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58628]
I0814 08:41:23.587631  108965 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/cluster-admin
I0814 08:41:23.589967  108965 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 08:41:23.589997  108965 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 08:41:23.590168  108965 httplog.go:90] GET /healthz: (3.039668ms) 0 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58622]
I0814 08:41:23.590381  108965 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:discovery: (2.520061ms) 404 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58628]
I0814 08:41:23.593728  108965 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.986786ms) 201 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58628]
I0814 08:41:23.594410  108965 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:discovery
I0814 08:41:23.595589  108965 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:basic-user: (939.135µs) 404 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58628]
I0814 08:41:23.599122  108965 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.327477ms) 201 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58628]
I0814 08:41:23.599434  108965 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:basic-user
I0814 08:41:23.606711  108965 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:public-info-viewer: (7.013251ms) 404 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58628]
I0814 08:41:23.610455  108965 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.880446ms) 201 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58628]
I0814 08:41:23.610733  108965 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:public-info-viewer
I0814 08:41:23.619134  108965 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:node-proxier: (8.019543ms) 404 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58628]
I0814 08:41:23.621757  108965 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.929666ms) 201 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58628]
I0814 08:41:23.622272  108965 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:node-proxier
I0814 08:41:23.623619  108965 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-controller-manager: (978.717µs) 404 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58628]
I0814 08:41:23.629159  108965 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (4.824061ms) 201 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58628]
I0814 08:41:23.629408  108965 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-controller-manager
I0814 08:41:23.631238  108965 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-dns: (1.534215ms) 404 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58628]
I0814 08:41:23.634318  108965 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.118781ms) 201 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58628]
I0814 08:41:23.634789  108965 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-dns
I0814 08:41:23.636300  108965 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-scheduler: (1.011846ms) 404 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58628]
I0814 08:41:23.638587  108965 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.762918ms) 201 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58628]
I0814 08:41:23.638779  108965 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-scheduler
I0814 08:41:23.640303  108965 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:volume-scheduler: (1.343994ms) 404 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58628]
I0814 08:41:23.643886  108965 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.942867ms) 201 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58628]
I0814 08:41:23.644391  108965 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:volume-scheduler
I0814 08:41:23.645978  108965 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:node: (1.345017ms) 404 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58628]
I0814 08:41:23.649723  108965 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.206256ms) 201 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58628]
I0814 08:41:23.650048  108965 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:node
I0814 08:41:23.652454  108965 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:attachdetach-controller: (2.063279ms) 404 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58628]
I0814 08:41:23.653503  108965 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 08:41:23.653538  108965 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 08:41:23.653580  108965 httplog.go:90] GET /healthz: (2.285166ms) 0 [Go-http-client/1.1 127.0.0.1:58622]
I0814 08:41:23.668317  108965 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (15.084195ms) 201 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58628]
I0814 08:41:23.668776  108965 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:attachdetach-controller
I0814 08:41:23.689009  108965 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 08:41:23.689133  108965 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 08:41:23.689183  108965 httplog.go:90] GET /healthz: (1.304523ms) 0 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58622]
I0814 08:41:23.689752  108965 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:clusterrole-aggregation-controller: (11.673082ms) 404 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58628]
I0814 08:41:23.692845  108965 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.46488ms) 201 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58628]
I0814 08:41:23.693115  108965 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:clusterrole-aggregation-controller
I0814 08:41:23.695843  108965 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:cronjob-controller: (1.394041ms) 404 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58628]
I0814 08:41:23.699942  108965 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.507234ms) 201 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58628]
I0814 08:41:23.700359  108965 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:cronjob-controller
I0814 08:41:23.703428  108965 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:daemon-set-controller: (2.654647ms) 404 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58628]
I0814 08:41:23.711645  108965 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.715787ms) 201 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58628]
I0814 08:41:23.711866  108965 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:daemon-set-controller
I0814 08:41:23.731067  108965 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:deployment-controller: (2.333513ms) 404 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58628]
I0814 08:41:23.751613  108965 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.798992ms) 201 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58628]
I0814 08:41:23.751967  108965 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:deployment-controller
I0814 08:41:23.753418  108965 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 08:41:23.753452  108965 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 08:41:23.753506  108965 httplog.go:90] GET /healthz: (1.414056ms) 0 [Go-http-client/1.1 127.0.0.1:58622]
I0814 08:41:23.770872  108965 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:disruption-controller: (2.150922ms) 404 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58622]
I0814 08:41:23.788802  108965 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 08:41:23.788867  108965 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 08:41:23.788916  108965 httplog.go:90] GET /healthz: (1.556058ms) 0 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58622]
I0814 08:41:23.792789  108965 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (4.086408ms) 201 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58628]
I0814 08:41:23.793171  108965 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:disruption-controller
I0814 08:41:23.812447  108965 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:endpoint-controller: (2.021123ms) 404 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58628]
I0814 08:41:23.833388  108965 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (4.621046ms) 201 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58628]
I0814 08:41:23.833825  108965 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:endpoint-controller
I0814 08:41:23.851087  108965 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:expand-controller: (2.255063ms) 404 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58628]
I0814 08:41:23.853125  108965 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 08:41:23.853182  108965 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 08:41:23.853240  108965 httplog.go:90] GET /healthz: (1.379583ms) 0 [Go-http-client/1.1 127.0.0.1:58628]
I0814 08:41:23.870962  108965 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.176329ms) 201 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58628]
I0814 08:41:23.871580  108965 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:expand-controller
I0814 08:41:23.888940  108965 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 08:41:23.888978  108965 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 08:41:23.889063  108965 httplog.go:90] GET /healthz: (1.601531ms) 0 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58628]
I0814 08:41:23.891104  108965 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:generic-garbage-collector: (1.333744ms) 404 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58622]
I0814 08:41:23.911864  108965 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.013363ms) 201 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58622]
I0814 08:41:23.912359  108965 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:generic-garbage-collector
I0814 08:41:23.929986  108965 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:horizontal-pod-autoscaler: (1.301881ms) 404 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58622]
I0814 08:41:23.951681  108965 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.879725ms) 201 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58622]
I0814 08:41:23.951980  108965 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:horizontal-pod-autoscaler
I0814 08:41:23.952927  108965 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 08:41:23.952957  108965 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 08:41:23.953000  108965 httplog.go:90] GET /healthz: (1.012107ms) 0 [Go-http-client/1.1 127.0.0.1:58622]
I0814 08:41:23.970668  108965 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:job-controller: (1.911593ms) 404 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58622]
I0814 08:41:23.988320  108965 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 08:41:23.988361  108965 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 08:41:23.988450  108965 httplog.go:90] GET /healthz: (1.204257ms) 0 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58622]
I0814 08:41:23.990874  108965 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.982447ms) 201 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58622]
I0814 08:41:23.991270  108965 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:job-controller
I0814 08:41:24.010472  108965 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:namespace-controller: (1.581523ms) 404 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58622]
I0814 08:41:24.030755  108965 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.001399ms) 201 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58622]
I0814 08:41:24.031046  108965 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:namespace-controller
I0814 08:41:24.050231  108965 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:node-controller: (1.517282ms) 404 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58622]
I0814 08:41:24.052310  108965 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 08:41:24.052339  108965 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 08:41:24.052382  108965 httplog.go:90] GET /healthz: (1.087411ms) 0 [Go-http-client/1.1 127.0.0.1:58622]
I0814 08:41:24.073835  108965 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.627201ms) 201 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58622]
I0814 08:41:24.074113  108965 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:node-controller
I0814 08:41:24.088847  108965 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 08:41:24.088880  108965 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 08:41:24.088921  108965 httplog.go:90] GET /healthz: (1.496778ms) 0 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58622]
I0814 08:41:24.090346  108965 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:persistent-volume-binder: (1.556378ms) 404 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58628]
I0814 08:41:24.111475  108965 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.523932ms) 201 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58628]
I0814 08:41:24.111784  108965 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:persistent-volume-binder
I0814 08:41:24.131987  108965 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pod-garbage-collector: (3.19454ms) 404 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58628]
I0814 08:41:24.152139  108965 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.399133ms) 201 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58628]
I0814 08:41:24.152810  108965 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pod-garbage-collector
I0814 08:41:24.153476  108965 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 08:41:24.153504  108965 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 08:41:24.153546  108965 httplog.go:90] GET /healthz: (2.185496ms) 0 [Go-http-client/1.1 127.0.0.1:58622]
I0814 08:41:24.170302  108965 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:replicaset-controller: (1.601695ms) 404 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58622]
I0814 08:41:24.189111  108965 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 08:41:24.189151  108965 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 08:41:24.189200  108965 httplog.go:90] GET /healthz: (1.897398ms) 0 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58622]
I0814 08:41:24.191220  108965 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.318682ms) 201 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58628]
I0814 08:41:24.191438  108965 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:replicaset-controller
I0814 08:41:24.210837  108965 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:replication-controller: (2.093297ms) 404 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58628]
I0814 08:41:24.232689  108965 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.883843ms) 201 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58628]
I0814 08:41:24.233069  108965 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:replication-controller
I0814 08:41:24.257224  108965 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:resourcequota-controller: (8.472868ms) 404 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58628]
I0814 08:41:24.258523  108965 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 08:41:24.258560  108965 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 08:41:24.258608  108965 httplog.go:90] GET /healthz: (7.234356ms) 0 [Go-http-client/1.1 127.0.0.1:58622]
I0814 08:41:24.272171  108965 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.419245ms) 201 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58622]
I0814 08:41:24.272542  108965 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:resourcequota-controller
I0814 08:41:24.290793  108965 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 08:41:24.290838  108965 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 08:41:24.290907  108965 httplog.go:90] GET /healthz: (3.464891ms) 0 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58622]
I0814 08:41:24.290923  108965 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:route-controller: (2.341343ms) 404 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58628]
I0814 08:41:24.327607  108965 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (18.801802ms) 201 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58628]
I0814 08:41:24.327960  108965 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:route-controller
I0814 08:41:24.338604  108965 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:service-account-controller: (9.958303ms) 404 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58628]
I0814 08:41:24.350949  108965 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.193102ms) 201 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58628]
I0814 08:41:24.351786  108965 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:service-account-controller
I0814 08:41:24.352349  108965 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 08:41:24.352375  108965 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 08:41:24.352410  108965 httplog.go:90] GET /healthz: (948.841µs) 0 [Go-http-client/1.1 127.0.0.1:58622]
I0814 08:41:24.372891  108965 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:service-controller: (1.509726ms) 404 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58622]
I0814 08:41:24.389403  108965 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 08:41:24.389438  108965 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 08:41:24.389476  108965 httplog.go:90] GET /healthz: (2.125821ms) 0 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58622]
I0814 08:41:24.394634  108965 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (5.867107ms) 201 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58628]
I0814 08:41:24.395300  108965 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:service-controller
I0814 08:41:24.410441  108965 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:statefulset-controller: (1.67719ms) 404 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58628]
I0814 08:41:24.430799  108965 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.042911ms) 201 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58628]
I0814 08:41:24.431280  108965 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:statefulset-controller
I0814 08:41:24.450772  108965 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:ttl-controller: (2.005753ms) 404 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58628]
I0814 08:41:24.453101  108965 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 08:41:24.453265  108965 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 08:41:24.453458  108965 httplog.go:90] GET /healthz: (1.635004ms) 0 [Go-http-client/1.1 127.0.0.1:58628]
I0814 08:41:24.471491  108965 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.659938ms) 201 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58628]
I0814 08:41:24.471874  108965 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:ttl-controller
I0814 08:41:24.488870  108965 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 08:41:24.488904  108965 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 08:41:24.488943  108965 httplog.go:90] GET /healthz: (1.504786ms) 0 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58628]
I0814 08:41:24.490417  108965 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:certificate-controller: (1.604477ms) 404 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58622]
I0814 08:41:24.511318  108965 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.411825ms) 201 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58622]
I0814 08:41:24.511626  108965 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:certificate-controller
I0814 08:41:24.531092  108965 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pvc-protection-controller: (2.36405ms) 404 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58622]
I0814 08:41:24.551631  108965 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.804565ms) 201 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58622]
I0814 08:41:24.552482  108965 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pvc-protection-controller
I0814 08:41:24.553456  108965 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 08:41:24.553507  108965 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 08:41:24.553555  108965 httplog.go:90] GET /healthz: (1.645941ms) 0 [Go-http-client/1.1 127.0.0.1:58628]
I0814 08:41:24.570746  108965 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pv-protection-controller: (1.874727ms) 404 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58628]
I0814 08:41:24.588426  108965 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 08:41:24.588476  108965 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 08:41:24.588524  108965 httplog.go:90] GET /healthz: (1.155093ms) 0 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58628]
I0814 08:41:24.591335  108965 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.721268ms) 201 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58622]
I0814 08:41:24.591684  108965 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pv-protection-controller
I0814 08:41:24.611803  108965 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/extension-apiserver-authentication-reader: (2.905818ms) 404 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58622]
I0814 08:41:24.615130  108965 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.963074ms) 200 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58622]
I0814 08:41:24.642366  108965 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (10.678452ms) 201 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58622]
I0814 08:41:24.642779  108965 storage_rbac.go:278] created role.rbac.authorization.k8s.io/extension-apiserver-authentication-reader in kube-system
I0814 08:41:24.654277  108965 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:bootstrap-signer: (5.597607ms) 404 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58622]
I0814 08:41:24.656503  108965 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 08:41:24.656548  108965 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 08:41:24.656605  108965 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.721011ms) 200 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58622]
I0814 08:41:24.656607  108965 httplog.go:90] GET /healthz: (5.1506ms) 0 [Go-http-client/1.1 127.0.0.1:58628]
I0814 08:41:24.671530  108965 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.762322ms) 201 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58622]
I0814 08:41:24.671909  108965 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-system
I0814 08:41:24.688601  108965 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 08:41:24.688649  108965 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 08:41:24.688725  108965 httplog.go:90] GET /healthz: (1.214602ms) 0 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58622]
I0814 08:41:24.690116  108965 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:cloud-provider: (1.366791ms) 404 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58628]
I0814 08:41:24.692410  108965 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.766742ms) 200 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58628]
I0814 08:41:24.711711  108965 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.738759ms) 201 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58628]
I0814 08:41:24.712161  108965 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system:controller:cloud-provider in kube-system
I0814 08:41:24.730545  108965 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:token-cleaner: (1.730555ms) 404 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58628]
I0814 08:41:24.733475  108965 httplog.go:90] GET /api/v1/namespaces/kube-system: (2.259841ms) 200 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58628]
I0814 08:41:24.753610  108965 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 08:41:24.753667  108965 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 08:41:24.753748  108965 httplog.go:90] GET /healthz: (1.437921ms) 0 [Go-http-client/1.1 127.0.0.1:58622]
I0814 08:41:24.755593  108965 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (6.629381ms) 201 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58628]
I0814 08:41:24.755969  108965 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system:controller:token-cleaner in kube-system
I0814 08:41:24.779196  108965 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system::leader-locking-kube-controller-manager: (3.446189ms) 404 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58628]
I0814 08:41:24.784772  108965 httplog.go:90] GET /api/v1/namespaces/kube-system: (2.612031ms) 200 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58628]
I0814 08:41:24.789247  108965 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 08:41:24.789293  108965 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 08:41:24.789342  108965 httplog.go:90] GET /healthz: (1.862353ms) 0 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58628]
I0814 08:41:24.792642  108965 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (3.822059ms) 201 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58622]
I0814 08:41:24.793141  108965 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system::leader-locking-kube-controller-manager in kube-system
I0814 08:41:24.811165  108965 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system::leader-locking-kube-scheduler: (2.253706ms) 404 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58622]
I0814 08:41:24.815675  108965 httplog.go:90] GET /api/v1/namespaces/kube-system: (3.547895ms) 200 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58622]
I0814 08:41:24.831544  108965 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.733976ms) 201 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58622]
I0814 08:41:24.831899  108965 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system::leader-locking-kube-scheduler in kube-system
I0814 08:41:24.850953  108965 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/roles/system:controller:bootstrap-signer: (2.176864ms) 404 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58622]
I0814 08:41:24.852959  108965 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 08:41:24.852974  108965 httplog.go:90] GET /api/v1/namespaces/kube-public: (1.390859ms) 200 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58622]
I0814 08:41:24.853033  108965 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 08:41:24.853082  108965 httplog.go:90] GET /healthz: (1.714762ms) 0 [Go-http-client/1.1 127.0.0.1:58628]
I0814 08:41:24.877751  108965 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/roles: (8.942635ms) 201 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58628]
I0814 08:41:24.878151  108965 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-public
I0814 08:41:24.889817  108965 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 08:41:24.889863  108965 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 08:41:24.889914  108965 httplog.go:90] GET /healthz: (2.478957ms) 0 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58628]
I0814 08:41:24.891456  108965 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::extension-apiserver-authentication-reader: (2.665988ms) 404 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58622]
I0814 08:41:24.894906  108965 httplog.go:90] GET /api/v1/namespaces/kube-system: (2.448669ms) 200 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58622]
I0814 08:41:24.912429  108965 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (3.294998ms) 201 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58622]
I0814 08:41:24.912765  108965 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system::extension-apiserver-authentication-reader in kube-system
I0814 08:41:24.931356  108965 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::leader-locking-kube-controller-manager: (2.524011ms) 404 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58622]
I0814 08:41:24.935082  108965 httplog.go:90] GET /api/v1/namespaces/kube-system: (2.74659ms) 200 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58622]
I0814 08:41:24.952154  108965 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.655069ms) 201 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58622]
I0814 08:41:24.952617  108965 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 08:41:24.952722  108965 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 08:41:24.952864  108965 httplog.go:90] GET /healthz: (1.574617ms) 0 [Go-http-client/1.1 127.0.0.1:58628]
I0814 08:41:24.952632  108965 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system::leader-locking-kube-controller-manager in kube-system
I0814 08:41:24.973217  108965 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::leader-locking-kube-scheduler: (3.909275ms) 404 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58628]
I0814 08:41:24.992221  108965 httplog.go:90] GET /api/v1/namespaces/kube-system: (13.354953ms) 200 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58628]
I0814 08:41:24.999462  108965 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (5.255813ms) 201 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58628]
I0814 08:41:24.999836  108965 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system::leader-locking-kube-scheduler in kube-system
I0814 08:41:25.001414  108965 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 08:41:25.001461  108965 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 08:41:25.001545  108965 httplog.go:90] GET /healthz: (12.660873ms) 0 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58622]
I0814 08:41:25.014935  108965 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:bootstrap-signer: (3.841678ms) 404 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58622]
I0814 08:41:25.024595  108965 httplog.go:90] GET /api/v1/namespaces/kube-system: (8.404572ms) 200 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58622]
I0814 08:41:25.035218  108965 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (6.400876ms) 201 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58622]
I0814 08:41:25.035644  108965 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-system
I0814 08:41:25.051239  108965 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:cloud-provider: (2.426651ms) 404 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58622]
I0814 08:41:25.053398  108965 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 08:41:25.053550  108965 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 08:41:25.053596  108965 httplog.go:90] GET /healthz: (2.141951ms) 0 [Go-http-client/1.1 127.0.0.1:58628]
I0814 08:41:25.053810  108965 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.738216ms) 200 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58622]
I0814 08:41:25.073214  108965 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.470639ms) 201 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58622]
I0814 08:41:25.075215  108965 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system:controller:cloud-provider in kube-system
I0814 08:41:25.092821  108965 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:token-cleaner: (4.15449ms) 404 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58628]
I0814 08:41:25.092989  108965 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 08:41:25.093034  108965 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 08:41:25.093097  108965 httplog.go:90] GET /healthz: (5.756753ms) 0 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58622]
I0814 08:41:25.098968  108965 httplog.go:90] GET /api/v1/namespaces/kube-system: (5.222161ms) 200 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58628]
I0814 08:41:25.114247  108965 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (4.149904ms) 201 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58628]
I0814 08:41:25.114949  108965 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system:controller:token-cleaner in kube-system
I0814 08:41:25.131853  108965 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/rolebindings/system:controller:bootstrap-signer: (2.923572ms) 404 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58628]
I0814 08:41:25.134378  108965 httplog.go:90] GET /api/v1/namespaces/kube-public: (1.932314ms) 200 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58628]
I0814 08:41:25.152090  108965 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/rolebindings: (3.274702ms) 201 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58628]
I0814 08:41:25.152797  108965 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 08:41:25.152831  108965 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 08:41:25.152870  108965 httplog.go:90] GET /healthz: (1.513971ms) 0 [Go-http-client/1.1 127.0.0.1:58622]
I0814 08:41:25.153180  108965 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-public
I0814 08:41:25.191979  108965 httplog.go:90] GET /healthz: (4.399491ms) 200 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58622]
I0814 08:41:25.194349  108965 httplog.go:90] GET /api/v1/namespaces/default: (1.662424ms) 404 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58622]
I0814 08:41:25.198855  108965 httplog.go:90] POST /api/v1/namespaces: (3.869083ms) 201 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58622]
I0814 08:41:25.201314  108965 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (1.796098ms) 404 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58622]
I0814 08:41:25.207329  108965 httplog.go:90] POST /api/v1/namespaces/default/services: (5.140234ms) 201 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58622]
I0814 08:41:25.212725  108965 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (3.292714ms) 404 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58622]
I0814 08:41:25.217467  108965 httplog.go:90] POST /api/v1/namespaces/default/endpoints: (951.66µs) 422 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58622]
E0814 08:41:25.217719  108965 controller.go:218] unable to sync kubernetes service: Endpoints "kubernetes" is invalid: [subsets[0].addresses[0].ip: Invalid value: "<nil>": must be a valid IP address, (e.g. 10.9.8.7), subsets[0].addresses[0].ip: Invalid value: "<nil>": must be a valid IP address]
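The 422 above is the apiserver's endpoint address validation rejecting the literal string "<nil>", which is not a parseable IP. A minimal, hypothetical sketch of that check using only the standard library (illustrative, not the apiserver's actual validation code):

package main

import (
	"fmt"
	"net"
)

// Hypothetical illustration: each endpoint address must parse as an IP.
// net.ParseIP returns nil for anything that does not, which is what turns
// the "<nil>" placeholder above into an Invalid value error.
func main() {
	for _, addr := range []string{"<nil>", "10.9.8.7"} {
		if net.ParseIP(addr) == nil {
			fmt.Printf("invalid: %q must be a valid IP address, (e.g. 10.9.8.7)\n", addr)
		} else {
			fmt.Printf("valid: %q\n", addr)
		}
	}
}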
I0814 08:41:25.255368  108965 httplog.go:90] GET /healthz: (1.669726ms) 200 [Go-http-client/1.1 127.0.0.1:58622]
I0814 08:41:25.258607  108965 httplog.go:90] GET /api/v1/namespaces/default/pods: (2.79838ms) 200 [Go-http-client/1.1 127.0.0.1:58622]
I0814 08:41:25.258873  108965 controller.go:176] Shutting down kubernetes service endpoint reconciler
I0814 08:41:25.262256  108965 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (2.541256ms) 404 [master.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:58622]
--- FAIL: TestEmptyList (3.95s)
    synthetic_master_test.go:139: body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"36563"},"items":null}
    synthetic_master_test.go:140: nil items field from empty list (all lists should return non-nil empty items lists)

				from junit_eb089aee80105aff5db0557ae4449d31f19359f2_20190814-083326.xml
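The assertion above encodes an API convention: an empty list must serialize its items as [] rather than null. A minimal sketch of the failing check, run against the exact body from the log (the struct and wiring here are illustrative stand-ins, not the code in synthetic_master_test.go):

package main

import (
	"encoding/json"
	"fmt"
)

// podList is a hypothetical, trimmed-down stand-in for the list type the
// test decodes; only the items field matters for this check.
type podList struct {
	Kind  string            `json:"kind"`
	Items []json.RawMessage `json:"items"`
}

func main() {
	// The response body reported by the failing test: "items" is null.
	body := `{"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"36563"},"items":null}`

	var list podList
	if err := json.Unmarshal([]byte(body), &list); err != nil {
		panic(err)
	}
	// JSON null unmarshals into a nil slice, which is exactly what the
	// test rejects: all lists should return non-nil empty items lists.
	if list.Items == nil {
		fmt.Println("FAIL: nil items field from empty list")
	}
}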


k8s.io/kubernetes/test/integration/scheduler TestPreemptWithPermitPlugin 1m4s

go test -v k8s.io/kubernetes/test/integration/scheduler -run TestPreemptWithPermitPlugin$
=== RUN   TestPreemptWithPermitPlugin
I0814 08:40:09.853269  110468 services.go:33] Network range for service cluster IPs is unspecified. Defaulting to {10.0.0.0 ffffff00}.
I0814 08:40:09.853297  110468 services.go:45] Setting service IP to "10.0.0.1" (read-write).
I0814 08:40:09.853309  110468 master.go:278] Node port range unspecified. Defaulting to 30000-32767.
I0814 08:40:09.853319  110468 master.go:234] Using reconciler: 
I0814 08:40:09.855053  110468 storage_factory.go:285] storing podtemplates in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"ef0d6029-a41b-479d-91c2-3beae949db49", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:40:09.855149  110468 client.go:354] parsed scheme: ""
I0814 08:40:09.855270  110468 client.go:354] scheme "" not registered, fallback to default scheme
I0814 08:40:09.855369  110468 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 08:40:09.855483  110468 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:40:09.855816  110468 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 08:40:09.855970  110468 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:40:09.856337  110468 store.go:1342] Monitoring podtemplates count at <storage-prefix>//podtemplates
I0814 08:40:09.856528  110468 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"ef0d6029-a41b-479d-91c2-3beae949db49", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:40:09.856862  110468 client.go:354] parsed scheme: ""
I0814 08:40:09.857131  110468 client.go:354] scheme "" not registered, fallback to default scheme
I0814 08:40:09.857170  110468 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 08:40:09.856428  110468 reflector.go:160] Listing and watching *core.PodTemplate from storage/cacher.go:/podtemplates
I0814 08:40:09.857336  110468 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:40:09.857857  110468 watch_cache.go:405] Replace watchCache (rev: 28903) 
I0814 08:40:09.857938  110468 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 08:40:09.858127  110468 store.go:1342] Monitoring events count at <storage-prefix>//events
I0814 08:40:09.858162  110468 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:40:09.858162  110468 storage_factory.go:285] storing limitranges in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"ef0d6029-a41b-479d-91c2-3beae949db49", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:40:09.858209  110468 reflector.go:160] Listing and watching *core.Event from storage/cacher.go:/events
I0814 08:40:09.858226  110468 client.go:354] parsed scheme: ""
I0814 08:40:09.858237  110468 client.go:354] scheme "" not registered, fallback to default scheme
I0814 08:40:09.858269  110468 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 08:40:09.858322  110468 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:40:09.858597  110468 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 08:40:09.858690  110468 store.go:1342] Monitoring limitranges count at <storage-prefix>//limitranges
I0814 08:40:09.858724  110468 storage_factory.go:285] storing resourcequotas in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"ef0d6029-a41b-479d-91c2-3beae949db49", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:40:09.858778  110468 client.go:354] parsed scheme: ""
I0814 08:40:09.858790  110468 client.go:354] scheme "" not registered, fallback to default scheme
I0814 08:40:09.858818  110468 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 08:40:09.858831  110468 reflector.go:160] Listing and watching *core.LimitRange from storage/cacher.go:/limitranges
I0814 08:40:09.858777  110468 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:40:09.858962  110468 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:40:09.859186  110468 watch_cache.go:405] Replace watchCache (rev: 28903) 
I0814 08:40:09.859262  110468 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 08:40:09.859355  110468 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:40:09.859362  110468 store.go:1342] Monitoring resourcequotas count at <storage-prefix>//resourcequotas
I0814 08:40:09.859392  110468 reflector.go:160] Listing and watching *core.ResourceQuota from storage/cacher.go:/resourcequotas
I0814 08:40:09.859491  110468 storage_factory.go:285] storing secrets in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"ef0d6029-a41b-479d-91c2-3beae949db49", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:40:09.859543  110468 client.go:354] parsed scheme: ""
I0814 08:40:09.859553  110468 client.go:354] scheme "" not registered, fallback to default scheme
I0814 08:40:09.859581  110468 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 08:40:09.859623  110468 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:40:09.859672  110468 watch_cache.go:405] Replace watchCache (rev: 28903) 
I0814 08:40:09.859934  110468 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 08:40:09.859981  110468 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:40:09.860058  110468 store.go:1342] Monitoring secrets count at <storage-prefix>//secrets
I0814 08:40:09.860141  110468 reflector.go:160] Listing and watching *core.Secret from storage/cacher.go:/secrets
I0814 08:40:09.860203  110468 storage_factory.go:285] storing persistentvolumes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"ef0d6029-a41b-479d-91c2-3beae949db49", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:40:09.860284  110468 client.go:354] parsed scheme: ""
I0814 08:40:09.860294  110468 client.go:354] scheme "" not registered, fallback to default scheme
I0814 08:40:09.860323  110468 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 08:40:09.860372  110468 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:40:09.860624  110468 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 08:40:09.860698  110468 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:40:09.860717  110468 store.go:1342] Monitoring persistentvolumes count at <storage-prefix>//persistentvolumes
I0814 08:40:09.860749  110468 reflector.go:160] Listing and watching *core.PersistentVolume from storage/cacher.go:/persistentvolumes
I0814 08:40:09.860843  110468 storage_factory.go:285] storing persistentvolumeclaims in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"ef0d6029-a41b-479d-91c2-3beae949db49", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:40:09.860907  110468 client.go:354] parsed scheme: ""
I0814 08:40:09.860916  110468 client.go:354] scheme "" not registered, fallback to default scheme
I0814 08:40:09.860949  110468 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 08:40:09.861084  110468 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:40:09.861191  110468 watch_cache.go:405] Replace watchCache (rev: 28903) 
I0814 08:40:09.861397  110468 watch_cache.go:405] Replace watchCache (rev: 28903) 
I0814 08:40:09.861466  110468 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 08:40:09.861518  110468 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:40:09.861556  110468 store.go:1342] Monitoring persistentvolumeclaims count at <storage-prefix>//persistentvolumeclaims
I0814 08:40:09.861630  110468 reflector.go:160] Listing and watching *core.PersistentVolumeClaim from storage/cacher.go:/persistentvolumeclaims
I0814 08:40:09.861676  110468 storage_factory.go:285] storing configmaps in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"ef0d6029-a41b-479d-91c2-3beae949db49", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:40:09.861742  110468 client.go:354] parsed scheme: ""
I0814 08:40:09.861751  110468 client.go:354] scheme "" not registered, fallback to default scheme
I0814 08:40:09.861785  110468 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 08:40:09.861823  110468 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:40:09.862091  110468 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 08:40:09.862128  110468 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:40:09.862165  110468 store.go:1342] Monitoring configmaps count at <storage-prefix>//configmaps
I0814 08:40:09.862234  110468 reflector.go:160] Listing and watching *core.ConfigMap from storage/cacher.go:/configmaps
I0814 08:40:09.862313  110468 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"ef0d6029-a41b-479d-91c2-3beae949db49", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:40:09.862369  110468 client.go:354] parsed scheme: ""
I0814 08:40:09.862378  110468 client.go:354] scheme "" not registered, fallback to default scheme
I0814 08:40:09.862409  110468 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 08:40:09.862449  110468 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:40:09.863150  110468 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 08:40:09.863278  110468 store.go:1342] Monitoring namespaces count at <storage-prefix>//namespaces
I0814 08:40:09.863403  110468 storage_factory.go:285] storing endpoints in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"ef0d6029-a41b-479d-91c2-3beae949db49", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:40:09.863462  110468 client.go:354] parsed scheme: ""
I0814 08:40:09.863471  110468 client.go:354] scheme "" not registered, fallback to default scheme
I0814 08:40:09.863500  110468 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 08:40:09.863529  110468 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:40:09.863557  110468 reflector.go:160] Listing and watching *core.Namespace from storage/cacher.go:/namespaces
I0814 08:40:09.863745  110468 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:40:09.863990  110468 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 08:40:09.864061  110468 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:40:09.864142  110468 store.go:1342] Monitoring endpoints count at <storage-prefix>//services/endpoints
I0814 08:40:09.864275  110468 reflector.go:160] Listing and watching *core.Endpoints from storage/cacher.go:/services/endpoints
I0814 08:40:09.864280  110468 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"ef0d6029-a41b-479d-91c2-3beae949db49", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:40:09.864337  110468 client.go:354] parsed scheme: ""
I0814 08:40:09.864345  110468 client.go:354] scheme "" not registered, fallback to default scheme
I0814 08:40:09.864372  110468 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 08:40:09.864483  110468 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:40:09.864711  110468 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 08:40:09.864812  110468 store.go:1342] Monitoring nodes count at <storage-prefix>//minions
I0814 08:40:09.864945  110468 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"ef0d6029-a41b-479d-91c2-3beae949db49", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:40:09.865004  110468 client.go:354] parsed scheme: ""
I0814 08:40:09.865035  110468 client.go:354] scheme "" not registered, fallback to default scheme
I0814 08:40:09.865067  110468 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 08:40:09.865105  110468 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:40:09.865135  110468 reflector.go:160] Listing and watching *core.Node from storage/cacher.go:/minions
I0814 08:40:09.865366  110468 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:40:09.865632  110468 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 08:40:09.865728  110468 store.go:1342] Monitoring pods count at <storage-prefix>//pods
I0814 08:40:09.865846  110468 storage_factory.go:285] storing serviceaccounts in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"ef0d6029-a41b-479d-91c2-3beae949db49", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:40:09.865905  110468 client.go:354] parsed scheme: ""
I0814 08:40:09.865915  110468 client.go:354] scheme "" not registered, fallback to default scheme
I0814 08:40:09.865940  110468 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 08:40:09.865976  110468 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:40:09.866004  110468 reflector.go:160] Listing and watching *core.Pod from storage/cacher.go:/pods
I0814 08:40:09.866231  110468 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:40:09.866463  110468 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 08:40:09.866530  110468 store.go:1342] Monitoring serviceaccounts count at <storage-prefix>//serviceaccounts
I0814 08:40:09.866643  110468 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"ef0d6029-a41b-479d-91c2-3beae949db49", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:40:09.866700  110468 client.go:354] parsed scheme: ""
I0814 08:40:09.866710  110468 client.go:354] scheme "" not registered, fallback to default scheme
I0814 08:40:09.866743  110468 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 08:40:09.866786  110468 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:40:09.866812  110468 reflector.go:160] Listing and watching *core.ServiceAccount from storage/cacher.go:/serviceaccounts
I0814 08:40:09.866990  110468 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:40:09.867255  110468 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 08:40:09.867337  110468 store.go:1342] Monitoring services count at <storage-prefix>//services/specs
I0814 08:40:09.867360  110468 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"ef0d6029-a41b-479d-91c2-3beae949db49", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:40:09.867442  110468 client.go:354] parsed scheme: ""
I0814 08:40:09.867452  110468 client.go:354] scheme "" not registered, fallback to default scheme
I0814 08:40:09.867478  110468 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 08:40:09.867529  110468 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:40:09.867554  110468 reflector.go:160] Listing and watching *core.Service from storage/cacher.go:/services/specs
I0814 08:40:09.867722  110468 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:40:09.867954  110468 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 08:40:09.868200  110468 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:40:09.870678  110468 client.go:354] parsed scheme: ""
I0814 08:40:09.870697  110468 client.go:354] scheme "" not registered, fallback to default scheme
I0814 08:40:09.870729  110468 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 08:40:09.870773  110468 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:40:09.871079  110468 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 08:40:09.871230  110468 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"ef0d6029-a41b-479d-91c2-3beae949db49", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:40:09.871283  110468 client.go:354] parsed scheme: ""
I0814 08:40:09.871293  110468 client.go:354] scheme "" not registered, fallback to default scheme
I0814 08:40:09.871322  110468 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 08:40:09.871368  110468 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:40:09.871404  110468 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:40:09.872969  110468 watch_cache.go:405] Replace watchCache (rev: 28903) 
I0814 08:40:09.873057  110468 watch_cache.go:405] Replace watchCache (rev: 28903) 
I0814 08:40:09.873083  110468 watch_cache.go:405] Replace watchCache (rev: 28903) 
I0814 08:40:09.873191  110468 watch_cache.go:405] Replace watchCache (rev: 28903) 
I0814 08:40:09.873207  110468 watch_cache.go:405] Replace watchCache (rev: 28903) 
I0814 08:40:09.873308  110468 watch_cache.go:405] Replace watchCache (rev: 28903) 
I0814 08:40:09.873619  110468 watch_cache.go:405] Replace watchCache (rev: 28903) 
I0814 08:40:09.873636  110468 watch_cache.go:405] Replace watchCache (rev: 28903) 
I0814 08:40:09.873705  110468 watch_cache.go:405] Replace watchCache (rev: 28903) 
I0814 08:40:09.874916  110468 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 08:40:09.875002  110468 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:40:09.875089  110468 store.go:1342] Monitoring replicationcontrollers count at <storage-prefix>//controllers
I0814 08:40:09.875377  110468 reflector.go:160] Listing and watching *core.ReplicationController from storage/cacher.go:/controllers
I0814 08:40:09.875586  110468 storage_factory.go:285] storing bindings in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"ef0d6029-a41b-479d-91c2-3beae949db49", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:40:09.875989  110468 watch_cache.go:405] Replace watchCache (rev: 28903) 
I0814 08:40:09.876591  110468 storage_factory.go:285] storing componentstatuses in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"ef0d6029-a41b-479d-91c2-3beae949db49", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:40:09.877187  110468 storage_factory.go:285] storing configmaps in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"ef0d6029-a41b-479d-91c2-3beae949db49", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:40:09.877744  110468 storage_factory.go:285] storing endpoints in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"ef0d6029-a41b-479d-91c2-3beae949db49", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:40:09.878169  110468 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"ef0d6029-a41b-479d-91c2-3beae949db49", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:40:09.879355  110468 storage_factory.go:285] storing limitranges in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"ef0d6029-a41b-479d-91c2-3beae949db49", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:40:09.879646  110468 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"ef0d6029-a41b-479d-91c2-3beae949db49", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:40:09.879742  110468 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"ef0d6029-a41b-479d-91c2-3beae949db49", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:40:09.879858  110468 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"ef0d6029-a41b-479d-91c2-3beae949db49", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:40:09.880195  110468 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"ef0d6029-a41b-479d-91c2-3beae949db49", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:40:09.880622  110468 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"ef0d6029-a41b-479d-91c2-3beae949db49", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:40:09.880762  110468 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"ef0d6029-a41b-479d-91c2-3beae949db49", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:40:09.881299  110468 storage_factory.go:285] storing persistentvolumeclaims in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"ef0d6029-a41b-479d-91c2-3beae949db49", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:40:09.881462  110468 storage_factory.go:285] storing persistentvolumeclaims in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"ef0d6029-a41b-479d-91c2-3beae949db49", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:40:09.881817  110468 storage_factory.go:285] storing persistentvolumes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"ef0d6029-a41b-479d-91c2-3beae949db49", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:40:09.881957  110468 storage_factory.go:285] storing persistentvolumes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"ef0d6029-a41b-479d-91c2-3beae949db49", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:40:09.882483  110468 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"ef0d6029-a41b-479d-91c2-3beae949db49", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:40:09.882617  110468 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"ef0d6029-a41b-479d-91c2-3beae949db49", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:40:09.882707  110468 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"ef0d6029-a41b-479d-91c2-3beae949db49", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:40:09.882812  110468 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"ef0d6029-a41b-479d-91c2-3beae949db49", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:40:09.882930  110468 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"ef0d6029-a41b-479d-91c2-3beae949db49", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:40:09.883130  110468 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"ef0d6029-a41b-479d-91c2-3beae949db49", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:40:09.883228  110468 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"ef0d6029-a41b-479d-91c2-3beae949db49", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:40:09.883680  110468 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"ef0d6029-a41b-479d-91c2-3beae949db49", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:40:09.883842  110468 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"ef0d6029-a41b-479d-91c2-3beae949db49", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:40:09.884410  110468 storage_factory.go:285] storing podtemplates in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"ef0d6029-a41b-479d-91c2-3beae949db49", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:40:09.884933  110468 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"ef0d6029-a41b-479d-91c2-3beae949db49", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:40:09.885115  110468 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"ef0d6029-a41b-479d-91c2-3beae949db49", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:40:09.885297  110468 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"ef0d6029-a41b-479d-91c2-3beae949db49", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:40:09.885720  110468 storage_factory.go:285] storing resourcequotas in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"ef0d6029-a41b-479d-91c2-3beae949db49", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:40:09.885907  110468 storage_factory.go:285] storing resourcequotas in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"ef0d6029-a41b-479d-91c2-3beae949db49", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:40:09.886325  110468 storage_factory.go:285] storing secrets in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"ef0d6029-a41b-479d-91c2-3beae949db49", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:40:09.886731  110468 storage_factory.go:285] storing serviceaccounts in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"ef0d6029-a41b-479d-91c2-3beae949db49", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:40:09.887155  110468 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"ef0d6029-a41b-479d-91c2-3beae949db49", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:40:09.887587  110468 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"ef0d6029-a41b-479d-91c2-3beae949db49", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:40:09.887735  110468 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"ef0d6029-a41b-479d-91c2-3beae949db49", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:40:09.887825  110468 master.go:423] Skipping disabled API group "auditregistration.k8s.io".
I0814 08:40:09.887844  110468 master.go:434] Enabling API group "authentication.k8s.io".
I0814 08:40:09.887860  110468 master.go:434] Enabling API group "authorization.k8s.io".
I0814 08:40:09.887975  110468 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"ef0d6029-a41b-479d-91c2-3beae949db49", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:40:09.888085  110468 client.go:354] parsed scheme: ""
I0814 08:40:09.888097  110468 client.go:354] scheme "" not registered, fallback to default scheme
I0814 08:40:09.888126  110468 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 08:40:09.888158  110468 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:40:09.889369  110468 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 08:40:09.889456  110468 store.go:1342] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I0814 08:40:09.889551  110468 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"ef0d6029-a41b-479d-91c2-3beae949db49", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:40:09.889594  110468 client.go:354] parsed scheme: ""
I0814 08:40:09.889601  110468 client.go:354] scheme "" not registered, fallback to default scheme
I0814 08:40:09.889627  110468 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 08:40:09.889675  110468 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:40:09.889707  110468 reflector.go:160] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I0814 08:40:09.889864  110468 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:40:09.890160  110468 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 08:40:09.890234  110468 store.go:1342] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I0814 08:40:09.890335  110468 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"ef0d6029-a41b-479d-91c2-3beae949db49", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:40:09.890387  110468 client.go:354] parsed scheme: ""
I0814 08:40:09.890394  110468 client.go:354] scheme "" not registered, fallback to default scheme
I0814 08:40:09.890414  110468 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 08:40:09.890444  110468 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:40:09.890470  110468 reflector.go:160] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I0814 08:40:09.890652  110468 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:40:09.890831  110468 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 08:40:09.890878  110468 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:40:09.890896  110468 store.go:1342] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I0814 08:40:09.890911  110468 master.go:434] Enabling API group "autoscaling".
I0814 08:40:09.890920  110468 reflector.go:160] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I0814 08:40:09.891004  110468 storage_factory.go:285] storing jobs.batch in batch/v1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"ef0d6029-a41b-479d-91c2-3beae949db49", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:40:09.891073  110468 client.go:354] parsed scheme: ""
I0814 08:40:09.891080  110468 client.go:354] scheme "" not registered, fallback to default scheme
I0814 08:40:09.891103  110468 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 08:40:09.891183  110468 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:40:09.891423  110468 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 08:40:09.891554  110468 store.go:1342] Monitoring jobs.batch count at <storage-prefix>//jobs
I0814 08:40:09.891673  110468 storage_factory.go:285] storing cronjobs.batch in batch/v1beta1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"ef0d6029-a41b-479d-91c2-3beae949db49", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:40:09.891729  110468 client.go:354] parsed scheme: ""
I0814 08:40:09.891739  110468 client.go:354] scheme "" not registered, fallback to default scheme
I0814 08:40:09.891768  110468 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 08:40:09.891882  110468 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:40:09.891910  110468 reflector.go:160] Listing and watching *batch.Job from storage/cacher.go:/jobs
I0814 08:40:09.892093  110468 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:40:09.892415  110468 watch_cache.go:405] Replace watchCache (rev: 28904) 
I0814 08:40:09.892704  110468 watch_cache.go:405] Replace watchCache (rev: 28904) 
I0814 08:40:09.892726  110468 watch_cache.go:405] Replace watchCache (rev: 28904) 
I0814 08:40:09.892815  110468 watch_cache.go:405] Replace watchCache (rev: 28904) 
I0814 08:40:09.893294  110468 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 08:40:09.893533  110468 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:40:09.893689  110468 store.go:1342] Monitoring cronjobs.batch count at <storage-prefix>//cronjobs
I0814 08:40:09.893714  110468 master.go:434] Enabling API group "batch".
I0814 08:40:09.893756  110468 reflector.go:160] Listing and watching *batch.CronJob from storage/cacher.go:/cronjobs
I0814 08:40:09.894037  110468 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ef0d6029-a41b-479d-91c2-3beae949db49", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:40:09.894115  110468 client.go:354] parsed scheme: ""
I0814 08:40:09.894124  110468 client.go:354] scheme "" not registered, fallback to default scheme
I0814 08:40:09.894158  110468 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 08:40:09.894390  110468 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:40:09.894535  110468 watch_cache.go:405] Replace watchCache (rev: 28904) 
I0814 08:40:09.894694  110468 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 08:40:09.894790  110468 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:40:09.894895  110468 store.go:1342] Monitoring certificatesigningrequests.certificates.k8s.io count at <storage-prefix>//certificatesigningrequests
I0814 08:40:09.894951  110468 master.go:434] Enabling API group "certificates.k8s.io".
I0814 08:40:09.894994  110468 reflector.go:160] Listing and watching *certificates.CertificateSigningRequest from storage/cacher.go:/certificatesigningrequests
I0814 08:40:09.895864  110468 watch_cache.go:405] Replace watchCache (rev: 28904) 
I0814 08:40:09.896860  110468 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ef0d6029-a41b-479d-91c2-3beae949db49", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:40:09.897277  110468 client.go:354] parsed scheme: ""
I0814 08:40:09.897289  110468 client.go:354] scheme "" not registered, fallback to default scheme
I0814 08:40:09.897360  110468 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 08:40:09.897429  110468 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:40:09.903870  110468 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 08:40:09.904118  110468 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:40:09.904339  110468 store.go:1342] Monitoring leases.coordination.k8s.io count at <storage-prefix>//leases
I0814 08:40:09.904417  110468 reflector.go:160] Listing and watching *coordination.Lease from storage/cacher.go:/leases
I0814 08:40:09.905482  110468 watch_cache.go:405] Replace watchCache (rev: 28904) 
I0814 08:40:09.906357  110468 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ef0d6029-a41b-479d-91c2-3beae949db49", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:40:09.906535  110468 client.go:354] parsed scheme: ""
I0814 08:40:09.906660  110468 client.go:354] scheme "" not registered, fallback to default scheme
I0814 08:40:09.906767  110468 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 08:40:09.906898  110468 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:40:09.907685  110468 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 08:40:09.907854  110468 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:40:09.908109  110468 store.go:1342] Monitoring leases.coordination.k8s.io count at <storage-prefix>//leases
I0814 08:40:09.908132  110468 reflector.go:160] Listing and watching *coordination.Lease from storage/cacher.go:/leases
I0814 08:40:09.908224  110468 master.go:434] Enabling API group "coordination.k8s.io".
I0814 08:40:09.909474  110468 watch_cache.go:405] Replace watchCache (rev: 28904) 
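
The recurring parsed scheme: "" / scheme "" not registered, fallback to default scheme pairs come from the embedded etcd clientv3 dialing gRPC: a target like 127.0.0.1:2379 carries no scheme, so grpc-go finds no registered resolver and falls back to its default (passthrough) one, handing the address straight to the balancer that then logs the pin. A minimal client-side sketch of the same fallback, assuming a google.golang.org/grpc of that era is on the module path:

package main

import (
	"log"

	"google.golang.org/grpc"
)

func main() {
	// No scheme on the target, so the resolver lookup fails and grpc-go
	// takes the "fallback to default scheme" (passthrough) path seen in
	// the client.go:354 lines above.
	conn, err := grpc.Dial("127.0.0.1:2379", grpc.WithInsecure())
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
}
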
I0814 08:40:09.910559  110468 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ef0d6029-a41b-479d-91c2-3beae949db49", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:40:09.910760  110468 client.go:354] parsed scheme: ""
I0814 08:40:09.910895  110468 client.go:354] scheme "" not registered, fallback to default scheme
I0814 08:40:09.911062  110468 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 08:40:09.911240  110468 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:40:09.911758  110468 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 08:40:09.911843  110468 store.go:1342] Monitoring ingresses.networking.k8s.io count at <storage-prefix>//ingress
I0814 08:40:09.911857  110468 master.go:434] Enabling API group "extensions".
I0814 08:40:09.911945  110468 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:40:09.911947  110468 storage_factory.go:285] storing networkpolicies.networking.k8s.io in networking.k8s.io/v1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ef0d6029-a41b-479d-91c2-3beae949db49", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:40:09.911988  110468 client.go:354] parsed scheme: ""
I0814 08:40:09.911994  110468 client.go:354] scheme "" not registered, fallback to default scheme
I0814 08:40:09.912044  110468 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 08:40:09.912124  110468 reflector.go:160] Listing and watching *networking.Ingress from storage/cacher.go:/ingress
I0814 08:40:09.912141  110468 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:40:09.916253  110468 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 08:40:09.916365  110468 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:40:09.916407  110468 store.go:1342] Monitoring networkpolicies.networking.k8s.io count at <storage-prefix>//networkpolicies
I0814 08:40:09.916465  110468 watch_cache.go:405] Replace watchCache (rev: 28904) 
I0814 08:40:09.916560  110468 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ef0d6029-a41b-479d-91c2-3beae949db49", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:40:09.916620  110468 client.go:354] parsed scheme: ""
I0814 08:40:09.916630  110468 client.go:354] scheme "" not registered, fallback to default scheme
I0814 08:40:09.916663  110468 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 08:40:09.916712  110468 reflector.go:160] Listing and watching *networking.NetworkPolicy from storage/cacher.go:/networkpolicies
I0814 08:40:09.916845  110468 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:40:09.917157  110468 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 08:40:09.917290  110468 store.go:1342] Monitoring ingresses.networking.k8s.io count at <storage-prefix>//ingress
I0814 08:40:09.917304  110468 master.go:434] Enabling API group "networking.k8s.io".
I0814 08:40:09.917330  110468 storage_factory.go:285] storing runtimeclasses.node.k8s.io in node.k8s.io/v1beta1, reading as node.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ef0d6029-a41b-479d-91c2-3beae949db49", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:40:09.917387  110468 client.go:354] parsed scheme: ""
I0814 08:40:09.917397  110468 client.go:354] scheme "" not registered, fallback to default scheme
I0814 08:40:09.917432  110468 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 08:40:09.917471  110468 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:40:09.917496  110468 reflector.go:160] Listing and watching *networking.Ingress from storage/cacher.go:/ingress
I0814 08:40:09.917671  110468 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:40:09.918267  110468 watch_cache.go:405] Replace watchCache (rev: 28905) 
I0814 08:40:09.918992  110468 watch_cache.go:405] Replace watchCache (rev: 28905) 
I0814 08:40:09.920102  110468 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 08:40:09.920183  110468 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:40:09.920269  110468 store.go:1342] Monitoring runtimeclasses.node.k8s.io count at <storage-prefix>//runtimeclasses
I0814 08:40:09.920282  110468 master.go:434] Enabling API group "node.k8s.io".
I0814 08:40:09.920405  110468 storage_factory.go:285] storing poddisruptionbudgets.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"ef0d6029-a41b-479d-91c2-3beae949db49", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:40:09.920462  110468 client.go:354] parsed scheme: ""
I0814 08:40:09.920472  110468 client.go:354] scheme "" not registered, fallback to default scheme
I0814 08:40:09.920500  110468 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 08:40:09.920535  110468 reflector.go:160] Listing and watching *node.RuntimeClass from storage/cacher.go:/runtimeclasses
I0814 08:40:09.920686  110468 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:40:09.920949  110468 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 08:40:09.921035  110468 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:40:09.921571  110468 watch_cache.go:405] Replace watchCache (rev: 28905) 
I0814 08:40:09.921609  110468 store.go:1342] Monitoring poddisruptionbudgets.policy count at <storage-prefix>//poddisruptionbudgets
I0814 08:40:09.921702  110468 reflector.go:160] Listing and watching *policy.PodDisruptionBudget from storage/cacher.go:/poddisruptionbudgets
I0814 08:40:09.922274  110468 storage_factory.go:285] storing podsecuritypolicies.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"ef0d6029-a41b-479d-91c2-3beae949db49", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:40:09.922347  110468 client.go:354] parsed scheme: ""
I0814 08:40:09.922357  110468 client.go:354] scheme "" not registered, fallback to default scheme
I0814 08:40:09.922388  110468 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 08:40:09.922436  110468 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:40:09.923296  110468 watch_cache.go:405] Replace watchCache (rev: 28905) 
I0814 08:40:09.923314  110468 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 08:40:09.923397  110468 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:40:09.923450  110468 store.go:1342] Monitoring podsecuritypolicies.policy count at <storage-prefix>//podsecuritypolicy
I0814 08:40:09.923466  110468 master.go:434] Enabling API group "policy".
I0814 08:40:09.923496  110468 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ef0d6029-a41b-479d-91c2-3beae949db49", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:40:09.923556  110468 reflector.go:160] Listing and watching *policy.PodSecurityPolicy from storage/cacher.go:/podsecuritypolicy
I0814 08:40:09.923559  110468 client.go:354] parsed scheme: ""
I0814 08:40:09.923590  110468 client.go:354] scheme "" not registered, fallback to default scheme
I0814 08:40:09.923618  110468 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 08:40:09.923666  110468 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:40:09.924115  110468 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 08:40:09.924206  110468 store.go:1342] Monitoring roles.rbac.authorization.k8s.io count at <storage-prefix>//roles
I0814 08:40:09.924244  110468 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:40:09.924319  110468 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ef0d6029-a41b-479d-91c2-3beae949db49", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:40:09.924333  110468 reflector.go:160] Listing and watching *rbac.Role from storage/cacher.go:/roles
I0814 08:40:09.924393  110468 client.go:354] parsed scheme: ""
I0814 08:40:09.924400  110468 client.go:354] scheme "" not registered, fallback to default scheme
I0814 08:40:09.924418  110468 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 08:40:09.924483  110468 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:40:09.924644  110468 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 08:40:09.924718  110468 store.go:1342] Monitoring rolebindings.rbac.authorization.k8s.io count at <storage-prefix>//rolebindings
I0814 08:40:09.924742  110468 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ef0d6029-a41b-479d-91c2-3beae949db49", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:40:09.924788  110468 client.go:354] parsed scheme: ""
I0814 08:40:09.924795  110468 client.go:354] scheme "" not registered, fallback to default scheme
I0814 08:40:09.924814  110468 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 08:40:09.924883  110468 watch_cache.go:405] Replace watchCache (rev: 28905) 
I0814 08:40:09.925129  110468 reflector.go:160] Listing and watching *rbac.RoleBinding from storage/cacher.go:/rolebindings
I0814 08:40:09.925255  110468 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:40:09.925385  110468 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:40:09.925895  110468 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 08:40:09.925921  110468 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:40:09.925946  110468 watch_cache.go:405] Replace watchCache (rev: 28905) 
I0814 08:40:09.925994  110468 store.go:1342] Monitoring clusterroles.rbac.authorization.k8s.io count at <storage-prefix>//clusterroles
I0814 08:40:09.926047  110468 reflector.go:160] Listing and watching *rbac.ClusterRole from storage/cacher.go:/clusterroles
I0814 08:40:09.926157  110468 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ef0d6029-a41b-479d-91c2-3beae949db49", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:40:09.926228  110468 client.go:354] parsed scheme: ""
I0814 08:40:09.926238  110468 client.go:354] scheme "" not registered, fallback to default scheme
I0814 08:40:09.926265  110468 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 08:40:09.926333  110468 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:40:09.927032  110468 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 08:40:09.927090  110468 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:40:09.927119  110468 store.go:1342] Monitoring clusterrolebindings.rbac.authorization.k8s.io count at <storage-prefix>//clusterrolebindings
I0814 08:40:09.927155  110468 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ef0d6029-a41b-479d-91c2-3beae949db49", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:40:09.927158  110468 reflector.go:160] Listing and watching *rbac.ClusterRoleBinding from storage/cacher.go:/clusterrolebindings
I0814 08:40:09.927213  110468 client.go:354] parsed scheme: ""
I0814 08:40:09.927230  110468 client.go:354] scheme "" not registered, fallback to default scheme
I0814 08:40:09.927262  110468 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 08:40:09.927367  110468 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:40:09.927462  110468 watch_cache.go:405] Replace watchCache (rev: 28905) 
I0814 08:40:09.927569  110468 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 08:40:09.927625  110468 store.go:1342] Monitoring roles.rbac.authorization.k8s.io count at <storage-prefix>//roles
I0814 08:40:09.927680  110468 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:40:09.927737  110468 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ef0d6029-a41b-479d-91c2-3beae949db49", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:40:09.927741  110468 reflector.go:160] Listing and watching *rbac.Role from storage/cacher.go:/roles
I0814 08:40:09.927785  110468 client.go:354] parsed scheme: ""
I0814 08:40:09.927793  110468 client.go:354] scheme "" not registered, fallback to default scheme
I0814 08:40:09.927819  110468 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 08:40:09.927905  110468 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:40:09.928161  110468 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 08:40:09.928241  110468 store.go:1342] Monitoring rolebindings.rbac.authorization.k8s.io count at <storage-prefix>//rolebindings
I0814 08:40:09.928267  110468 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ef0d6029-a41b-479d-91c2-3beae949db49", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:40:09.928321  110468 client.go:354] parsed scheme: ""
I0814 08:40:09.928331  110468 client.go:354] scheme "" not registered, fallback to default scheme
I0814 08:40:09.928358  110468 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 08:40:09.928403  110468 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:40:09.928429  110468 reflector.go:160] Listing and watching *rbac.RoleBinding from storage/cacher.go:/rolebindings
I0814 08:40:09.928587  110468 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:40:09.928843  110468 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 08:40:09.928883  110468 watch_cache.go:405] Replace watchCache (rev: 28905) 
I0814 08:40:09.928921  110468 store.go:1342] Monitoring clusterroles.rbac.authorization.k8s.io count at <storage-prefix>//clusterroles
I0814 08:40:09.929067  110468 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ef0d6029-a41b-479d-91c2-3beae949db49", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:40:09.929122  110468 client.go:354] parsed scheme: ""
I0814 08:40:09.929131  110468 client.go:354] scheme "" not registered, fallback to default scheme
I0814 08:40:09.929160  110468 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 08:40:09.929200  110468 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:40:09.929228  110468 reflector.go:160] Listing and watching *rbac.ClusterRole from storage/cacher.go:/clusterroles
I0814 08:40:09.929404  110468 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:40:09.929647  110468 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 08:40:09.929733  110468 watch_cache.go:405] Replace watchCache (rev: 28905) 
I0814 08:40:09.929734  110468 store.go:1342] Monitoring clusterrolebindings.rbac.authorization.k8s.io count at <storage-prefix>//clusterrolebindings
I0814 08:40:09.930220  110468 watch_cache.go:405] Replace watchCache (rev: 28905) 
I0814 08:40:09.930267  110468 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:40:09.930451  110468 reflector.go:160] Listing and watching *rbac.ClusterRoleBinding from storage/cacher.go:/clusterrolebindings
I0814 08:40:09.930531  110468 watch_cache.go:405] Replace watchCache (rev: 28905) 
I0814 08:40:09.931292  110468 master.go:434] Enabling API group "rbac.authorization.k8s.io".
I0814 08:40:09.931479  110468 watch_cache.go:405] Replace watchCache (rev: 28905) 
I0814 08:40:09.931506  110468 watch_cache.go:405] Replace watchCache (rev: 28905) 
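
Each resource in this section follows the same cacher bring-up: storage_factory picks the storage codec, store.go starts counting the resource, reflector.go performs one List followed by a Watch, and the watch cache is refilled at the listed revision (the "Replace watchCache (rev: N)" lines). client-go exposes the same list-then-watch loop through its Reflector; a sketch of it on the client side rather than inside the apiserver, assuming a kubeconfig at a hypothetical path:

package main

import (
	"time"

	v1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/fields"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/cache"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Hypothetical kubeconfig path; any working client config would do.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// The initial List is the analogue of the "Listing and watching ..."
	// lines above; the Watch that follows keeps the store current.
	lw := cache.NewListWatchFromClient(
		client.CoreV1().RESTClient(), "pods", "", fields.Everything())

	store := cache.NewStore(cache.MetaNamespaceKeyFunc)

	// After listing, the reflector calls store.Replace(items, resourceVersion),
	// the client-side counterpart of "Replace watchCache (rev: 28905)".
	r := cache.NewReflector(lw, &v1.Pod{}, store, time.Minute)

	stop := make(chan struct{})
	r.Run(stop) // blocks: list once, then watch and keep the store in sync
}
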
I0814 08:40:09.933129  110468 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ef0d6029-a41b-479d-91c2-3beae949db49", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:40:09.933216  110468 client.go:354] parsed scheme: ""
I0814 08:40:09.933287  110468 client.go:354] scheme "" not registered, fallback to default scheme
I0814 08:40:09.933422  110468 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 08:40:09.933467  110468 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:40:09.933750  110468 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 08:40:09.933882  110468 store.go:1342] Monitoring priorityclasses.scheduling.k8s.io count at <storage-prefix>//priorityclasses
I0814 08:40:09.934070  110468 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ef0d6029-a41b-479d-91c2-3beae949db49", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:40:09.934118  110468 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:40:09.934179  110468 reflector.go:160] Listing and watching *scheduling.PriorityClass from storage/cacher.go:/priorityclasses
I0814 08:40:09.934265  110468 client.go:354] parsed scheme: ""
I0814 08:40:09.934275  110468 client.go:354] scheme "" not registered, fallback to default scheme
I0814 08:40:09.934328  110468 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 08:40:09.934368  110468 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:40:09.934640  110468 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 08:40:09.934753  110468 watch_cache.go:405] Replace watchCache (rev: 28905) 
I0814 08:40:09.934769  110468 store.go:1342] Monitoring priorityclasses.scheduling.k8s.io count at <storage-prefix>//priorityclasses
I0814 08:40:09.934782  110468 master.go:434] Enabling API group "scheduling.k8s.io".
I0814 08:40:09.934905  110468 master.go:423] Skipping disabled API group "settings.k8s.io".
I0814 08:40:09.934912  110468 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:40:09.935003  110468 reflector.go:160] Listing and watching *scheduling.PriorityClass from storage/cacher.go:/priorityclasses
I0814 08:40:09.935040  110468 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ef0d6029-a41b-479d-91c2-3beae949db49", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:40:09.935094  110468 client.go:354] parsed scheme: ""
I0814 08:40:09.935103  110468 client.go:354] scheme "" not registered, fallback to default scheme
I0814 08:40:09.935133  110468 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 08:40:09.935179  110468 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:40:09.935557  110468 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 08:40:09.935589  110468 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:40:09.935641  110468 store.go:1342] Monitoring storageclasses.storage.k8s.io count at <storage-prefix>//storageclasses
I0814 08:40:09.935708  110468 reflector.go:160] Listing and watching *storage.StorageClass from storage/cacher.go:/storageclasses
I0814 08:40:09.935905  110468 watch_cache.go:405] Replace watchCache (rev: 28905) 
I0814 08:40:09.935975  110468 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ef0d6029-a41b-479d-91c2-3beae949db49", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:40:09.936256  110468 client.go:354] parsed scheme: ""
I0814 08:40:09.936273  110468 client.go:354] scheme "" not registered, fallback to default scheme
I0814 08:40:09.936304  110468 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 08:40:09.936342  110468 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:40:09.936537  110468 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 08:40:09.936635  110468 store.go:1342] Monitoring volumeattachments.storage.k8s.io count at <storage-prefix>//volumeattachments
I0814 08:40:09.936654  110468 storage_factory.go:285] storing csinodes.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ef0d6029-a41b-479d-91c2-3beae949db49", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:40:09.936693  110468 client.go:354] parsed scheme: ""
I0814 08:40:09.936699  110468 client.go:354] scheme "" not registered, fallback to default scheme
I0814 08:40:09.936722  110468 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 08:40:09.936751  110468 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:40:09.936770  110468 reflector.go:160] Listing and watching *storage.VolumeAttachment from storage/cacher.go:/volumeattachments
I0814 08:40:09.936875  110468 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:40:09.937248  110468 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 08:40:09.937250  110468 watch_cache.go:405] Replace watchCache (rev: 28905) 
I0814 08:40:09.937302  110468 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:40:09.937332  110468 store.go:1342] Monitoring csinodes.storage.k8s.io count at <storage-prefix>//csinodes
I0814 08:40:09.937362  110468 storage_factory.go:285] storing csidrivers.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ef0d6029-a41b-479d-91c2-3beae949db49", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:40:09.937419  110468 client.go:354] parsed scheme: ""
I0814 08:40:09.937428  110468 client.go:354] scheme "" not registered, fallback to default scheme
I0814 08:40:09.937452  110468 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 08:40:09.937666  110468 watch_cache.go:405] Replace watchCache (rev: 28905) 
I0814 08:40:09.937832  110468 reflector.go:160] Listing and watching *storage.CSINode from storage/cacher.go:/csinodes
I0814 08:40:09.938125  110468 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:40:09.938459  110468 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 08:40:09.938552  110468 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:40:09.938602  110468 store.go:1342] Monitoring csidrivers.storage.k8s.io count at <storage-prefix>//csidrivers
I0814 08:40:09.938664  110468 reflector.go:160] Listing and watching *storage.CSIDriver from storage/cacher.go:/csidrivers
I0814 08:40:09.938744  110468 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ef0d6029-a41b-479d-91c2-3beae949db49", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:40:09.939387  110468 watch_cache.go:405] Replace watchCache (rev: 28905) 
I0814 08:40:09.939587  110468 watch_cache.go:405] Replace watchCache (rev: 28905) 
I0814 08:40:09.940354  110468 client.go:354] parsed scheme: ""
I0814 08:40:09.940373  110468 client.go:354] scheme "" not registered, fallback to default scheme
I0814 08:40:09.940405  110468 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 08:40:09.940446  110468 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:40:09.941148  110468 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 08:40:09.941231  110468 store.go:1342] Monitoring storageclasses.storage.k8s.io count at <storage-prefix>//storageclasses
I0814 08:40:09.941266  110468 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:40:09.941308  110468 reflector.go:160] Listing and watching *storage.StorageClass from storage/cacher.go:/storageclasses
I0814 08:40:09.941343  110468 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ef0d6029-a41b-479d-91c2-3beae949db49", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:40:09.941404  110468 client.go:354] parsed scheme: ""
I0814 08:40:09.941413  110468 client.go:354] scheme "" not registered, fallback to default scheme
I0814 08:40:09.941441  110468 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 08:40:09.941507  110468 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:40:09.942679  110468 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 08:40:09.942704  110468 watch_cache.go:405] Replace watchCache (rev: 28905) 
I0814 08:40:09.942816  110468 store.go:1342] Monitoring volumeattachments.storage.k8s.io count at <storage-prefix>//volumeattachments
I0814 08:40:09.942829  110468 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:40:09.942832  110468 master.go:434] Enabling API group "storage.k8s.io".
I0814 08:40:09.942850  110468 reflector.go:160] Listing and watching *storage.VolumeAttachment from storage/cacher.go:/volumeattachments
I0814 08:40:09.943093  110468 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"ef0d6029-a41b-479d-91c2-3beae949db49", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:40:09.943178  110468 client.go:354] parsed scheme: ""
I0814 08:40:09.943188  110468 client.go:354] scheme "" not registered, fallback to default scheme
I0814 08:40:09.943263  110468 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 08:40:09.943306  110468 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:40:09.943684  110468 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 08:40:09.943751  110468 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:40:09.943995  110468 store.go:1342] Monitoring deployments.apps count at <storage-prefix>//deployments
I0814 08:40:09.944149  110468 reflector.go:160] Listing and watching *apps.Deployment from storage/cacher.go:/deployments
I0814 08:40:09.944348  110468 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"ef0d6029-a41b-479d-91c2-3beae949db49", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:40:09.944424  110468 client.go:354] parsed scheme: ""
I0814 08:40:09.944434  110468 client.go:354] scheme "" not registered, fallback to default scheme
I0814 08:40:09.944528  110468 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 08:40:09.944568  110468 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:40:09.944781  110468 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 08:40:09.944840  110468 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:40:09.944861  110468 store.go:1342] Monitoring statefulsets.apps count at <storage-prefix>//statefulsets
I0814 08:40:09.944949  110468 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"ef0d6029-a41b-479d-91c2-3beae949db49", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:40:09.944998  110468 client.go:354] parsed scheme: ""
I0814 08:40:09.945005  110468 client.go:354] scheme "" not registered, fallback to default scheme
I0814 08:40:09.945049  110468 watch_cache.go:405] Replace watchCache (rev: 28905) 
I0814 08:40:09.945094  110468 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 08:40:09.945129  110468 reflector.go:160] Listing and watching *apps.StatefulSet from storage/cacher.go:/statefulsets
I0814 08:40:09.945151  110468 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:40:09.945780  110468 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 08:40:09.945800  110468 watch_cache.go:405] Replace watchCache (rev: 28905) 
I0814 08:40:09.945804  110468 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:40:09.945908  110468 store.go:1342] Monitoring daemonsets.apps count at <storage-prefix>//daemonsets
I0814 08:40:09.945960  110468 reflector.go:160] Listing and watching *apps.DaemonSet from storage/cacher.go:/daemonsets
I0814 08:40:09.946126  110468 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"ef0d6029-a41b-479d-91c2-3beae949db49", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:40:09.946191  110468 client.go:354] parsed scheme: ""
I0814 08:40:09.946200  110468 client.go:354] scheme "" not registered, fallback to default scheme
I0814 08:40:09.946227  110468 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 08:40:09.946279  110468 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:40:09.946489  110468 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 08:40:09.946517  110468 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:40:09.946692  110468 store.go:1342] Monitoring replicasets.apps count at <storage-prefix>//replicasets
I0814 08:40:09.946747  110468 reflector.go:160] Listing and watching *apps.ReplicaSet from storage/cacher.go:/replicasets
I0814 08:40:09.946948  110468 watch_cache.go:405] Replace watchCache (rev: 28905) 
I0814 08:40:09.947088  110468 storage_factory.go:285] storing controllerrevisions.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"ef0d6029-a41b-479d-91c2-3beae949db49", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:40:09.947518  110468 watch_cache.go:405] Replace watchCache (rev: 28905) 
I0814 08:40:09.947745  110468 client.go:354] parsed scheme: ""
I0814 08:40:09.947760  110468 client.go:354] scheme "" not registered, fallback to default scheme
I0814 08:40:09.947839  110468 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 08:40:09.947885  110468 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:40:09.948189  110468 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 08:40:09.948243  110468 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:40:09.948292  110468 store.go:1342] Monitoring controllerrevisions.apps count at <storage-prefix>//controllerrevisions
I0814 08:40:09.948306  110468 master.go:434] Enabling API group "apps".
I0814 08:40:09.948333  110468 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ef0d6029-a41b-479d-91c2-3beae949db49", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:40:09.948349  110468 reflector.go:160] Listing and watching *apps.ControllerRevision from storage/cacher.go:/controllerrevisions
I0814 08:40:09.948393  110468 client.go:354] parsed scheme: ""
I0814 08:40:09.948402  110468 client.go:354] scheme "" not registered, fallback to default scheme
I0814 08:40:09.948459  110468 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 08:40:09.948540  110468 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:40:09.948791  110468 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 08:40:09.948879  110468 store.go:1342] Monitoring validatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//validatingwebhookconfigurations
I0814 08:40:09.948905  110468 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ef0d6029-a41b-479d-91c2-3beae949db49", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:40:09.948944  110468 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:40:09.948960  110468 client.go:354] parsed scheme: ""
I0814 08:40:09.948969  110468 client.go:354] scheme "" not registered, fallback to default scheme
I0814 08:40:09.948993  110468 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 08:40:09.949000  110468 reflector.go:160] Listing and watching *admissionregistration.ValidatingWebhookConfiguration from storage/cacher.go:/validatingwebhookconfigurations
I0814 08:40:09.949114  110468 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:40:09.949192  110468 watch_cache.go:405] Replace watchCache (rev: 28905) 
I0814 08:40:09.949700  110468 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 08:40:09.949796  110468 store.go:1342] Monitoring mutatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//mutatingwebhookconfigurations
I0814 08:40:09.949821  110468 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ef0d6029-a41b-479d-91c2-3beae949db49", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:40:09.949870  110468 client.go:354] parsed scheme: ""
I0814 08:40:09.949878  110468 client.go:354] scheme "" not registered, fallback to default scheme
I0814 08:40:09.949894  110468 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:40:09.949895  110468 reflector.go:160] Listing and watching *admissionregistration.MutatingWebhookConfiguration from storage/cacher.go:/mutatingwebhookconfigurations
I0814 08:40:09.949902  110468 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 08:40:09.950059  110468 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:40:09.950178  110468 watch_cache.go:405] Replace watchCache (rev: 28905) 
I0814 08:40:09.950795  110468 watch_cache.go:405] Replace watchCache (rev: 28905) 
I0814 08:40:09.951386  110468 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 08:40:09.951409  110468 watch_cache.go:405] Replace watchCache (rev: 28905) 
I0814 08:40:09.951457  110468 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:40:09.951730  110468 store.go:1342] Monitoring validatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//validatingwebhookconfigurations
I0814 08:40:09.951765  110468 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ef0d6029-a41b-479d-91c2-3beae949db49", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:40:09.951803  110468 reflector.go:160] Listing and watching *admissionregistration.ValidatingWebhookConfiguration from storage/cacher.go:/validatingwebhookconfigurations
I0814 08:40:09.951824  110468 client.go:354] parsed scheme: ""
I0814 08:40:09.951834  110468 client.go:354] scheme "" not registered, fallback to default scheme
I0814 08:40:09.951871  110468 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 08:40:09.951942  110468 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:40:09.952247  110468 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 08:40:09.952350  110468 store.go:1342] Monitoring mutatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//mutatingwebhookconfigurations
I0814 08:40:09.952362  110468 master.go:434] Enabling API group "admissionregistration.k8s.io".
I0814 08:40:09.952421  110468 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"ef0d6029-a41b-479d-91c2-3beae949db49", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:40:09.952555  110468 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:40:09.952565  110468 client.go:354] parsed scheme: ""
I0814 08:40:09.952576  110468 client.go:354] scheme "" not registered, fallback to default scheme
I0814 08:40:09.952591  110468 reflector.go:160] Listing and watching *admissionregistration.MutatingWebhookConfiguration from storage/cacher.go:/mutatingwebhookconfigurations
I0814 08:40:09.952604  110468 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 08:40:09.952650  110468 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:40:09.952887  110468 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 08:40:09.953042  110468 store.go:1342] Monitoring events count at <storage-prefix>//events
I0814 08:40:09.953059  110468 master.go:434] Enabling API group "events.k8s.io".
I0814 08:40:09.953250  110468 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:40:09.953270  110468 storage_factory.go:285] storing tokenreviews.authentication.k8s.io in authentication.k8s.io/v1, reading as authentication.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ef0d6029-a41b-479d-91c2-3beae949db49", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:40:09.953346  110468 watch_cache.go:405] Replace watchCache (rev: 28905) 
I0814 08:40:09.953351  110468 reflector.go:160] Listing and watching *core.Event from storage/cacher.go:/events
I0814 08:40:09.953465  110468 storage_factory.go:285] storing tokenreviews.authentication.k8s.io in authentication.k8s.io/v1, reading as authentication.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ef0d6029-a41b-479d-91c2-3beae949db49", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:40:09.953639  110468 watch_cache.go:405] Replace watchCache (rev: 28905) 
I0814 08:40:09.953766  110468 storage_factory.go:285] storing localsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ef0d6029-a41b-479d-91c2-3beae949db49", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:40:09.953876  110468 storage_factory.go:285] storing selfsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ef0d6029-a41b-479d-91c2-3beae949db49", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:40:09.953989  110468 storage_factory.go:285] storing selfsubjectrulesreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ef0d6029-a41b-479d-91c2-3beae949db49", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:40:09.954127  110468 storage_factory.go:285] storing subjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ef0d6029-a41b-479d-91c2-3beae949db49", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:40:09.954268  110468 watch_cache.go:405] Replace watchCache (rev: 28905) 
I0814 08:40:09.954339  110468 storage_factory.go:285] storing localsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ef0d6029-a41b-479d-91c2-3beae949db49", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:40:09.954442  110468 storage_factory.go:285] storing selfsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ef0d6029-a41b-479d-91c2-3beae949db49", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:40:09.954550  110468 storage_factory.go:285] storing selfsubjectrulesreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ef0d6029-a41b-479d-91c2-3beae949db49", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:40:09.954669  110468 storage_factory.go:285] storing subjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ef0d6029-a41b-479d-91c2-3beae949db49", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:40:09.955577  110468 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"ef0d6029-a41b-479d-91c2-3beae949db49", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:40:09.955920  110468 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"ef0d6029-a41b-479d-91c2-3beae949db49", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:40:09.956987  110468 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"ef0d6029-a41b-479d-91c2-3beae949db49", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:40:09.957484  110468 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"ef0d6029-a41b-479d-91c2-3beae949db49", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:40:09.958501  110468 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"ef0d6029-a41b-479d-91c2-3beae949db49", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:40:09.958781  110468 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"ef0d6029-a41b-479d-91c2-3beae949db49", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:40:09.959544  110468 storage_factory.go:285] storing jobs.batch in batch/v1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"ef0d6029-a41b-479d-91c2-3beae949db49", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:40:09.959829  110468 storage_factory.go:285] storing jobs.batch in batch/v1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"ef0d6029-a41b-479d-91c2-3beae949db49", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:40:09.960707  110468 storage_factory.go:285] storing cronjobs.batch in batch/v1beta1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"ef0d6029-a41b-479d-91c2-3beae949db49", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:40:09.960910  110468 storage_factory.go:285] storing cronjobs.batch in batch/v1beta1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"ef0d6029-a41b-479d-91c2-3beae949db49", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0814 08:40:09.960949  110468 genericapiserver.go:390] Skipping API batch/v2alpha1 because it has no resources.
I0814 08:40:09.961701  110468 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ef0d6029-a41b-479d-91c2-3beae949db49", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:40:09.961828  110468 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ef0d6029-a41b-479d-91c2-3beae949db49", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:40:09.961980  110468 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ef0d6029-a41b-479d-91c2-3beae949db49", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:40:09.962661  110468 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ef0d6029-a41b-479d-91c2-3beae949db49", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:40:09.963416  110468 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ef0d6029-a41b-479d-91c2-3beae949db49", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:40:09.964152  110468 storage_factory.go:285] storing ingresses.extensions in extensions/v1beta1, reading as extensions/__internal from storagebackend.Config{Type:"", Prefix:"ef0d6029-a41b-479d-91c2-3beae949db49", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:40:09.964394  110468 storage_factory.go:285] storing ingresses.extensions in extensions/v1beta1, reading as extensions/__internal from storagebackend.Config{Type:"", Prefix:"ef0d6029-a41b-479d-91c2-3beae949db49", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:40:09.965250  110468 storage_factory.go:285] storing networkpolicies.networking.k8s.io in networking.k8s.io/v1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ef0d6029-a41b-479d-91c2-3beae949db49", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:40:09.966076  110468 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ef0d6029-a41b-479d-91c2-3beae949db49", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:40:09.966285  110468 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ef0d6029-a41b-479d-91c2-3beae949db49", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:40:09.966880  110468 storage_factory.go:285] storing runtimeclasses.node.k8s.io in node.k8s.io/v1beta1, reading as node.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ef0d6029-a41b-479d-91c2-3beae949db49", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0814 08:40:09.966982  110468 genericapiserver.go:390] Skipping API node.k8s.io/v1alpha1 because it has no resources.
I0814 08:40:09.967904  110468 storage_factory.go:285] storing poddisruptionbudgets.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"ef0d6029-a41b-479d-91c2-3beae949db49", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:40:09.968212  110468 storage_factory.go:285] storing poddisruptionbudgets.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"ef0d6029-a41b-479d-91c2-3beae949db49", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:40:09.968744  110468 storage_factory.go:285] storing podsecuritypolicies.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"ef0d6029-a41b-479d-91c2-3beae949db49", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:40:09.969396  110468 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ef0d6029-a41b-479d-91c2-3beae949db49", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:40:09.969856  110468 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ef0d6029-a41b-479d-91c2-3beae949db49", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:40:09.970562  110468 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ef0d6029-a41b-479d-91c2-3beae949db49", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:40:09.971183  110468 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ef0d6029-a41b-479d-91c2-3beae949db49", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:40:09.971732  110468 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ef0d6029-a41b-479d-91c2-3beae949db49", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:40:09.972377  110468 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ef0d6029-a41b-479d-91c2-3beae949db49", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:40:09.973229  110468 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ef0d6029-a41b-479d-91c2-3beae949db49", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:40:09.973903  110468 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ef0d6029-a41b-479d-91c2-3beae949db49", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0814 08:40:09.974030  110468 genericapiserver.go:390] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
I0814 08:40:09.974507  110468 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ef0d6029-a41b-479d-91c2-3beae949db49", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:40:09.975209  110468 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ef0d6029-a41b-479d-91c2-3beae949db49", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0814 08:40:09.975315  110468 genericapiserver.go:390] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
I0814 08:40:09.975866  110468 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ef0d6029-a41b-479d-91c2-3beae949db49", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:40:09.976525  110468 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ef0d6029-a41b-479d-91c2-3beae949db49", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:40:09.976845  110468 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ef0d6029-a41b-479d-91c2-3beae949db49", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:40:09.977639  110468 storage_factory.go:285] storing csidrivers.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ef0d6029-a41b-479d-91c2-3beae949db49", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:40:09.978194  110468 storage_factory.go:285] storing csinodes.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ef0d6029-a41b-479d-91c2-3beae949db49", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:40:09.978970  110468 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ef0d6029-a41b-479d-91c2-3beae949db49", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:40:09.979662  110468 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ef0d6029-a41b-479d-91c2-3beae949db49", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0814 08:40:09.979729  110468 genericapiserver.go:390] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
I0814 08:40:09.980724  110468 storage_factory.go:285] storing controllerrevisions.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"ef0d6029-a41b-479d-91c2-3beae949db49", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:40:09.981489  110468 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"ef0d6029-a41b-479d-91c2-3beae949db49", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:40:09.981726  110468 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"ef0d6029-a41b-479d-91c2-3beae949db49", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:40:09.982610  110468 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"ef0d6029-a41b-479d-91c2-3beae949db49", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:40:09.982810  110468 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"ef0d6029-a41b-479d-91c2-3beae949db49", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:40:09.983070  110468 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"ef0d6029-a41b-479d-91c2-3beae949db49", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:40:09.983660  110468 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"ef0d6029-a41b-479d-91c2-3beae949db49", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:40:09.983884  110468 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"ef0d6029-a41b-479d-91c2-3beae949db49", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:40:09.984134  110468 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"ef0d6029-a41b-479d-91c2-3beae949db49", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:40:09.984939  110468 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"ef0d6029-a41b-479d-91c2-3beae949db49", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:40:09.985258  110468 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"ef0d6029-a41b-479d-91c2-3beae949db49", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:40:09.985599  110468 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"ef0d6029-a41b-479d-91c2-3beae949db49", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0814 08:40:09.985664  110468 genericapiserver.go:390] Skipping API apps/v1beta2 because it has no resources.
W0814 08:40:09.985675  110468 genericapiserver.go:390] Skipping API apps/v1beta1 because it has no resources.
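
The genericapiserver.go:390 warnings above mean the server declines to install a group/version for which no resource storage was registered (everything in apps landed under apps/v1, leaving v1beta1 and v1beta2 empty). A hedged sketch of that filtering step; the map and names here are invented for illustration and are not the real genericapiserver internals:

package main

import "fmt"

func main() {
	// Hypothetical map: group/version -> resource storages registered for it.
	storagePerVersion := map[string][]string{
		"apps/v1":      {"controllerrevisions", "daemonsets", "deployments", "replicasets", "statefulsets"},
		"apps/v1beta1": {}, // nothing registered at this version
		"apps/v1beta2": {},
	}

	for gv, resources := range storagePerVersion {
		if len(resources) == 0 {
			fmt.Printf("W] Skipping API %s because it has no resources.\n", gv)
			continue
		}
		fmt.Printf("I] installing %s with %d resources\n", gv, len(resources))
	}
}
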
I0814 08:40:09.986438  110468 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ef0d6029-a41b-479d-91c2-3beae949db49", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:40:09.987137  110468 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ef0d6029-a41b-479d-91c2-3beae949db49", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:40:09.987648  110468 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ef0d6029-a41b-479d-91c2-3beae949db49", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:40:09.988308  110468 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ef0d6029-a41b-479d-91c2-3beae949db49", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:40:09.988971  110468 storage_factory.go:285] storing events.events.k8s.io in events.k8s.io/v1beta1, reading as events.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ef0d6029-a41b-479d-91c2-3beae949db49", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0814 08:40:09.991623  110468 healthz.go:169] healthz check etcd failed: etcd client connection not yet established
I0814 08:40:09.991653  110468 healthz.go:169] healthz check poststarthook/bootstrap-controller failed: not finished
I0814 08:40:09.991665  110468 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 08:40:09.991686  110468 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0814 08:40:09.991700  110468 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0814 08:40:09.991708  110468 healthz.go:183] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[-]poststarthook/bootstrap-controller failed: reason withheld
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0814 08:40:09.991770  110468 httplog.go:90] GET /healthz: (251.416µs) 0 [Go-http-client/1.1 127.0.0.1:48292]
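
The [+]/[-] block above is the body /healthz returns: each named check runs in turn, failures are reported to the client only as "reason withheld" (the actual reasons appear in the server-side healthz.go:169 lines), and any single failure fails the whole endpoint. A minimal sketch of that aggregation pattern over plain net/http, with made-up check functions:

package main

import (
	"fmt"
	"net/http"
)

type check struct {
	name string
	run  func() error
}

func healthzHandler(checks []check) http.HandlerFunc {
	return func(w http.ResponseWriter, r *http.Request) {
		failed := false
		body := ""
		for _, c := range checks {
			if err := c.run(); err != nil {
				failed = true
				// The real server logs err and withholds it from clients.
				body += fmt.Sprintf("[-]%s failed: reason withheld\n", c.name)
			} else {
				body += fmt.Sprintf("[+]%s ok\n", c.name)
			}
		}
		if failed {
			w.WriteHeader(http.StatusInternalServerError)
			body += "healthz check failed\n"
		}
		fmt.Fprint(w, body)
	}
}

func main() {
	checks := []check{
		{"ping", func() error { return nil }},
		{"etcd", func() error { return fmt.Errorf("etcd client connection not yet established") }},
	}
	http.HandleFunc("/healthz", healthzHandler(checks))
	http.ListenAndServe("127.0.0.1:8080", nil)
}
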
I0814 08:40:09.992902  110468 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.397112ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48294]
I0814 08:40:09.995340  110468 httplog.go:90] GET /api/v1/services: (930.531µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48294]
I0814 08:40:09.998800  110468 httplog.go:90] GET /api/v1/services: (982.057µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48294]
I0814 08:40:10.000681  110468 healthz.go:169] healthz check etcd failed: etcd client connection not yet established
I0814 08:40:10.000721  110468 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 08:40:10.000732  110468 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0814 08:40:10.000743  110468 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0814 08:40:10.000750  110468 healthz.go:183] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0814 08:40:10.000783  110468 httplog.go:90] GET /healthz: (179.127µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48294]
I0814 08:40:10.002514  110468 httplog.go:90] GET /api/v1/services: (1.278903ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48294]
I0814 08:40:10.002554  110468 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.57584ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48292]
I0814 08:40:10.002781  110468 httplog.go:90] GET /api/v1/services: (1.077563ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48296]
I0814 08:40:10.004322  110468 httplog.go:90] POST /api/v1/namespaces: (1.437293ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48294]
I0814 08:40:10.005816  110468 httplog.go:90] GET /api/v1/namespaces/kube-public: (975.175µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48296]
I0814 08:40:10.007397  110468 httplog.go:90] POST /api/v1/namespaces: (1.242938ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48296]
I0814 08:40:10.008498  110468 httplog.go:90] GET /api/v1/namespaces/kube-node-lease: (855.576µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48296]
I0814 08:40:10.009824  110468 httplog.go:90] POST /api/v1/namespaces: (1.038412ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48296]
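
The GET 404 / POST 201 pairs above are the bootstrap controller ensuring the system namespaces exist: kube-system, kube-public and kube-node-lease are each probed and then created through the same /api/v1/namespaces paths shown in the log. A sketch of that idempotent ensure step with plain net/http and hand-rolled JSON (the base URL is an assumption; the real test talks to an in-process server):

package main

import (
	"bytes"
	"fmt"
	"net/http"
)

// ensureNamespace creates the namespace only if a GET says it does not exist yet.
func ensureNamespace(base, name string) error {
	resp, err := http.Get(base + "/api/v1/namespaces/" + name)
	if err != nil {
		return err
	}
	resp.Body.Close()
	if resp.StatusCode != http.StatusNotFound {
		return nil // already there (or some other state we leave alone)
	}
	body := fmt.Sprintf(`{"apiVersion":"v1","kind":"Namespace","metadata":{"name":%q}}`, name)
	resp, err = http.Post(base+"/api/v1/namespaces", "application/json", bytes.NewBufferString(body))
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusCreated {
		return fmt.Errorf("creating %s: unexpected status %d", name, resp.StatusCode)
	}
	return nil
}

func main() {
	for _, ns := range []string{"kube-system", "kube-public", "kube-node-lease"} {
		if err := ensureNamespace("http://127.0.0.1:8080", ns); err != nil {
			fmt.Println(err)
		}
	}
}
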
I0814 08:40:10.092456  110468 healthz.go:169] healthz check etcd failed: etcd client connection not yet established
I0814 08:40:10.092482  110468 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 08:40:10.092491  110468 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0814 08:40:10.092498  110468 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0814 08:40:10.092504  110468 healthz.go:183] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0814 08:40:10.092529  110468 httplog.go:90] GET /healthz: (173.118µs) 0 [Go-http-client/1.1 127.0.0.1:48296]
I0814 08:40:10.101727  110468 healthz.go:169] healthz check etcd failed: etcd client connection not yet established
I0814 08:40:10.101756  110468 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 08:40:10.101764  110468 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0814 08:40:10.101770  110468 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0814 08:40:10.101775  110468 healthz.go:183] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0814 08:40:10.101798  110468 httplog.go:90] GET /healthz: (168.409µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48296]
I0814 08:40:10.192426  110468 healthz.go:169] healthz check etcd failed: etcd client connection not yet established
I0814 08:40:10.192469  110468 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 08:40:10.192479  110468 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0814 08:40:10.192488  110468 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0814 08:40:10.192494  110468 healthz.go:183] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0814 08:40:10.192516  110468 httplog.go:90] GET /healthz: (227.427µs) 0 [Go-http-client/1.1 127.0.0.1:48296]
I0814 08:40:10.201778  110468 healthz.go:169] healthz check etcd failed: etcd client connection not yet established
I0814 08:40:10.201811  110468 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 08:40:10.201823  110468 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0814 08:40:10.201832  110468 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0814 08:40:10.201839  110468 healthz.go:183] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0814 08:40:10.201866  110468 httplog.go:90] GET /healthz: (212.983µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48296]
I0814 08:40:10.292524  110468 healthz.go:169] healthz check etcd failed: etcd client connection not yet established
I0814 08:40:10.292555  110468 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 08:40:10.292565  110468 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0814 08:40:10.292572  110468 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0814 08:40:10.292586  110468 healthz.go:183] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0814 08:40:10.292608  110468 httplog.go:90] GET /healthz: (190.493µs) 0 [Go-http-client/1.1 127.0.0.1:48296]
I0814 08:40:10.301939  110468 healthz.go:169] healthz check etcd failed: etcd client connection not yet established
I0814 08:40:10.302128  110468 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 08:40:10.302451  110468 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0814 08:40:10.302526  110468 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0814 08:40:10.302567  110468 healthz.go:183] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0814 08:40:10.302709  110468 httplog.go:90] GET /healthz: (1.028859ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48296]
I0814 08:40:10.392553  110468 healthz.go:169] healthz check etcd failed: etcd client connection not yet established
I0814 08:40:10.392588  110468 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 08:40:10.392599  110468 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0814 08:40:10.392609  110468 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0814 08:40:10.392616  110468 healthz.go:183] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0814 08:40:10.392655  110468 httplog.go:90] GET /healthz: (219.717µs) 0 [Go-http-client/1.1 127.0.0.1:48296]
I0814 08:40:10.401666  110468 healthz.go:169] healthz check etcd failed: etcd client connection not yet established
I0814 08:40:10.401697  110468 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 08:40:10.401710  110468 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0814 08:40:10.401723  110468 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0814 08:40:10.401730  110468 healthz.go:183] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0814 08:40:10.401758  110468 httplog.go:90] GET /healthz: (166.73µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48296]
I0814 08:40:10.492510  110468 healthz.go:169] healthz check etcd failed: etcd client connection not yet established
I0814 08:40:10.492550  110468 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 08:40:10.492563  110468 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0814 08:40:10.492574  110468 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0814 08:40:10.492586  110468 healthz.go:183] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0814 08:40:10.492634  110468 httplog.go:90] GET /healthz: (276.095µs) 0 [Go-http-client/1.1 127.0.0.1:48296]
I0814 08:40:10.501834  110468 healthz.go:169] healthz check etcd failed: etcd client connection not yet established
I0814 08:40:10.501873  110468 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 08:40:10.501885  110468 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0814 08:40:10.501895  110468 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0814 08:40:10.501903  110468 healthz.go:183] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0814 08:40:10.501941  110468 httplog.go:90] GET /healthz: (253.057µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48296]
I0814 08:40:10.592460  110468 healthz.go:169] healthz check etcd failed: etcd client connection not yet established
I0814 08:40:10.592498  110468 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 08:40:10.592510  110468 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0814 08:40:10.592521  110468 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0814 08:40:10.592529  110468 healthz.go:183] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0814 08:40:10.592579  110468 httplog.go:90] GET /healthz: (252.993µs) 0 [Go-http-client/1.1 127.0.0.1:48296]
I0814 08:40:10.601701  110468 healthz.go:169] healthz check etcd failed: etcd client connection not yet established
I0814 08:40:10.601737  110468 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 08:40:10.601749  110468 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0814 08:40:10.601760  110468 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0814 08:40:10.601767  110468 healthz.go:183] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0814 08:40:10.601792  110468 httplog.go:90] GET /healthz: (224.809µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48296]
I0814 08:40:10.692420  110468 healthz.go:169] healthz check etcd failed: etcd client connection not yet established
I0814 08:40:10.692449  110468 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 08:40:10.692457  110468 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0814 08:40:10.692464  110468 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0814 08:40:10.692470  110468 healthz.go:183] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0814 08:40:10.692491  110468 httplog.go:90] GET /healthz: (185.792µs) 0 [Go-http-client/1.1 127.0.0.1:48296]
I0814 08:40:10.701741  110468 healthz.go:169] healthz check etcd failed: etcd client connection not yet established
I0814 08:40:10.701772  110468 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 08:40:10.701784  110468 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0814 08:40:10.701793  110468 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0814 08:40:10.701801  110468 healthz.go:183] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0814 08:40:10.701832  110468 httplog.go:90] GET /healthz: (216.419µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48296]
I0814 08:40:10.792457  110468 healthz.go:169] healthz check etcd failed: etcd client connection not yet established
I0814 08:40:10.792485  110468 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 08:40:10.792494  110468 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0814 08:40:10.792500  110468 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0814 08:40:10.792505  110468 healthz.go:183] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0814 08:40:10.792535  110468 httplog.go:90] GET /healthz: (197.08µs) 0 [Go-http-client/1.1 127.0.0.1:48296]
I0814 08:40:10.801735  110468 healthz.go:169] healthz check etcd failed: etcd client connection not yet established
I0814 08:40:10.801761  110468 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 08:40:10.801770  110468 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0814 08:40:10.801776  110468 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0814 08:40:10.801781  110468 healthz.go:183] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0814 08:40:10.801801  110468 httplog.go:90] GET /healthz: (203.139µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48296]
I0814 08:40:10.855463  110468 client.go:354] parsed scheme: ""
I0814 08:40:10.855490  110468 client.go:354] scheme "" not registered, fallback to default scheme
I0814 08:40:10.855537  110468 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 08:40:10.855604  110468 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:40:10.856054  110468 balancer_conn_wrappers.go:131] clientv3/balancer: pin "127.0.0.1:2379"
I0814 08:40:10.856160  110468 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
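
At 08:40:10.855 the storage's etcd client finally dials: the resolver hands 127.0.0.1:2379 to the gRPC balancer, which pins it, and the next healthz pass below flips etcd to [+]. Opening a comparable client with the etcd v3 client library looks roughly like this (a sketch; the import path is the clientv3 package path of that era, and the endpoint and keyspace prefix are taken from the log):

package main

import (
	"context"
	"fmt"
	"time"

	clientv3 "go.etcd.io/etcd/clientv3"
)

func main() {
	cli, err := clientv3.New(clientv3.Config{
		Endpoints:   []string{"http://127.0.0.1:2379"}, // same endpoint the log shows
		DialTimeout: 5 * time.Second,
	})
	if err != nil {
		fmt.Println("dial failed:", err)
		return
	}
	defer cli.Close()

	// A cheap liveness probe: read a key under the test's keyspace prefix.
	ctx, cancel := context.WithTimeout(context.Background(), time.Second)
	defer cancel()
	if _, err := cli.Get(ctx, "ef0d6029-a41b-479d-91c2-3beae949db49/"); err != nil {
		fmt.Println("get failed:", err)
	}
}
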
I0814 08:40:10.893298  110468 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 08:40:10.893331  110468 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0814 08:40:10.893342  110468 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0814 08:40:10.893350  110468 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0814 08:40:10.893384  110468 httplog.go:90] GET /healthz: (1.033076ms) 0 [Go-http-client/1.1 127.0.0.1:48296]
I0814 08:40:10.902539  110468 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 08:40:10.902570  110468 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0814 08:40:10.902581  110468 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0814 08:40:10.902589  110468 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0814 08:40:10.902620  110468 httplog.go:90] GET /healthz: (781.72µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48296]
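
Notice the cadence: the harness polls GET /healthz roughly every 100ms (the .x92/.x01 millisecond offsets above) and keeps going until every post-start hook reports ok. A sketch of such a readiness wait, with the base URL again an assumption:

package main

import (
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls until /healthz returns 200 or the deadline passes.
func waitForHealthz(base string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := http.Get(base + "/healthz")
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(100 * time.Millisecond) // matches the ~100ms cadence in the log
	}
	return fmt.Errorf("server never became healthy within %v", timeout)
}

func main() {
	fmt.Println(waitForHealthz("http://127.0.0.1:8080", 30*time.Second))
}
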
I0814 08:40:10.993361  110468 httplog.go:90] GET /apis/scheduling.k8s.io/v1beta1/priorityclasses/system-node-critical: (1.248867ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48296]
I0814 08:40:10.993599  110468 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.904189ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48292]
I0814 08:40:10.995088  110468 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.330471ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48310]
I0814 08:40:10.995092  110468 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (967.27µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48292]
I0814 08:40:10.995184  110468 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 08:40:10.995201  110468 healthz.go:169] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0814 08:40:10.995219  110468 healthz.go:169] healthz check poststarthook/ca-registration failed: not finished
I0814 08:40:10.995227  110468 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0814 08:40:10.995252  110468 httplog.go:90] GET /healthz: (1.801433ms) 0 [Go-http-client/1.1 127.0.0.1:48312]
I0814 08:40:10.996807  110468 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-edit: (969.833µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48292]
I0814 08:40:10.997277  110468 httplog.go:90] GET /api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication: (1.411083ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48310]
I0814 08:40:10.997408  110468 httplog.go:90] POST /apis/scheduling.k8s.io/v1beta1/priorityclasses: (2.221543ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48296]
I0814 08:40:10.997740  110468 storage_scheduling.go:139] created PriorityClass system-node-critical with value 2000001000
I0814 08:40:10.998781  110468 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/edit: (1.384626ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48292]
I0814 08:40:10.999408  110468 httplog.go:90] POST /api/v1/namespaces/kube-system/configmaps: (1.768188ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48310]
I0814 08:40:10.999707  110468 httplog.go:90] GET /apis/scheduling.k8s.io/v1beta1/priorityclasses/system-cluster-critical: (1.050144ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48296]
I0814 08:40:11.000179  110468 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-view: (977.05µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48292]
I0814 08:40:11.001406  110468 httplog.go:90] POST /apis/scheduling.k8s.io/v1beta1/priorityclasses: (1.132158ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48296]
I0814 08:40:11.001581  110468 storage_scheduling.go:139] created PriorityClass system-cluster-critical with value 2000000000
I0814 08:40:11.001607  110468 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
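The two GET 404 / POST 201 pairs above (system-node-critical, then system-cluster-critical) are the idempotent "ensure" pattern: look the object up, create it only on NotFound, so repeated apiserver startups converge on the same state. Here is a minimal sketch of that step, assuming the context-free client-go signatures of this era; ensurePriorityClass is a hypothetical helper, not the storage_scheduling.go code itself.

package bootstrap

import (
	schedulingv1beta1 "k8s.io/api/scheduling/v1beta1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// ensurePriorityClass mirrors the GET-then-POST pairs in the log: fetch
// the class, and create it only when the GET came back NotFound.
func ensurePriorityClass(cs kubernetes.Interface, name string, value int32) error {
	_, err := cs.SchedulingV1beta1().PriorityClasses().Get(name, metav1.GetOptions{})
	if err == nil {
		return nil // already exists; nothing to do
	}
	if !apierrors.IsNotFound(err) {
		return err
	}
	_, err = cs.SchedulingV1beta1().PriorityClasses().Create(&schedulingv1beta1.PriorityClass{
		ObjectMeta: metav1.ObjectMeta{Name: name},
		Value:      value,
	})
	if apierrors.IsAlreadyExists(err) {
		return nil // lost a create race with another apiserver; still converged
	}
	return err
}

With values 2000001000 and 2000000000, two calls of a helper like this would reproduce the "created PriorityClass ..." lines, after which the hook can report "all system priority classes are created successfully or already exist."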
I0814 08:40:11.002556  110468 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 08:40:11.002574  110468 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 08:40:11.002594  110468 httplog.go:90] GET /healthz: (887.383µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48310]
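Note what changed in this report compared to the earlier ones: scheduling/bootstrap-system-priority-classes and ca-registration now read [+] while rbac/bootstrap-roles still fails. Each post-start hook runs and completes independently, and healthz exposes one check per hook that fails with "not finished" until the hook returns. A sketch of that wiring follows, with postStartHook as an assumed shape rather than the real k8s.io/apiserver PostStartHookFunc plumbing.

package hooks

import (
	"errors"
	"net/http"
)

// postStartHook pairs a named startup task with a done signal.
type postStartHook struct {
	name string
	run  func() error
	done chan struct{} // closed when run returns; create with make(chan struct{})
}

// start launches the hook and returns a health check that reports
// "not finished" until the hook completes, matching the healthz.go
// messages in the log.
func start(h *postStartHook) func(*http.Request) error {
	go func() {
		defer close(h.done)
		_ = h.run() // a real server would record this error somewhere
	}()
	return func(*http.Request) error {
		select {
		case <-h.done:
			return nil
		default:
			return errors.New("not finished")
		}
	}
}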
I0814 08:40:11.003202  110468 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/view: (2.084158ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48292]
I0814 08:40:11.004539  110468 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-admin: (979.426µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48310]
I0814 08:40:11.005834  110468 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/admin: (986.279µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48310]
I0814 08:40:11.006855  110468 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:discovery: (699.774µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48310]
I0814 08:40:11.008008  110468 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/cluster-admin: (700.084µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48310]
I0814 08:40:11.009859  110468 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.282199ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48310]
I0814 08:40:11.009998  110468 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/cluster-admin
I0814 08:40:11.011475  110468 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:discovery: (1.12832ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48310]
I0814 08:40:11.013083  110468 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.184937ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48310]
I0814 08:40:11.013272  110468 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:discovery
I0814 08:40:11.014915  110468 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:basic-user: (1.471981ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48310]
I0814 08:40:11.018267  110468 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.920011ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48310]
I0814 08:40:11.018378  110468 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:basic-user
I0814 08:40:11.019611  110468 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:public-info-viewer: (1.069094ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48310]
I0814 08:40:11.021168  110468 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.26738ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48310]
I0814 08:40:11.021531  110468 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:public-info-viewer
I0814 08:40:11.022518  110468 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/admin: (773.688µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48310]
I0814 08:40:11.024287  110468 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.465858ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48310]
I0814 08:40:11.024943  110468 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/admin
I0814 08:40:11.026067  110468 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/edit: (762.84µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48310]
I0814 08:40:11.027485  110468 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (969.177µs) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48310]
I0814 08:40:11.027867  110468 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/edit
I0814 08:40:11.028947  110468 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/view: (845.993µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48310]
I0814 08:40:11.030346  110468 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.072814ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48310]
I0814 08:40:11.030533  110468 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/view
I0814 08:40:11.031660  110468 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-admin: (942.659µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48310]
I0814 08:40:11.035344  110468 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.761994ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48310]
I0814 08:40:11.035659  110468 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-admin
I0814 08:40:11.037002  110468 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-edit: (1.12826ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48310]
I0814 08:40:11.039189  110468 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.729416ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48310]
I0814 08:40:11.039446  110468 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-edit
I0814 08:40:11.040545  110468 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-view: (773.006µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48310]
I0814 08:40:11.042862  110468 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.838319ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48310]
I0814 08:40:11.043218  110468 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-view
I0814 08:40:11.044384  110468 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:heapster: (943.799µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48310]
I0814 08:40:11.046172  110468 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.307069ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48310]
I0814 08:40:11.046459  110468 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:heapster
I0814 08:40:11.047467  110468 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node: (823.338µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48310]
I0814 08:40:11.049882  110468 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.916001ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48310]
I0814 08:40:11.050263  110468 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:node
I0814 08:40:11.051688  110468 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-problem-detector: (1.184613ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48310]
I0814 08:40:11.053622  110468 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.383163ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48310]
I0814 08:40:11.053850  110468 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:node-problem-detector
I0814 08:40:11.054754  110468 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-proxier: (698.369µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48310]
I0814 08:40:11.056857  110468 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.713626ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48310]
I0814 08:40:11.057062  110468 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:node-proxier
I0814 08:40:11.058146  110468 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kubelet-api-admin: (875.151µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48310]
I0814 08:40:11.060040  110468 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.476857ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48310]
I0814 08:40:11.060300  110468 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kubelet-api-admin
I0814 08:40:11.061510  110468 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-bootstrapper: (965.77µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48310]
I0814 08:40:11.062993  110468 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.125994ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48310]
I0814 08:40:11.063415  110468 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:node-bootstrapper
I0814 08:40:11.064537  110468 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:auth-delegator: (838.4µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48310]
I0814 08:40:11.066112  110468 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.237453ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48310]
I0814 08:40:11.066335  110468 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:auth-delegator
I0814 08:40:11.068748  110468 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-aggregator: (1.8985ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48310]
I0814 08:40:11.070932  110468 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.551207ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48310]
I0814 08:40:11.071183  110468 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kube-aggregator
I0814 08:40:11.072205  110468 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-controller-manager: (783.934µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48310]
I0814 08:40:11.076832  110468 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (4.195619ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48310]
I0814 08:40:11.077230  110468 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kube-controller-manager
I0814 08:40:11.079185  110468 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-dns: (1.756478ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48310]
I0814 08:40:11.085233  110468 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.540329ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48310]
I0814 08:40:11.085766  110468 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kube-dns
I0814 08:40:11.086782  110468 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:persistent-volume-provisioner: (848.016µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48310]
I0814 08:40:11.089057  110468 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.833909ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48310]
I0814 08:40:11.089255  110468 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:persistent-volume-provisioner
I0814 08:40:11.090291  110468 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:csi-external-attacher: (840.448µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48310]
I0814 08:40:11.091820  110468 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.216552ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48310]
I0814 08:40:11.092166  110468 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:csi-external-attacher
I0814 08:40:11.092964  110468 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:certificates.k8s.io:certificatesigningrequests:nodeclient: (654.005µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48310]
I0814 08:40:11.093518  110468 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 08:40:11.093544  110468 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 08:40:11.093570  110468 httplog.go:90] GET /healthz: (1.206032ms) 0 [Go-http-client/1.1 127.0.0.1:48296]
I0814 08:40:11.095170  110468 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.199524ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48296]
I0814 08:40:11.095352  110468 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:certificates.k8s.io:certificatesigningrequests:nodeclient
I0814 08:40:11.096643  110468 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:certificates.k8s.io:certificatesigningrequests:selfnodeclient: (942.516µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48296]
I0814 08:40:11.098504  110468 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.363715ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48296]
I0814 08:40:11.098779  110468 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:certificates.k8s.io:certificatesigningrequests:selfnodeclient
I0814 08:40:11.099696  110468 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:volume-scheduler: (702.886µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48296]
I0814 08:40:11.101501  110468 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.416183ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48296]
I0814 08:40:11.101810  110468 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:volume-scheduler
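The long run of GET 404 / POST 201 / "created clusterrole" triples above is the rbac/bootstrap-roles hook reconciling each built-in cluster role in turn. The per-role step looks roughly like the sketch below; ensureClusterRole is a hypothetical helper assuming the context-free client-go signatures of this era, and the real reconciler also repairs the rules of roles that already exist rather than returning early.

package rbacbootstrap

import (
	rbacv1 "k8s.io/api/rbac/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// ensureClusterRole is the per-role step behind each GET/POST pair in
// the log: fetch the role, create it only when it is missing.
func ensureClusterRole(cs kubernetes.Interface, role *rbacv1.ClusterRole) error {
	_, err := cs.RbacV1().ClusterRoles().Get(role.Name, metav1.GetOptions{})
	if err == nil {
		return nil // present; reconciliation of existing rules elided here
	}
	if !apierrors.IsNotFound(err) {
		return err
	}
	_, err = cs.RbacV1().ClusterRoles().Create(role)
	if apierrors.IsAlreadyExists(err) {
		err = nil
	}
	return err
}

The storage_rbac.go:247 lines later in the log show the same step repeated for cluster role bindings once all the roles exist.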
I0814 08:40:11.102559  110468 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 08:40:11.102585  110468 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 08:40:11.102614  110468 httplog.go:90] GET /healthz: (1.029977ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48310]
I0814 08:40:11.103451  110468 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-scheduler: (1.269564ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48296]
I0814 08:40:11.106278  110468 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.105326ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48296]
I0814 08:40:11.106725  110468 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kube-scheduler
I0814 08:40:11.109132  110468 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:csi-external-provisioner: (1.925745ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48296]
I0814 08:40:11.111492  110468 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.922319ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48296]
I0814 08:40:11.111873  110468 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:csi-external-provisioner
I0814 08:40:11.112980  110468 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:attachdetach-controller: (784.481µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48296]
I0814 08:40:11.115526  110468 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.036219ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48296]
I0814 08:40:11.115847  110468 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:attachdetach-controller
I0814 08:40:11.117236  110468 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:clusterrole-aggregation-controller: (956.118µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48296]
I0814 08:40:11.119685  110468 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.705297ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48296]
I0814 08:40:11.119856  110468 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:clusterrole-aggregation-controller
I0814 08:40:11.120961  110468 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:cronjob-controller: (845.739µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48296]
I0814 08:40:11.122483  110468 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.062361ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48296]
I0814 08:40:11.122772  110468 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:cronjob-controller
I0814 08:40:11.123732  110468 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:daemon-set-controller: (750.439µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48296]
I0814 08:40:11.125346  110468 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.237447ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48296]
I0814 08:40:11.125626  110468 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:daemon-set-controller
I0814 08:40:11.126934  110468 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:deployment-controller: (690.189µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48296]
I0814 08:40:11.128798  110468 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.497114ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48296]
I0814 08:40:11.129161  110468 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:deployment-controller
I0814 08:40:11.130160  110468 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:disruption-controller: (765.577µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48296]
I0814 08:40:11.131771  110468 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.230441ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48296]
I0814 08:40:11.132153  110468 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:disruption-controller
I0814 08:40:11.133633  110468 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:endpoint-controller: (1.041159ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48296]
I0814 08:40:11.136527  110468 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.34473ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48296]
I0814 08:40:11.136743  110468 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:endpoint-controller
I0814 08:40:11.137836  110468 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:expand-controller: (855.485µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48296]
I0814 08:40:11.140045  110468 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.618107ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48296]
I0814 08:40:11.140271  110468 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:expand-controller
I0814 08:40:11.141191  110468 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:generic-garbage-collector: (735.471µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48296]
I0814 08:40:11.143509  110468 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.83597ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48296]
I0814 08:40:11.143682  110468 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:generic-garbage-collector
I0814 08:40:11.144841  110468 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:horizontal-pod-autoscaler: (834.64µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48296]
I0814 08:40:11.147181  110468 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.755346ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48296]
I0814 08:40:11.147511  110468 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:horizontal-pod-autoscaler
I0814 08:40:11.148934  110468 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:job-controller: (1.112382ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48296]
I0814 08:40:11.151036  110468 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.375605ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48296]
I0814 08:40:11.151210  110468 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:job-controller
I0814 08:40:11.152240  110468 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:namespace-controller: (840.541µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48296]
I0814 08:40:11.153897  110468 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.295395ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48296]
I0814 08:40:11.154267  110468 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:namespace-controller
I0814 08:40:11.155187  110468 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:node-controller: (714.703µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48296]
I0814 08:40:11.156972  110468 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.361618ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48296]
I0814 08:40:11.157252  110468 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:node-controller
I0814 08:40:11.158379  110468 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:persistent-volume-binder: (904.413µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48296]
I0814 08:40:11.160324  110468 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.494172ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48296]
I0814 08:40:11.160531  110468 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:persistent-volume-binder
I0814 08:40:11.161443  110468 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pod-garbage-collector: (736.638µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48296]
I0814 08:40:11.163302  110468 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.426959ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48296]
I0814 08:40:11.163465  110468 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:pod-garbage-collector
I0814 08:40:11.164596  110468 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:replicaset-controller: (784.851µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48296]
I0814 08:40:11.166426  110468 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.304984ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48296]
I0814 08:40:11.166667  110468 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:replicaset-controller
I0814 08:40:11.167768  110468 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:replication-controller: (840.86µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48296]
I0814 08:40:11.169488  110468 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.229153ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48296]
I0814 08:40:11.169798  110468 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:replication-controller
I0814 08:40:11.170875  110468 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:resourcequota-controller: (863.599µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48296]
I0814 08:40:11.172864  110468 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.557909ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48296]
I0814 08:40:11.173144  110468 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:resourcequota-controller
I0814 08:40:11.174118  110468 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:route-controller: (751.839µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48296]
I0814 08:40:11.176031  110468 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.494725ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48296]
I0814 08:40:11.176241  110468 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:route-controller
I0814 08:40:11.177428  110468 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:service-account-controller: (828.288µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48296]
I0814 08:40:11.179315  110468 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.474475ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48296]
I0814 08:40:11.179629  110468 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:service-account-controller
I0814 08:40:11.180859  110468 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:service-controller: (941.641µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48296]
I0814 08:40:11.182480  110468 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.119451ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48296]
I0814 08:40:11.182731  110468 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:service-controller
I0814 08:40:11.183925  110468 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:statefulset-controller: (802.8µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48296]
I0814 08:40:11.191671  110468 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (7.127849ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48296]
I0814 08:40:11.192072  110468 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:statefulset-controller
I0814 08:40:11.193756  110468 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 08:40:11.193778  110468 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 08:40:11.193808  110468 httplog.go:90] GET /healthz: (1.448154ms) 0 [Go-http-client/1.1 127.0.0.1:48296]
I0814 08:40:11.193906  110468 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:ttl-controller: (1.593809ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48310]
I0814 08:40:11.198224  110468 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (3.015881ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48310]
I0814 08:40:11.198500  110468 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:ttl-controller
I0814 08:40:11.199875  110468 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:certificate-controller: (1.087129ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48310]
I0814 08:40:11.202338  110468 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 08:40:11.202513  110468 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 08:40:11.202588  110468 httplog.go:90] GET /healthz: (1.050849ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48310]
I0814 08:40:11.213940  110468 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.595211ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48310]
I0814 08:40:11.214207  110468 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:certificate-controller
I0814 08:40:11.232714  110468 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pvc-protection-controller: (884.252µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48310]
I0814 08:40:11.254226  110468 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.290762ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48310]
I0814 08:40:11.254433  110468 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:pvc-protection-controller
I0814 08:40:11.273301  110468 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pv-protection-controller: (1.182274ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48310]
I0814 08:40:11.293235  110468 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 08:40:11.293259  110468 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 08:40:11.293293  110468 httplog.go:90] GET /healthz: (832.721µs) 0 [Go-http-client/1.1 127.0.0.1:48296]
I0814 08:40:11.297917  110468 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (5.975567ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48310]
I0814 08:40:11.298153  110468 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:pv-protection-controller
I0814 08:40:11.303153  110468 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 08:40:11.303177  110468 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 08:40:11.303206  110468 httplog.go:90] GET /healthz: (914.928µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48310]
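By this point the probes settle into a steady rhythm: two clients (the test's Go-http-client and scheduler.test) re-poll GET /healthz roughly every 100ms, and each attempt is logged with status 0 until the last hook finishes. A sketch of such a wait loop using only the standard library; the address and timeout are placeholders.

package main

import (
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthy polls /healthz on a fixed cadence until it returns 200
// or the deadline passes, mirroring the repeated probes in the log.
func waitForHealthy(base string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := http.Get(base + "/healthz")
		if err == nil {
			io.Copy(io.Discard, resp.Body) // drain the [+]/[-] report
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(100 * time.Millisecond)
	}
	return fmt.Errorf("server never became healthy within %v", timeout)
}

func main() {
	if err := waitForHealthy("http://127.0.0.1:8080", 30*time.Second); err != nil {
		fmt.Println(err)
	}
}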
I0814 08:40:11.313082  110468 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/cluster-admin: (997.83µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48310]
I0814 08:40:11.333444  110468 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.575399ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48310]
I0814 08:40:11.333688  110468 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/cluster-admin
I0814 08:40:11.353510  110468 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:discovery: (1.713729ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48310]
I0814 08:40:11.373748  110468 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.722744ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48310]
I0814 08:40:11.373961  110468 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:discovery
I0814 08:40:11.393450  110468 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:basic-user: (1.590883ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48310]
I0814 08:40:11.393784  110468 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 08:40:11.393807  110468 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 08:40:11.393864  110468 httplog.go:90] GET /healthz: (1.512673ms) 0 [Go-http-client/1.1 127.0.0.1:48296]
I0814 08:40:11.402413  110468 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 08:40:11.402451  110468 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 08:40:11.402485  110468 httplog.go:90] GET /healthz: (866.005µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48296]
I0814 08:40:11.413810  110468 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.947766ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48296]
I0814 08:40:11.414041  110468 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:basic-user
I0814 08:40:11.433113  110468 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:public-info-viewer: (964.719µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48296]
I0814 08:40:11.454277  110468 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.446747ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48296]
I0814 08:40:11.454534  110468 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:public-info-viewer
I0814 08:40:11.473090  110468 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:node-proxier: (1.215711ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48296]
I0814 08:40:11.493777  110468 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.907797ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48296]
I0814 08:40:11.493976  110468 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:node-proxier
I0814 08:40:11.494164  110468 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 08:40:11.494183  110468 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 08:40:11.494224  110468 httplog.go:90] GET /healthz: (1.900323ms) 0 [Go-http-client/1.1 127.0.0.1:48310]
I0814 08:40:11.502349  110468 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 08:40:11.502388  110468 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 08:40:11.502425  110468 httplog.go:90] GET /healthz: (840.063µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48310]
I0814 08:40:11.513103  110468 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-controller-manager: (1.291497ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48310]
I0814 08:40:11.533497  110468 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.685118ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48310]
I0814 08:40:11.533725  110468 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-controller-manager
I0814 08:40:11.552874  110468 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-dns: (1.018184ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48310]
I0814 08:40:11.573459  110468 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.635134ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48310]
I0814 08:40:11.573633  110468 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-dns
I0814 08:40:11.592839  110468 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-scheduler: (1.004655ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48310]
I0814 08:40:11.592901  110468 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 08:40:11.592923  110468 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 08:40:11.592970  110468 httplog.go:90] GET /healthz: (666.063µs) 0 [Go-http-client/1.1 127.0.0.1:48296]
I0814 08:40:11.602289  110468 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 08:40:11.602318  110468 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 08:40:11.602360  110468 httplog.go:90] GET /healthz: (848.493µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48296]
I0814 08:40:11.613189  110468 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.426108ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48296]
I0814 08:40:11.613354  110468 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-scheduler
I0814 08:40:11.633001  110468 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:volume-scheduler: (1.050558ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48296]
I0814 08:40:11.653132  110468 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.317377ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48296]
I0814 08:40:11.653309  110468 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:volume-scheduler
I0814 08:40:11.672815  110468 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:node: (882.002µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48296]
I0814 08:40:11.702713  110468 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 08:40:11.702741  110468 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 08:40:11.702774  110468 httplog.go:90] GET /healthz: (10.436799ms) 0 [Go-http-client/1.1 127.0.0.1:48310]
I0814 08:40:11.703351  110468 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 08:40:11.703375  110468 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 08:40:11.703404  110468 httplog.go:90] GET /healthz: (1.264834ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48490]
I0814 08:40:11.703623  110468 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (11.823612ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48296]
I0814 08:40:11.704769  110468 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:node
I0814 08:40:11.712620  110468 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:attachdetach-controller: (841.916µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48490]
I0814 08:40:11.733356  110468 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.606804ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48490]
I0814 08:40:11.733580  110468 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:attachdetach-controller
I0814 08:40:11.752747  110468 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:clusterrole-aggregation-controller: (855.164µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48490]
I0814 08:40:11.773370  110468 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.552638ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48490]
I0814 08:40:11.773632  110468 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:clusterrole-aggregation-controller
I0814 08:40:11.792672  110468 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:cronjob-controller: (868.106µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48490]
I0814 08:40:11.794178  110468 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 08:40:11.794208  110468 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 08:40:11.794238  110468 httplog.go:90] GET /healthz: (1.33578ms) 0 [Go-http-client/1.1 127.0.0.1:48310]
I0814 08:40:11.802302  110468 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 08:40:11.802438  110468 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 08:40:11.802699  110468 httplog.go:90] GET /healthz: (1.101723ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48310]
I0814 08:40:11.813900  110468 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.113999ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48310]
I0814 08:40:11.814223  110468 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:cronjob-controller
I0814 08:40:11.832850  110468 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:daemon-set-controller: (1.005425ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48310]
I0814 08:40:11.853105  110468 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.294263ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48310]
I0814 08:40:11.853426  110468 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:daemon-set-controller
I0814 08:40:11.872692  110468 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:deployment-controller: (901.854µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48310]
I0814 08:40:11.893435  110468 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.557296ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48310]
I0814 08:40:11.893694  110468 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 08:40:11.893730  110468 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 08:40:11.893749  110468 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:deployment-controller
I0814 08:40:11.893762  110468 httplog.go:90] GET /healthz: (1.445514ms) 0 [Go-http-client/1.1 127.0.0.1:48490]
I0814 08:40:11.902368  110468 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 08:40:11.902403  110468 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 08:40:11.902438  110468 httplog.go:90] GET /healthz: (841.97µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48490]
I0814 08:40:11.912813  110468 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:disruption-controller: (857.141µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48490]
I0814 08:40:11.933561  110468 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.753986ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48490]
I0814 08:40:11.933742  110468 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:disruption-controller
I0814 08:40:11.952989  110468 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:endpoint-controller: (1.123733ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48490]
I0814 08:40:11.973496  110468 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.647245ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48490]
I0814 08:40:11.973808  110468 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:endpoint-controller
I0814 08:40:11.993393  110468 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:expand-controller: (1.508456ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48490]
I0814 08:40:11.993591  110468 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 08:40:11.993618  110468 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 08:40:11.993651  110468 httplog.go:90] GET /healthz: (1.352816ms) 0 [Go-http-client/1.1 127.0.0.1:48310]
I0814 08:40:12.002216  110468 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 08:40:12.002252  110468 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 08:40:12.002333  110468 httplog.go:90] GET /healthz: (816.731µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48310]
I0814 08:40:12.013521  110468 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.707533ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48310]
I0814 08:40:12.013789  110468 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:expand-controller
I0814 08:40:12.035967  110468 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:generic-garbage-collector: (1.10785ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48310]
I0814 08:40:12.053504  110468 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.592623ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48310]
I0814 08:40:12.053858  110468 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:generic-garbage-collector
I0814 08:40:12.072804  110468 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:horizontal-pod-autoscaler: (899.59µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48310]
I0814 08:40:12.093095  110468 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 08:40:12.093158  110468 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 08:40:12.093192  110468 httplog.go:90] GET /healthz: (874.656µs) 0 [Go-http-client/1.1 127.0.0.1:48490]
I0814 08:40:12.093585  110468 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.757904ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48310]
I0814 08:40:12.093825  110468 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:horizontal-pod-autoscaler
I0814 08:40:12.102328  110468 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 08:40:12.102363  110468 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 08:40:12.102391  110468 httplog.go:90] GET /healthz: (696.814µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48310]
I0814 08:40:12.112762  110468 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:job-controller: (992.822µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48310]
I0814 08:40:12.134095  110468 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.35178ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48310]
I0814 08:40:12.134269  110468 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:job-controller
I0814 08:40:12.152823  110468 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:namespace-controller: (1.030694ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48310]
I0814 08:40:12.173566  110468 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.68971ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48310]
I0814 08:40:12.173830  110468 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:namespace-controller
I0814 08:40:12.192681  110468 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:node-controller: (822.139µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48310]
I0814 08:40:12.193266  110468 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 08:40:12.193486  110468 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 08:40:12.193855  110468 httplog.go:90] GET /healthz: (1.579687ms) 0 [Go-http-client/1.1 127.0.0.1:48490]
I0814 08:40:12.202314  110468 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 08:40:12.202357  110468 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 08:40:12.202402  110468 httplog.go:90] GET /healthz: (870.467µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48490]
I0814 08:40:12.213314  110468 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.508944ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48490]
I0814 08:40:12.213506  110468 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:node-controller
I0814 08:40:12.232870  110468 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:persistent-volume-binder: (1.027232ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48490]
I0814 08:40:12.253546  110468 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.719288ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48490]
I0814 08:40:12.253761  110468 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:persistent-volume-binder
I0814 08:40:12.272868  110468 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pod-garbage-collector: (1.057681ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48490]
I0814 08:40:12.292940  110468 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 08:40:12.292963  110468 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 08:40:12.293085  110468 httplog.go:90] GET /healthz: (722.275µs) 0 [Go-http-client/1.1 127.0.0.1:48310]
I0814 08:40:12.294104  110468 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.093015ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48490]
I0814 08:40:12.294425  110468 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pod-garbage-collector
I0814 08:40:12.302245  110468 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 08:40:12.302398  110468 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 08:40:12.302840  110468 httplog.go:90] GET /healthz: (1.299721ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48490]
I0814 08:40:12.312706  110468 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:replicaset-controller: (872.628µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48490]
I0814 08:40:12.333860  110468 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.953959ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48490]
I0814 08:40:12.334136  110468 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:replicaset-controller
I0814 08:40:12.352872  110468 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:replication-controller: (1.056643ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48490]
I0814 08:40:12.376796  110468 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.427742ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48490]
I0814 08:40:12.377068  110468 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:replication-controller
I0814 08:40:12.393003  110468 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 08:40:12.393285  110468 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 08:40:12.393533  110468 httplog.go:90] GET /healthz: (1.221996ms) 0 [Go-http-client/1.1 127.0.0.1:48310]
I0814 08:40:12.393393  110468 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:resourcequota-controller: (1.175441ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48490]
I0814 08:40:12.402575  110468 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 08:40:12.402608  110468 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 08:40:12.402645  110468 httplog.go:90] GET /healthz: (938.506µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48490]
I0814 08:40:12.413529  110468 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.664952ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48490]
I0814 08:40:12.413747  110468 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:resourcequota-controller
I0814 08:40:12.433063  110468 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:route-controller: (1.127502ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48490]
I0814 08:40:12.453876  110468 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.949225ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48490]
I0814 08:40:12.454283  110468 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:route-controller
I0814 08:40:12.473939  110468 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:service-account-controller: (1.281099ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48490]
I0814 08:40:12.493628  110468 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 08:40:12.493664  110468 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 08:40:12.493726  110468 httplog.go:90] GET /healthz: (1.433659ms) 0 [Go-http-client/1.1 127.0.0.1:48310]
I0814 08:40:12.493954  110468 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.022469ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48490]
I0814 08:40:12.494240  110468 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:service-account-controller
I0814 08:40:12.502339  110468 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 08:40:12.502366  110468 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 08:40:12.502403  110468 httplog.go:90] GET /healthz: (783.323µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48490]
I0814 08:40:12.513260  110468 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:service-controller: (1.420379ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48490]
I0814 08:40:12.533565  110468 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.618442ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48490]
I0814 08:40:12.533838  110468 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:service-controller
I0814 08:40:12.553052  110468 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:statefulset-controller: (1.228189ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48490]
I0814 08:40:12.574417  110468 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.551903ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48490]
I0814 08:40:12.574634  110468 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:statefulset-controller
I0814 08:40:12.593239  110468 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 08:40:12.593275  110468 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 08:40:12.593339  110468 httplog.go:90] GET /healthz: (925.229µs) 0 [Go-http-client/1.1 127.0.0.1:48490]
I0814 08:40:12.595116  110468 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:ttl-controller: (801.139µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48490]
I0814 08:40:12.602519  110468 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 08:40:12.602555  110468 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 08:40:12.602601  110468 httplog.go:90] GET /healthz: (942.826µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48490]
I0814 08:40:12.613622  110468 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.81273ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48490]
I0814 08:40:12.613865  110468 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:ttl-controller
I0814 08:40:12.632947  110468 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:certificate-controller: (1.093129ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48490]
I0814 08:40:12.654008  110468 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.106251ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48490]
I0814 08:40:12.654282  110468 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:certificate-controller
I0814 08:40:12.672847  110468 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pvc-protection-controller: (990.543µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48490]
I0814 08:40:12.693449  110468 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 08:40:12.693504  110468 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 08:40:12.693552  110468 httplog.go:90] GET /healthz: (1.193424ms) 0 [Go-http-client/1.1 127.0.0.1:48310]
I0814 08:40:12.693902  110468 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.042189ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48490]
I0814 08:40:12.694132  110468 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pvc-protection-controller
I0814 08:40:12.702490  110468 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 08:40:12.702541  110468 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 08:40:12.702573  110468 httplog.go:90] GET /healthz: (840.686µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48490]
I0814 08:40:12.712712  110468 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pv-protection-controller: (911.876µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48490]
I0814 08:40:12.733533  110468 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.723573ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48490]
I0814 08:40:12.733910  110468 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pv-protection-controller
I0814 08:40:12.752984  110468 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/extension-apiserver-authentication-reader: (1.067697ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48490]
I0814 08:40:12.755332  110468 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.424969ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48490]
I0814 08:40:12.773666  110468 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (1.76782ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48490]
I0814 08:40:12.773966  110468 storage_rbac.go:278] created role.rbac.authorization.k8s.io/extension-apiserver-authentication-reader in kube-system
I0814 08:40:12.792717  110468 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:bootstrap-signer: (784.619µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48490]
I0814 08:40:12.793147  110468 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 08:40:12.793167  110468 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 08:40:12.793209  110468 httplog.go:90] GET /healthz: (962.854µs) 0 [Go-http-client/1.1 127.0.0.1:48310]
I0814 08:40:12.794419  110468 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.26218ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48490]
I0814 08:40:12.802403  110468 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 08:40:12.802439  110468 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 08:40:12.802473  110468 httplog.go:90] GET /healthz: (850.471µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48490]
I0814 08:40:12.813277  110468 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (1.45263ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48490]
I0814 08:40:12.813506  110468 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-system
I0814 08:40:12.832800  110468 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:cloud-provider: (982.613µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48490]
I0814 08:40:12.834572  110468 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.353466ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48490]
I0814 08:40:12.853323  110468 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (1.441344ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48490]
I0814 08:40:12.853549  110468 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system:controller:cloud-provider in kube-system
I0814 08:40:12.873000  110468 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:token-cleaner: (1.111911ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48490]
I0814 08:40:12.874538  110468 httplog.go:90] GET /api/v1/namespaces/kube-system: (934.474µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48490]
I0814 08:40:12.892990  110468 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 08:40:12.893071  110468 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 08:40:12.893099  110468 httplog.go:90] GET /healthz: (845.927µs) 0 [Go-http-client/1.1 127.0.0.1:48310]
I0814 08:40:12.893700  110468 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (1.870238ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48490]
I0814 08:40:12.893953  110468 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system:controller:token-cleaner in kube-system
I0814 08:40:12.902323  110468 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 08:40:12.902357  110468 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 08:40:12.902387  110468 httplog.go:90] GET /healthz: (851.099µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48490]
I0814 08:40:12.912918  110468 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system::leader-locking-kube-controller-manager: (1.077723ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48490]
I0814 08:40:12.914579  110468 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.231881ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48490]
I0814 08:40:12.934114  110468 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.213228ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48490]
I0814 08:40:12.934417  110468 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system::leader-locking-kube-controller-manager in kube-system
I0814 08:40:12.953233  110468 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system::leader-locking-kube-scheduler: (1.170328ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48490]
I0814 08:40:12.954952  110468 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.174664ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48490]
I0814 08:40:12.974237  110468 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.032577ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48490]
I0814 08:40:12.974499  110468 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system::leader-locking-kube-scheduler in kube-system
I0814 08:40:12.992984  110468 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/roles/system:controller:bootstrap-signer: (1.105588ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48490]
I0814 08:40:12.993352  110468 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 08:40:12.993376  110468 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 08:40:12.993408  110468 httplog.go:90] GET /healthz: (779.309µs) 0 [Go-http-client/1.1 127.0.0.1:48310]
I0814 08:40:12.994535  110468 httplog.go:90] GET /api/v1/namespaces/kube-public: (1.149513ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48490]
I0814 08:40:13.002457  110468 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 08:40:13.002490  110468 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 08:40:13.002535  110468 httplog.go:90] GET /healthz: (787.784µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48490]
I0814 08:40:13.013334  110468 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/roles: (1.485629ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48490]
I0814 08:40:13.013573  110468 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-public
I0814 08:40:13.032920  110468 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/rolebindings/system:controller:bootstrap-signer: (1.025299ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48490]
I0814 08:40:13.034524  110468 httplog.go:90] GET /api/v1/namespaces/kube-public: (1.21117ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48490]
I0814 08:40:13.053429  110468 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/rolebindings: (1.615341ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48490]
I0814 08:40:13.053676  110468 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-public
I0814 08:40:13.072911  110468 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::extension-apiserver-authentication-reader: (1.010037ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48490]
I0814 08:40:13.074586  110468 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.167296ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48490]
I0814 08:40:13.093499  110468 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 08:40:13.093532  110468 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 08:40:13.093580  110468 httplog.go:90] GET /healthz: (1.302533ms) 0 [Go-http-client/1.1 127.0.0.1:48310]
I0814 08:40:13.094481  110468 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.583566ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48490]
I0814 08:40:13.094675  110468 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system::extension-apiserver-authentication-reader in kube-system
I0814 08:40:13.102425  110468 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 08:40:13.102455  110468 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 08:40:13.102489  110468 httplog.go:90] GET /healthz: (872.631µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48490]
I0814 08:40:13.112913  110468 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::leader-locking-kube-controller-manager: (1.067171ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48490]
I0814 08:40:13.114533  110468 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.113015ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48490]
I0814 08:40:13.134176  110468 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.226159ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48490]
I0814 08:40:13.134413  110468 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system::leader-locking-kube-controller-manager in kube-system
I0814 08:40:13.152816  110468 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::leader-locking-kube-scheduler: (972.071µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48490]
I0814 08:40:13.154483  110468 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.103387ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48490]
I0814 08:40:13.173752  110468 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (1.857771ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48490]
I0814 08:40:13.174009  110468 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system::leader-locking-kube-scheduler in kube-system
I0814 08:40:13.195526  110468 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 08:40:13.195559  110468 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 08:40:13.195595  110468 httplog.go:90] GET /healthz: (3.278737ms) 0 [Go-http-client/1.1 127.0.0.1:48490]
I0814 08:40:13.195863  110468 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:bootstrap-signer: (1.159885ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48310]
I0814 08:40:13.197427  110468 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.166378ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48310]
I0814 08:40:13.202572  110468 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 08:40:13.202598  110468 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 08:40:13.202633  110468 httplog.go:90] GET /healthz: (1.115662ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48310]
I0814 08:40:13.213712  110468 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (1.865157ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48310]
I0814 08:40:13.213944  110468 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-system
I0814 08:40:13.233324  110468 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:cloud-provider: (1.440558ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48310]
I0814 08:40:13.235193  110468 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.163041ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48310]
I0814 08:40:13.253495  110468 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (1.664887ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48310]
I0814 08:40:13.253770  110468 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system:controller:cloud-provider in kube-system
I0814 08:40:13.272933  110468 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:token-cleaner: (1.080408ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48310]
I0814 08:40:13.274671  110468 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.242737ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48310]
I0814 08:40:13.293818  110468 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.012157ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48310]
I0814 08:40:13.294004  110468 healthz.go:169] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0814 08:40:13.294034  110468 healthz.go:183] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0814 08:40:13.294062  110468 httplog.go:90] GET /healthz: (1.821913ms) 0 [Go-http-client/1.1 127.0.0.1:48490]
I0814 08:40:13.294389  110468 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system:controller:token-cleaner in kube-system
I0814 08:40:13.302431  110468 httplog.go:90] GET /healthz: (727.979µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48310]
I0814 08:40:13.303783  110468 httplog.go:90] GET /api/v1/namespaces/default: (1.126854ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48310]
I0814 08:40:13.305781  110468 httplog.go:90] POST /api/v1/namespaces: (1.516616ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48310]
I0814 08:40:13.307490  110468 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (1.347928ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48310]
I0814 08:40:13.311513  110468 httplog.go:90] POST /api/v1/namespaces/default/services: (3.638047ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48310]
I0814 08:40:13.313330  110468 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.360381ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48310]
I0814 08:40:13.315171  110468 httplog.go:90] POST /api/v1/namespaces/default/endpoints: (1.392176ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48310]
I0814 08:40:13.396209  110468 httplog.go:90] GET /healthz: (910.72µs) 200 [Go-http-client/1.1 127.0.0.1:48310]
W0814 08:40:13.396769  110468 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0814 08:40:13.396781  110468 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0814 08:40:13.396800  110468 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0814 08:40:13.396812  110468 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0814 08:40:13.396820  110468 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0814 08:40:13.396825  110468 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0814 08:40:13.396833  110468 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0814 08:40:13.396841  110468 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0814 08:40:13.396846  110468 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0814 08:40:13.396882  110468 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0814 08:40:13.396898  110468 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
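
The repeated warnings above come from client-go's cache mutation detector, which, as far as I know, test binaries switch on through the KUBE_CACHE_MUTATION_DETECTOR environment variable; it deliberately retains extra copies of cached objects to diff against, hence the "memory leakage" wording. A minimal sketch of the check as I understand it (the variable must be set in the process environment before the cache package initializes, e.g. by the test harness):

package main

import (
	"fmt"
	"os"
	"strconv"
)

func main() {
	// Assumption: this mirrors how client-go decides to enable the
	// detector. When truthy, informer caches keep deep copies and
	// periodically compare them, panicking if a cached object was
	// mutated in place.
	enabled, _ := strconv.ParseBool(os.Getenv("KUBE_CACHE_MUTATION_DETECTOR"))
	fmt.Println("mutation detector enabled:", enabled)
}
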
I0814 08:40:13.396917  110468 factory.go:294] Creating scheduler from algorithm provider 'DefaultProvider'
I0814 08:40:13.396924  110468 factory.go:382] Creating scheduler with fit predicates 'map[CheckNodeCondition:{} CheckNodeDiskPressure:{} CheckNodeMemoryPressure:{} CheckNodePIDPressure:{} CheckVolumeBinding:{} GeneralPredicates:{} MatchInterPodAffinity:{} MaxAzureDiskVolumeCount:{} MaxCSIVolumeCountPred:{} MaxEBSVolumeCount:{} MaxGCEPDVolumeCount:{} NoDiskConflict:{} NoVolumeZoneConflict:{} PodToleratesNodeTaints:{}]' and priority functions 'map[BalancedResourceAllocation:{} ImageLocalityPriority:{} InterPodAffinityPriority:{} LeastRequestedPriority:{} NodeAffinityPriority:{} NodePreferAvoidPodsPriority:{} SelectorSpreadPriority:{} TaintTolerationPriority:{}]'
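
The scheduler above is assembled from a named set of fit predicates (hard filters such as PodToleratesNodeTaints) and priority functions (soft scores). An illustrative sketch of the filtering half, using made-up types rather than the scheduler's real ones: a node is feasible only if every predicate passes.

package main

import "fmt"

type Pod struct{ Name string }
type Node struct{ Name string }

// FitPredicate stands in for entries like CheckVolumeBinding or
// GeneralPredicates in the map printed above.
type FitPredicate func(pod *Pod, node *Node) bool

// feasibleNodes keeps only the nodes on which every predicate passes.
func feasibleNodes(pod *Pod, nodes []*Node, predicates map[string]FitPredicate) []*Node {
	var out []*Node
	for _, n := range nodes {
		ok := true
		for _, pred := range predicates {
			if !pred(pod, n) {
				ok = false
				break
			}
		}
		if ok {
			out = append(out, n)
		}
	}
	return out
}

func main() {
	nodes := []*Node{{Name: "test-node-0"}}
	pod := &Pod{Name: "waiting-pod"}
	preds := map[string]FitPredicate{
		"AlwaysFits": func(*Pod, *Node) bool { return true }, // trivial stand-in
	}
	fmt.Println(len(feasibleNodes(pod, nodes, preds)), "feasible node(s)")
}
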
I0814 08:40:13.397492  110468 reflector.go:122] Starting reflector *v1.PersistentVolumeClaim (1s) from k8s.io/client-go/informers/factory.go:133
I0814 08:40:13.397516  110468 reflector.go:160] Listing and watching *v1.PersistentVolumeClaim from k8s.io/client-go/informers/factory.go:133
I0814 08:40:13.397769  110468 reflector.go:122] Starting reflector *v1.Service (1s) from k8s.io/client-go/informers/factory.go:133
I0814 08:40:13.397778  110468 reflector.go:160] Listing and watching *v1.Service from k8s.io/client-go/informers/factory.go:133
I0814 08:40:13.398000  110468 reflector.go:122] Starting reflector *v1.Pod (1s) from k8s.io/client-go/informers/factory.go:133
I0814 08:40:13.398010  110468 reflector.go:160] Listing and watching *v1.Pod from k8s.io/client-go/informers/factory.go:133
I0814 08:40:13.398411  110468 reflector.go:122] Starting reflector *v1.Node (1s) from k8s.io/client-go/informers/factory.go:133
I0814 08:40:13.398423  110468 reflector.go:160] Listing and watching *v1.Node from k8s.io/client-go/informers/factory.go:133
I0814 08:40:13.398755  110468 reflector.go:122] Starting reflector *v1.StatefulSet (1s) from k8s.io/client-go/informers/factory.go:133
I0814 08:40:13.398768  110468 reflector.go:160] Listing and watching *v1.StatefulSet from k8s.io/client-go/informers/factory.go:133
I0814 08:40:13.399127  110468 reflector.go:122] Starting reflector *v1beta1.PodDisruptionBudget (1s) from k8s.io/client-go/informers/factory.go:133
I0814 08:40:13.399142  110468 reflector.go:160] Listing and watching *v1beta1.PodDisruptionBudget from k8s.io/client-go/informers/factory.go:133
I0814 08:40:13.399448  110468 reflector.go:122] Starting reflector *v1.StorageClass (1s) from k8s.io/client-go/informers/factory.go:133
I0814 08:40:13.399455  110468 reflector.go:160] Listing and watching *v1.StorageClass from k8s.io/client-go/informers/factory.go:133
I0814 08:40:13.399682  110468 reflector.go:122] Starting reflector *v1beta1.CSINode (1s) from k8s.io/client-go/informers/factory.go:133
I0814 08:40:13.399691  110468 reflector.go:160] Listing and watching *v1beta1.CSINode from k8s.io/client-go/informers/factory.go:133
I0814 08:40:13.400830  110468 httplog.go:90] GET /api/v1/pods?limit=500&resourceVersion=0: (849.767µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48528]
I0814 08:40:13.400976  110468 httplog.go:90] GET /apis/storage.k8s.io/v1beta1/csinodes?limit=500&resourceVersion=0: (490.692µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48538]
I0814 08:40:13.401257  110468 httplog.go:90] GET /apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0: (415.801µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48536]
I0814 08:40:13.401380  110468 httplog.go:90] GET /api/v1/nodes?limit=500&resourceVersion=0: (447.142µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48530]
I0814 08:40:13.401622  110468 get.go:250] Starting watch for /apis/storage.k8s.io/v1beta1/csinodes, rv=28905 labels= fields= timeout=6m12s
I0814 08:40:13.402790  110468 reflector.go:122] Starting reflector *v1.ReplicaSet (1s) from k8s.io/client-go/informers/factory.go:133
I0814 08:40:13.402818  110468 reflector.go:160] Listing and watching *v1.ReplicaSet from k8s.io/client-go/informers/factory.go:133
I0814 08:40:13.403344  110468 get.go:250] Starting watch for /api/v1/pods, rv=28903 labels= fields= timeout=6m22s
I0814 08:40:13.403532  110468 httplog.go:90] GET /apis/apps/v1/statefulsets?limit=500&resourceVersion=0: (2.066709ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48532]
I0814 08:40:13.403655  110468 httplog.go:90] GET /api/v1/persistentvolumeclaims?limit=500&resourceVersion=0: (3.452463ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48310]
I0814 08:40:13.403693  110468 httplog.go:90] GET /apis/apps/v1/replicasets?limit=500&resourceVersion=0: (383.91µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48540]
I0814 08:40:13.403829  110468 httplog.go:90] GET /apis/policy/v1beta1/poddisruptionbudgets?limit=500&resourceVersion=0: (336.125µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48534]
I0814 08:40:13.404129  110468 get.go:250] Starting watch for /api/v1/nodes, rv=28903 labels= fields= timeout=5m55s
I0814 08:40:13.404235  110468 httplog.go:90] GET /api/v1/services?limit=500&resourceVersion=0: (4.269334ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48490]
I0814 08:40:13.404387  110468 get.go:250] Starting watch for /apis/apps/v1/replicasets, rv=28905 labels= fields= timeout=7m24s
I0814 08:40:13.404443  110468 get.go:250] Starting watch for /apis/apps/v1/statefulsets, rv=28905 labels= fields= timeout=9m44s
I0814 08:40:13.404530  110468 get.go:250] Starting watch for /api/v1/persistentvolumeclaims, rv=28903 labels= fields= timeout=8m16s
I0814 08:40:13.404947  110468 get.go:250] Starting watch for /apis/policy/v1beta1/poddisruptionbudgets, rv=28905 labels= fields= timeout=8m43s
I0814 08:40:13.404973  110468 reflector.go:122] Starting reflector *v1.PersistentVolume (1s) from k8s.io/client-go/informers/factory.go:133
I0814 08:40:13.405003  110468 reflector.go:160] Listing and watching *v1.PersistentVolume from k8s.io/client-go/informers/factory.go:133
I0814 08:40:13.405161  110468 get.go:250] Starting watch for /api/v1/services, rv=29348 labels= fields= timeout=6m57s
I0814 08:40:13.405789  110468 httplog.go:90] GET /api/v1/persistentvolumes?limit=500&resourceVersion=0: (415.054µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48542]
I0814 08:40:13.405962  110468 reflector.go:122] Starting reflector *v1.ReplicationController (1s) from k8s.io/client-go/informers/factory.go:133
I0814 08:40:13.405975  110468 reflector.go:160] Listing and watching *v1.ReplicationController from k8s.io/client-go/informers/factory.go:133
I0814 08:40:13.406549  110468 httplog.go:90] GET /api/v1/replicationcontrollers?limit=500&resourceVersion=0: (372.062µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48542]
I0814 08:40:13.406772  110468 get.go:250] Starting watch for /api/v1/persistentvolumes, rv=28903 labels= fields= timeout=8m43s
I0814 08:40:13.407099  110468 get.go:250] Starting watch for /api/v1/replicationcontrollers, rv=28903 labels= fields= timeout=5m45s
I0814 08:40:13.407418  110468 get.go:250] Starting watch for /apis/storage.k8s.io/v1/storageclasses, rv=28905 labels= fields= timeout=5m50s
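
The "Starting reflector ... (1s) from k8s.io/client-go/informers/factory.go:133" lines correspond to a shared informer factory built with a one-second resync period, which is also what produces the per-second "forcing resync" lines further down. A minimal sketch of the equivalent client-go setup, with the fake in-memory clientset standing in for a real cluster:

package main

import (
	"time"

	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/kubernetes/fake"
	"k8s.io/client-go/tools/cache"
)

func startInformers(cs kubernetes.Interface, stopCh <-chan struct{}) {
	// 1s matches the "(1s)" resync period in the reflector log lines.
	factory := informers.NewSharedInformerFactory(cs, 1*time.Second)
	podInformer := factory.Core().V1().Pods().Informer()
	nodeInformer := factory.Core().V1().Nodes().Informer()

	// Each informer runs a reflector that LISTs, then WATCHes, as in the
	// "Listing and watching" / "Starting watch for" lines above.
	factory.Start(stopCh)

	// Block until the initial LIST has populated every cache, matching
	// the "caches populated" lines that follow in the log.
	cache.WaitForCacheSync(stopCh, podInformer.HasSynced, nodeInformer.HasSynced)
}

func main() {
	cs := fake.NewSimpleClientset()
	stopCh := make(chan struct{})
	defer close(stopCh)
	startInformers(cs, stopCh)
}
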
I0814 08:40:13.497500  110468 shared_informer.go:211] caches populated
I0814 08:40:13.597777  110468 shared_informer.go:211] caches populated
I0814 08:40:13.697976  110468 shared_informer.go:211] caches populated
I0814 08:40:13.798199  110468 shared_informer.go:211] caches populated
I0814 08:40:13.898438  110468 shared_informer.go:211] caches populated
I0814 08:40:13.998656  110468 shared_informer.go:211] caches populated
I0814 08:40:14.098883  110468 shared_informer.go:211] caches populated
I0814 08:40:14.199090  110468 shared_informer.go:211] caches populated
I0814 08:40:14.299311  110468 shared_informer.go:211] caches populated
I0814 08:40:14.399487  110468 shared_informer.go:211] caches populated
I0814 08:40:14.401542  110468 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 08:40:14.401691  110468 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 08:40:14.401872  110468 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 08:40:14.403115  110468 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 08:40:14.404269  110468 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 08:40:14.404973  110468 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 08:40:14.406396  110468 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 08:40:14.499710  110468 shared_informer.go:211] caches populated
I0814 08:40:14.599909  110468 shared_informer.go:211] caches populated
I0814 08:40:14.602488  110468 httplog.go:90] POST /api/v1/nodes: (1.9258ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48670]
I0814 08:40:14.603303  110468 node_tree.go:93] Added node "test-node-0" in group "" to NodeTree
I0814 08:40:14.605533  110468 httplog.go:90] POST /api/v1/namespaces/preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/pods: (1.876433ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48670]
I0814 08:40:14.605876  110468 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/waiting-pod
I0814 08:40:14.605897  110468 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/waiting-pod
I0814 08:40:14.606073  110468 scheduler_binder.go:256] AssumePodVolumes for pod "preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/waiting-pod", node "test-node-0"
I0814 08:40:14.606100  110468 scheduler_binder.go:266] AssumePodVolumes for pod "preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/waiting-pod", node "test-node-0": all PVCs bound and nothing to do
I0814 08:40:14.606144  110468 framework.go:558] waiting for 30s for pod "waiting-pod" at permit
I0814 08:40:14.613144  110468 factory.go:615] Attempting to bind signalling-pod to test-node-0
I0814 08:40:14.613470  110468 factory.go:615] Attempting to bind waiting-pod to test-node-0
I0814 08:40:14.613570  110468 scheduler.go:447] Failed to bind pod: permit-plugindaf70ffb-42ba-4dde-b941-52915d4a901c/signalling-pod
E0814 08:40:14.613595  110468 scheduler.go:449] scheduler cache ForgetPod failed: pod c8b461dc-8d2a-4223-ae96-5ddbb6695064 wasn't assumed so cannot be forgotten
E0814 08:40:14.613610  110468 scheduler.go:605] error binding pod: Post http://127.0.0.1:41169/api/v1/namespaces/permit-plugindaf70ffb-42ba-4dde-b941-52915d4a901c/pods/signalling-pod/binding: dial tcp 127.0.0.1:41169: connect: connection refused
E0814 08:40:14.613632  110468 factory.go:566] Error scheduling permit-plugindaf70ffb-42ba-4dde-b941-52915d4a901c/signalling-pod: Post http://127.0.0.1:41169/api/v1/namespaces/permit-plugindaf70ffb-42ba-4dde-b941-52915d4a901c/pods/signalling-pod/binding: dial tcp 127.0.0.1:41169: connect: connection refused; retrying
I0814 08:40:14.613666  110468 factory.go:624] Updating pod condition for permit-plugindaf70ffb-42ba-4dde-b941-52915d4a901c/signalling-pod to (PodScheduled==False, Reason=SchedulerError)
E0814 08:40:14.613973  110468 scheduler.go:280] Error updating the condition of the pod permit-plugindaf70ffb-42ba-4dde-b941-52915d4a901c/signalling-pod: Put http://127.0.0.1:41169/api/v1/namespaces/permit-plugindaf70ffb-42ba-4dde-b941-52915d4a901c/pods/signalling-pod/status: dial tcp 127.0.0.1:41169: connect: connection refused
E0814 08:40:14.614136  110468 factory.go:599] Error getting pod permit-plugindaf70ffb-42ba-4dde-b941-52915d4a901c/signalling-pod for retry: Get http://127.0.0.1:41169/api/v1/namespaces/permit-plugindaf70ffb-42ba-4dde-b941-52915d4a901c/pods/signalling-pod: dial tcp 127.0.0.1:41169: connect: connection refused; retrying...
E0814 08:40:14.614354  110468 event_broadcaster.go:242] Unable to write event: 'Post http://127.0.0.1:41169/apis/events.k8s.io/v1beta1/namespaces/permit-plugindaf70ffb-42ba-4dde-b941-52915d4a901c/events: dial tcp 127.0.0.1:41169: connect: connection refused' (may retry after sleeping)
I0814 08:40:14.615493  110468 httplog.go:90] POST /api/v1/namespaces/preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/pods/waiting-pod/binding: (1.824028ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48670]
I0814 08:40:14.615839  110468 scheduler.go:614] pod preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/waiting-pod is bound successfully on node "test-node-0", 1 nodes evaluated, 1 nodes were found feasible. Bound node resource: "Capacity: CPU<500m>|Memory<500>|Pods<32>|StorageEphemeral<0>; Allocatable: CPU<500m>|Memory<500>|Pods<32>|StorageEphemeral<0>.".
I0814 08:40:14.617648  110468 httplog.go:90] POST /apis/events.k8s.io/v1beta1/namespaces/preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/events: (1.41457ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48670]
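
Binding is a POST of a Binding object to the pod's binding subresource; that is what succeeds with a 201 for waiting-pod above and keeps failing with "connection refused" for signalling-pod, whose test apiserver has already been torn down. A minimal sketch of that call, assuming a recent client-go; the names are placeholders, and against the in-memory fake below the call may error since no pod exists:

package main

import (
	"context"
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/kubernetes/fake"
)

// bindPod issues POST /api/v1/namespaces/{ns}/pods/{pod}/binding,
// assigning the pod to the named node.
func bindPod(ctx context.Context, cs kubernetes.Interface, ns, pod, node string) error {
	binding := &v1.Binding{
		ObjectMeta: metav1.ObjectMeta{Name: pod, Namespace: ns},
		Target:     v1.ObjectReference{Kind: "Node", Name: node},
	}
	return cs.CoreV1().Pods(ns).Bind(ctx, binding, metav1.CreateOptions{})
}

func main() {
	cs := fake.NewSimpleClientset()
	err := bindPod(context.Background(), cs, "default", "waiting-pod", "test-node-0")
	fmt.Println("bind result:", err)
}
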
E0814 08:40:14.814626  110468 factory.go:599] Error getting pod permit-plugindaf70ffb-42ba-4dde-b941-52915d4a901c/signalling-pod for retry: Get http://127.0.0.1:41169/api/v1/namespaces/permit-plugindaf70ffb-42ba-4dde-b941-52915d4a901c/pods/signalling-pod: dial tcp 127.0.0.1:41169: connect: connection refused; retrying...
E0814 08:40:15.215296  110468 factory.go:599] Error getting pod permit-plugindaf70ffb-42ba-4dde-b941-52915d4a901c/signalling-pod for retry: Get http://127.0.0.1:41169/api/v1/namespaces/permit-plugindaf70ffb-42ba-4dde-b941-52915d4a901c/pods/signalling-pod: dial tcp 127.0.0.1:41169: connect: connection refused; retrying...
I0814 08:40:15.401730  110468 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 08:40:15.401836  110468 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 08:40:15.402037  110468 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 08:40:15.403351  110468 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 08:40:15.404340  110468 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 08:40:15.405130  110468 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 08:40:15.406555  110468 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
E0814 08:40:16.015853  110468 factory.go:599] Error getting pod permit-plugindaf70ffb-42ba-4dde-b941-52915d4a901c/signalling-pod for retry: Get http://127.0.0.1:41169/api/v1/namespaces/permit-plugindaf70ffb-42ba-4dde-b941-52915d4a901c/pods/signalling-pod: dial tcp 127.0.0.1:41169: connect: connection refused; retrying...
I0814 08:40:16.401872  110468 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 08:40:16.401976  110468 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 08:40:16.402251  110468 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 08:40:16.403563  110468 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 08:40:16.404448  110468 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 08:40:16.405248  110468 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 08:40:16.406734  110468 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 08:40:17.402071  110468 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 08:40:17.402147  110468 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 08:40:17.402372  110468 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 08:40:17.403718  110468 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 08:40:17.404574  110468 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 08:40:17.405362  110468 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 08:40:17.406846  110468 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
E0814 08:40:17.616503  110468 factory.go:599] Error getting pod permit-plugindaf70ffb-42ba-4dde-b941-52915d4a901c/signalling-pod for retry: Get http://127.0.0.1:41169/api/v1/namespaces/permit-plugindaf70ffb-42ba-4dde-b941-52915d4a901c/pods/signalling-pod: dial tcp 127.0.0.1:41169: connect: connection refused; retrying...
I0814 08:40:18.402279  110468 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 08:40:18.402321  110468 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 08:40:18.402509  110468 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 08:40:18.403867  110468 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 08:40:18.404720  110468 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 08:40:18.405525  110468 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 08:40:18.406983  110468 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 08:40:19.402476  110468 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 08:40:19.402589  110468 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 08:40:19.402733  110468 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 08:40:19.404226  110468 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 08:40:19.404836  110468 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 08:40:19.405649  110468 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 08:40:19.407088  110468 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 08:40:20.402708  110468 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 08:40:20.402753  110468 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 08:40:20.402942  110468 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 08:40:20.404642  110468 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 08:40:20.405170  110468 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 08:40:20.406462  110468 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 08:40:20.407240  110468 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
E0814 08:40:20.817217  110468 factory.go:599] Error getting pod permit-plugindaf70ffb-42ba-4dde-b941-52915d4a901c/signalling-pod for retry: Get http://127.0.0.1:41169/api/v1/namespaces/permit-plugindaf70ffb-42ba-4dde-b941-52915d4a901c/pods/signalling-pod: dial tcp 127.0.0.1:41169: connect: connection refused; retrying...
I0814 08:40:21.402873  110468 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 08:40:21.402898  110468 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 08:40:21.403069  110468 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 08:40:21.405060  110468 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 08:40:21.405827  110468 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 08:40:21.406723  110468 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 08:40:21.407390  110468 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 08:40:22.403095  110468 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 08:40:22.403215  110468 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 08:40:22.403230  110468 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 08:40:22.405245  110468 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 08:40:22.406028  110468 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 08:40:22.406875  110468 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 08:40:22.408158  110468 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 08:40:23.304512  110468 httplog.go:90] GET /api/v1/namespaces/default: (1.53661ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48670]
I0814 08:40:23.306094  110468 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (1.149507ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48670]
I0814 08:40:23.307447  110468 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (987.43µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48670]
[... 21 "forcing resync" reflector lines (08:40:23-08:40:25) elided ...]
E0814 08:40:25.770577  110468 event_broadcaster.go:242] Unable to write event: 'Post http://127.0.0.1:41169/apis/events.k8s.io/v1beta1/namespaces/permit-plugindaf70ffb-42ba-4dde-b941-52915d4a901c/events: dial tcp 127.0.0.1:41169: connect: connection refused' (may retry after sleeping)
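
Same story as the pod GETs above: the events endpoint belongs to the already-stopped apiserver, so the broadcaster fails and, per its own message, sleeps before retrying rather than dropping the event immediately. A minimal sketch of that write-sleep-retry shape; the payload and attempt count are placeholders:

package main

import (
	"bytes"
	"fmt"
	"net/http"
	"time"
)

// Sketch of "Unable to write event ... (may retry after sleeping)": POST the
// event and, on failure, sleep before the next attempt instead of giving up.
func writeEventWithSleep(url string, body []byte, attempts int, sleep time.Duration) {
	for i := 0; i < attempts; i++ {
		resp, err := http.Post(url, "application/json", bytes.NewReader(body))
		if err == nil {
			resp.Body.Close()
			return
		}
		fmt.Printf("Unable to write event: %v (may retry after sleeping)\n", err)
		time.Sleep(sleep)
	}
}

func main() {
	// The events endpoint from the log; the apiserver behind it is gone, so
	// every POST fails with "connection refused".
	url := "http://127.0.0.1:41169/apis/events.k8s.io/v1beta1/namespaces/" +
		"permit-plugindaf70ffb-42ba-4dde-b941-52915d4a901c/events"
	writeEventWithSleep(url, []byte(`{}`), 2, time.Second)
}
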
[... 7 "forcing resync" reflector lines (08:40:26) elided ...]
E0814 08:40:27.217830  110468 factory.go:599] Error getting pod permit-plugindaf70ffb-42ba-4dde-b941-52915d4a901c/signalling-pod for retry: Get http://127.0.0.1:41169/api/v1/namespaces/permit-plugindaf70ffb-42ba-4dde-b941-52915d4a901c/pods/signalling-pod: dial tcp 127.0.0.1:41169: connect: connection refused; retrying...
[... 42 "forcing resync" reflector lines (08:40:27-08:40:32) elided ...]
I0814 08:40:33.304320  110468 httplog.go:90] GET /api/v1/namespaces/default: (1.294308ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48670]
I0814 08:40:33.307042  110468 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (2.378169ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48670]
I0814 08:40:33.309259  110468 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.774547ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48670]
[... 28 "forcing resync" reflector lines (08:40:33-08:40:36) elided ...]
E0814 08:40:37.167578  110468 event_broadcaster.go:242] Unable to write event: 'Post http://127.0.0.1:41169/apis/events.k8s.io/v1beta1/namespaces/permit-plugindaf70ffb-42ba-4dde-b941-52915d4a901c/events: dial tcp 127.0.0.1:41169: connect: connection refused' (may retry after sleeping)
[... 21 "forcing resync" reflector lines (08:40:37-08:40:39) elided ...]
E0814 08:40:40.018765  110468 factory.go:599] Error getting pod permit-plugindaf70ffb-42ba-4dde-b941-52915d4a901c/signalling-pod for retry: Get http://127.0.0.1:41169/api/v1/namespaces/permit-plugindaf70ffb-42ba-4dde-b941-52915d4a901c/pods/signalling-pod: dial tcp 127.0.0.1:41169: connect: connection refused; retrying...
[... 21 "forcing resync" reflector lines (08:40:40-08:40:42) elided ...]
I0814 08:40:43.304480  110468 httplog.go:90] GET /api/v1/namespaces/default: (1.274753ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48670]
I0814 08:40:43.306388  110468 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (1.156505ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48670]
I0814 08:40:43.307664  110468 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (953.362µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48670]
[... 14 "forcing resync" reflector lines (08:40:43-08:40:44) elided ...]
I0814 08:40:44.608488  110468 httplog.go:90] POST /api/v1/namespaces/preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/pods: (2.014758ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48670]
I0814 08:40:44.608806  110468 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/preemptor-pod
I0814 08:40:44.608829  110468 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/preemptor-pod
I0814 08:40:44.608952  110468 factory.go:550] Unable to schedule preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0814 08:40:44.609032  110468 factory.go:624] Updating pod condition for preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
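
The "no fit" message enumerates, per node, which resources fell short, and the pod is then marked PodScheduled==False with Reason=Unschedulable. A toy fit check that reproduces the shape of that message; the milli-CPU/MiB quantities are invented and this is not the scheduler's actual predicate code:

package main

import (
	"fmt"
	"strings"
)

// Toy fit check: for each node, record which resources the pod's request
// exceeds; if no node fits, build a "no fit" summary like the one above.
type node struct {
	name    string
	freeCPU int64 // milli-CPU, invented
	freeMem int64 // MiB, invented
}

type podReq struct {
	cpu, mem int64
}

func fitMessage(p podReq, nodes []node) (bool, string) {
	var reasons []string
	fits := 0
	for _, n := range nodes {
		ok := true
		if p.cpu > n.freeCPU {
			reasons = append(reasons, "1 Insufficient cpu")
			ok = false
		}
		if p.mem > n.freeMem {
			reasons = append(reasons, "1 Insufficient memory")
			ok = false
		}
		if ok {
			fits++
		}
	}
	if fits > 0 {
		return true, ""
	}
	return false, fmt.Sprintf("no fit: 0/%d nodes are available: %s.",
		len(nodes), strings.Join(reasons, ", "))
}

func main() {
	nodes := []node{{name: "test-node-0", freeCPU: 100, freeMem: 100}}
	if ok, msg := fitMessage(podReq{cpu: 500, mem: 500}, nodes); !ok {
		fmt.Println(msg) // no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.
	}
}
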
I0814 08:40:44.610493  110468 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/pods/preemptor-pod: (1.027176ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52160]
I0814 08:40:44.611101  110468 httplog.go:90] POST /apis/events.k8s.io/v1beta1/namespaces/preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/events: (1.367935ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52162]
I0814 08:40:44.611237  110468 httplog.go:90] PUT /api/v1/namespaces/preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/pods/preemptor-pod/status: (1.902637ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48670]
I0814 08:40:44.612614  110468 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/pods/preemptor-pod: (920.272µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52162]
I0814 08:40:44.612878  110468 generic_scheduler.go:1193] Node test-node-0 is a potential node for preemption.
I0814 08:40:44.615050  110468 httplog.go:90] PUT /api/v1/namespaces/preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/pods/preemptor-pod/status: (1.734793ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52162]
I0814 08:40:44.617481  110468 httplog.go:90] DELETE /api/v1/namespaces/preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/pods/waiting-pod: (1.91936ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52162]
I0814 08:40:44.619480  110468 httplog.go:90] POST /apis/events.k8s.io/v1beta1/namespaces/preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/events: (1.577849ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52162]
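
With no node fitting outright, the scheduler looks for a node where evicting lower-priority pods would make room: test-node-0 qualifies ("potential node for preemption") and waiting-pod is deleted as the victim, which is the DELETE above. A toy version of that victim selection, under invented priorities and resource numbers:

package main

import "fmt"

// Toy preemption pass: a node is a preemption candidate if evicting
// lower-priority pods frees enough room; the chosen victims are then deleted.
type victim struct {
	name     string
	priority int32
	cpu      int64
}

func pickVictims(running []victim, preemptorPriority int32, neededCPU int64) []victim {
	var victims []victim
	var freed int64
	for _, v := range running {
		if v.priority < preemptorPriority && freed < neededCPU {
			victims = append(victims, v)
			freed += v.cpu
		}
	}
	if freed < neededCPU {
		return nil // this node cannot help even after preemption
	}
	return victims
}

func main() {
	running := []victim{{name: "waiting-pod", priority: 0, cpu: 500}}
	victims := pickVictims(running, 100, 400)
	if victims != nil {
		fmt.Println("Node test-node-0 is a potential node for preemption.")
		for _, v := range victims {
			fmt.Println("DELETE pod", v.name) // corresponds to the DELETE above
		}
	}
}
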
[... 24 lines elided: 17 GETs of .../pods/preemptor-pod at ~100 ms intervals (08:40:44.7-08:40:46.3) while the test waits, plus 7 reflector resync lines ...]
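
The once-per-100 ms GETs are the test polling the preemptor pod while it waits for preemption to take effect, in the spirit of the wait.Poll helpers in k8s.io/apimachinery. A generic poll loop of that shape, stdlib only; the interval, timeout, and condition are illustrative:

package main

import (
	"errors"
	"fmt"
	"time"
)

// Generic poll helper: run cond every interval until it reports done, it
// errors, or the timeout expires.
func poll(interval, timeout time.Duration, cond func() (bool, error)) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		done, err := cond()
		if err != nil {
			return err
		}
		if done {
			return nil
		}
		time.Sleep(interval)
	}
	return errors.New("timed out waiting for the condition")
}

func main() {
	start := time.Now()
	err := poll(100*time.Millisecond, time.Second, func() (bool, error) {
		// Stand-in for "GET the pod and inspect it"; succeeds after 300 ms.
		return time.Since(start) > 300*time.Millisecond, nil
	})
	fmt.Println("poll result:", err)
}
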
I0814 08:40:46.401644  110468 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/preemptor-pod
I0814 08:40:46.401681  110468 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/preemptor-pod
I0814 08:40:46.401856  110468 factory.go:550] Unable to schedule preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0814 08:40:46.401916  110468 factory.go:624] Updating pod condition for preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0814 08:40:46.404743  110468 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/pods/preemptor-pod: (1.553135ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52162]
I0814 08:40:46.404743  110468 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/pods/preemptor-pod: (1.581716ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52160]
I0814 08:40:46.405825  110468 httplog.go:90] POST /apis/events.k8s.io/v1beta1/namespaces/preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/events: (2.045693ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52578]
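
From here the pattern simply repeats: the unschedulable pod goes back on the scheduling queue and is re-attempted roughly once per second (08:40:46.4, :47.4, :48.4, ...) until the deleted victim's resources are actually released. A stand-in for that retry cadence; the attempt function is a placeholder, not the scheduler's API:

package main

import (
	"fmt"
	"time"
)

// Stand-in for the retry cadence visible above: re-attempt scheduling about
// once per second until the pod finally fits.
func main() {
	attempt := func(i int) bool {
		fmt.Printf("Attempting to schedule pod: preemptor-pod (attempt %d)\n", i)
		return i >= 3 // pretend resources free up on the third try
	}

	for i := 1; ; i++ {
		if attempt(i) {
			fmt.Println("scheduled")
			return
		}
		fmt.Println("Unable to schedule; waiting")
		time.Sleep(time.Second) // matches the ~1 s gap between attempts in the log
	}
}
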
[... 61 lines elided: two more scheduling attempts for preemptor-pod (08:40:47.4 and 08:40:48.4), each ending in the same "no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory." result and Unschedulable condition update, interleaved with ~100 ms GET polls of the pod, an events PATCH, and periodic reflector resyncs ...]
E0814 08:40:49.017829  110468 event_broadcaster.go:242] Unable to write event: 'Post http://127.0.0.1:41169/apis/events.k8s.io/v1beta1/namespaces/permit-plugindaf70ffb-42ba-4dde-b941-52915d4a901c/events: dial tcp 127.0.0.1:41169: connect: connection refused' (may retry after sleeping)
[... 94 lines elided: four more once-per-second scheduling attempts (08:40:49.4 through 08:40:52.4) with the identical "no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory." result, plus the accompanying GET polls, condition updates, and reflector resyncs ...]
I0814 08:40:53.304804  110468 httplog.go:90] GET /api/v1/namespaces/default: (1.378059ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52160]
I0814 08:40:53.306711  110468 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (1.39977ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52160]
I0814 08:40:53.308248  110468 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (877.346µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52160]
I0814 08:40:53.310507  110468 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/pods/preemptor-pod: (1.055421ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52160]
I0814 08:40:53.408438  110468 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 08:40:53.408632  110468 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 08:40:53.410378  110468 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 08:40:53.410529  110468 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/preemptor-pod
I0814 08:40:53.410595  110468 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/preemptor-pod
I0814 08:40:53.410800  110468 factory.go:550] Unable to schedule preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0814 08:40:53.410930  110468 factory.go:624] Updating pod condition for preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0814 08:40:53.411728  110468 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 08:40:53.412117  110468 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 08:40:53.412991  110468 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/pods/preemptor-pod: (3.489292ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52160]
I0814 08:40:53.413551  110468 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/pods/preemptor-pod: (1.059432ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52162]
I0814 08:40:53.413845  110468 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 08:40:53.413869  110468 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/pods/preemptor-pod: (881.255µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53130]
I0814 08:40:53.414359  110468 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
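
[editor note] The "reflector.go: forcing resync" lines fire in clusters about once per second, which is the signature of a shared informer factory constructed with a short resync period. A sketch, with the 1s period inferred from the timestamps above rather than taken from the test code:

    // A shared informer factory with a 1s defaultResync reproduces the
    // once-per-second resync bursts logged above (period is an assumption).
    informerFactory := informers.NewSharedInformerFactory(cs, 1*time.Second)
    podInformer := informerFactory.Core().V1().Pods()
    stopCh := make(chan struct{})
    informerFactory.Start(stopCh)
    informerFactory.WaitForCacheSync(stopCh)
    _ = podInformer // handed to the scheduler configuration in the real test
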
I0814 08:40:53.511441  110468 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/pods/preemptor-pod: (1.958618ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52162]
I0814 08:40:53.611351  110468 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/pods/preemptor-pod: (1.894405ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52162]
I0814 08:40:53.711333  110468 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/pods/preemptor-pod: (1.853522ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52162]
I0814 08:40:53.810978  110468 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/pods/preemptor-pod: (1.494002ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52162]
I0814 08:40:53.911665  110468 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/pods/preemptor-pod: (2.138612ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52162]
I0814 08:40:54.011110  110468 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/pods/preemptor-pod: (1.575694ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52162]
I0814 08:40:54.110856  110468 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/pods/preemptor-pod: (1.420743ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52162]
I0814 08:40:54.213385  110468 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/pods/preemptor-pod: (3.886712ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52162]
I0814 08:40:54.311375  110468 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/pods/preemptor-pod: (1.87101ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52162]
I0814 08:40:54.408609  110468 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 08:40:54.408815  110468 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 08:40:54.410546  110468 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 08:40:54.410744  110468 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/preemptor-pod
I0814 08:40:54.410761  110468 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/preemptor-pod
I0814 08:40:54.410882  110468 factory.go:550] Unable to schedule preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0814 08:40:54.410938  110468 factory.go:624] Updating pod condition for preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0814 08:40:54.411984  110468 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 08:40:54.412176  110468 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 08:40:54.412311  110468 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/pods/preemptor-pod: (2.828835ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52162]
I0814 08:40:54.412442  110468 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/pods/preemptor-pod: (1.233315ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52160]
I0814 08:40:54.412687  110468 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/pods/preemptor-pod: (1.176927ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53336]
I0814 08:40:54.413979  110468 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 08:40:54.414512  110468 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 08:40:54.511156  110468 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/pods/preemptor-pod: (1.448199ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52160]
I0814 08:40:54.610908  110468 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/pods/preemptor-pod: (1.390378ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52160]
I0814 08:40:54.710950  110468 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/pods/preemptor-pod: (1.428909ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52160]
I0814 08:40:54.811241  110468 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/pods/preemptor-pod: (1.727943ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52160]
I0814 08:40:54.911485  110468 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/pods/preemptor-pod: (1.957694ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52160]
I0814 08:40:55.011265  110468 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/pods/preemptor-pod: (1.768197ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52160]
I0814 08:40:55.111620  110468 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/pods/preemptor-pod: (2.133215ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52160]
I0814 08:40:55.211644  110468 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/pods/preemptor-pod: (2.134635ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52160]
I0814 08:40:55.311279  110468 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/pods/preemptor-pod: (1.707859ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52160]
I0814 08:40:55.408786  110468 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 08:40:55.408984  110468 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 08:40:55.410802  110468 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 08:40:55.410912  110468 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/preemptor-pod
I0814 08:40:55.410924  110468 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/preemptor-pod
I0814 08:40:55.411092  110468 factory.go:550] Unable to schedule preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0814 08:40:55.411147  110468 factory.go:624] Updating pod condition for preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0814 08:40:55.412215  110468 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 08:40:55.412799  110468 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 08:40:55.413479  110468 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/pods/preemptor-pod: (3.908146ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52160]
I0814 08:40:55.414078  110468 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 08:40:55.414332  110468 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/pods/preemptor-pod: (1.967624ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52162]
I0814 08:40:55.414420  110468 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/pods/preemptor-pod: (2.134023ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:53476]
I0814 08:40:55.414679  110468 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 08:40:55.512104  110468 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/pods/preemptor-pod: (2.563691ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52162]
I0814 08:40:55.611283  110468 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/pods/preemptor-pod: (1.781254ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52162]
I0814 08:40:55.710965  110468 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/pods/preemptor-pod: (1.440225ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52162]
I0814 08:40:55.810880  110468 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/pods/preemptor-pod: (1.411849ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52162]
I0814 08:40:55.911525  110468 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/pods/preemptor-pod: (1.927418ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52162]
I0814 08:40:56.010901  110468 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/pods/preemptor-pod: (1.339682ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52162]
I0814 08:40:56.110995  110468 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/pods/preemptor-pod: (1.476617ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52162]
I0814 08:40:56.210959  110468 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/pods/preemptor-pod: (1.478909ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52162]
I0814 08:40:56.310797  110468 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/pods/preemptor-pod: (1.368064ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52162]
I0814 08:40:56.408909  110468 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 08:40:56.409128  110468 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 08:40:56.410655  110468 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/pods/preemptor-pod: (1.282588ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52162]
I0814 08:40:56.411098  110468 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 08:40:56.411202  110468 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/preemptor-pod
I0814 08:40:56.411216  110468 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/preemptor-pod
I0814 08:40:56.411333  110468 factory.go:550] Unable to schedule preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0814 08:40:56.411379  110468 factory.go:624] Updating pod condition for preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0814 08:40:56.412372  110468 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 08:40:56.412931  110468 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 08:40:56.413196  110468 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/pods/preemptor-pod: (1.49123ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52160]
I0814 08:40:56.413524  110468 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/pods/preemptor-pod: (1.7422ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52162]
I0814 08:40:56.414282  110468 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 08:40:56.414816  110468 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 08:40:56.510868  110468 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/pods/preemptor-pod: (1.327833ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52162]
I0814 08:40:56.610836  110468 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/pods/preemptor-pod: (1.32584ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52162]
I0814 08:40:56.711001  110468 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/pods/preemptor-pod: (1.390517ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52162]
I0814 08:40:56.811226  110468 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/pods/preemptor-pod: (1.699237ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52162]
I0814 08:40:56.911449  110468 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/pods/preemptor-pod: (1.974775ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52162]
I0814 08:40:57.010898  110468 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/pods/preemptor-pod: (1.422662ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52162]
I0814 08:40:57.111005  110468 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/pods/preemptor-pod: (1.469921ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52162]
I0814 08:40:57.211160  110468 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/pods/preemptor-pod: (1.610055ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52162]
I0814 08:40:57.311173  110468 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/pods/preemptor-pod: (1.690374ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52162]
I0814 08:40:57.409148  110468 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 08:40:57.409205  110468 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 08:40:57.410962  110468 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/pods/preemptor-pod: (1.458494ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52162]
I0814 08:40:57.411492  110468 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 08:40:57.411670  110468 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/preemptor-pod
I0814 08:40:57.411686  110468 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/preemptor-pod
I0814 08:40:57.411811  110468 factory.go:550] Unable to schedule preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0814 08:40:57.411879  110468 factory.go:624] Updating pod condition for preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0814 08:40:57.412526  110468 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 08:40:57.413091  110468 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 08:40:57.413538  110468 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/pods/preemptor-pod: (1.253773ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52160]
I0814 08:40:57.414077  110468 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/pods/preemptor-pod: (1.965163ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52162]
I0814 08:40:57.414470  110468 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 08:40:57.414963  110468 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 08:40:57.510888  110468 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/pods/preemptor-pod: (1.373682ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52162]
I0814 08:40:57.611997  110468 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/pods/preemptor-pod: (2.464111ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52162]
I0814 08:40:57.710862  110468 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/pods/preemptor-pod: (1.345926ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52162]
I0814 08:40:57.811485  110468 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/pods/preemptor-pod: (1.957696ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52162]
I0814 08:40:57.911188  110468 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/pods/preemptor-pod: (1.596102ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52162]
I0814 08:40:58.011191  110468 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/pods/preemptor-pod: (1.713522ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52162]
I0814 08:40:58.111236  110468 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/pods/preemptor-pod: (1.742518ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52162]
I0814 08:40:58.211047  110468 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/pods/preemptor-pod: (1.538776ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52162]
I0814 08:40:58.310994  110468 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/pods/preemptor-pod: (1.593082ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52162]
I0814 08:40:58.409477  110468 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 08:40:58.409511  110468 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 08:40:58.410758  110468 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/pods/preemptor-pod: (1.248711ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52162]
I0814 08:40:58.411664  110468 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 08:40:58.411807  110468 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/preemptor-pod
I0814 08:40:58.411833  110468 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/preemptor-pod
I0814 08:40:58.411973  110468 factory.go:550] Unable to schedule preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0814 08:40:58.412086  110468 factory.go:624] Updating pod condition for preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0814 08:40:58.412969  110468 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 08:40:58.413247  110468 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 08:40:58.413917  110468 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/pods/preemptor-pod: (1.34462ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52162]
I0814 08:40:58.414626  110468 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/pods/preemptor-pod: (2.11397ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52160]
I0814 08:40:58.414767  110468 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 08:40:58.415102  110468 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 08:40:58.512672  110468 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/pods/preemptor-pod: (3.292628ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52160]
I0814 08:40:58.611115  110468 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/pods/preemptor-pod: (1.595536ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52160]
I0814 08:40:58.712382  110468 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/pods/preemptor-pod: (2.812346ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52160]
I0814 08:40:58.811111  110468 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/pods/preemptor-pod: (1.276483ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52160]
I0814 08:40:58.912551  110468 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/pods/preemptor-pod: (2.885405ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52160]
I0814 08:40:59.011196  110468 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/pods/preemptor-pod: (1.718881ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52160]
I0814 08:40:59.111650  110468 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/pods/preemptor-pod: (2.105744ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52160]
I0814 08:40:59.211253  110468 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/pods/preemptor-pod: (1.660044ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52160]
I0814 08:40:59.310878  110468 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/pods/preemptor-pod: (1.381244ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52160]
I0814 08:40:59.409635  110468 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 08:40:59.409636  110468 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 08:40:59.411059  110468 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/pods/preemptor-pod: (1.572197ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52160]
I0814 08:40:59.411852  110468 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 08:40:59.411998  110468 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/preemptor-pod
I0814 08:40:59.412032  110468 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/preemptor-pod
I0814 08:40:59.412199  110468 factory.go:550] Unable to schedule preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0814 08:40:59.412285  110468 factory.go:624] Updating pod condition for preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0814 08:40:59.413549  110468 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/pods/preemptor-pod: (1.011862ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52162]
I0814 08:40:59.413795  110468 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 08:40:59.413867  110468 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 08:40:59.413958  110468 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/pods/preemptor-pod: (1.417183ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52160]
I0814 08:40:59.414882  110468 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 08:40:59.415262  110468 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 08:40:59.510999  110468 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/pods/preemptor-pod: (1.499231ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52160]
I0814 08:40:59.610910  110468 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/pods/preemptor-pod: (1.496921ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52160]
I0814 08:40:59.710947  110468 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/pods/preemptor-pod: (1.502598ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52160]
I0814 08:40:59.810882  110468 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/pods/preemptor-pod: (1.378417ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52160]
I0814 08:40:59.911751  110468 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/pods/preemptor-pod: (2.262822ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52160]
I0814 08:41:00.010749  110468 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/pods/preemptor-pod: (1.286357ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52160]
I0814 08:41:00.111990  110468 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/pods/preemptor-pod: (2.493552ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52160]
I0814 08:41:00.210892  110468 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/pods/preemptor-pod: (1.349264ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52160]
I0814 08:41:00.311387  110468 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/pods/preemptor-pod: (1.86368ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52160]
I0814 08:41:00.410324  110468 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 08:41:00.410365  110468 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 08:41:00.411614  110468 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/pods/preemptor-pod: (2.085165ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52160]
I0814 08:41:00.412089  110468 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 08:41:00.412264  110468 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/preemptor-pod
I0814 08:41:00.412279  110468 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/preemptor-pod
I0814 08:41:00.412395  110468 factory.go:550] Unable to schedule preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0814 08:41:00.412433  110468 factory.go:624] Updating pod condition for preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0814 08:41:00.413953  110468 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 08:41:00.413979  110468 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 08:41:00.414939  110468 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/pods/preemptor-pod: (1.55732ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52160]
I0814 08:41:00.415282  110468 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 08:41:00.415402  110468 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 08:41:00.416767  110468 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/pods/preemptor-pod: (3.817ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52162]
E0814 08:41:00.455146  110468 event_broadcaster.go:242] Unable to write event: 'Post http://127.0.0.1:41169/apis/events.k8s.io/v1beta1/namespaces/permit-plugindaf70ffb-42ba-4dde-b941-52915d4a901c/events: dial tcp 127.0.0.1:41169: connect: connection refused' (may retry after sleeping)
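
[editor note] This error comes from a leftover event broadcaster of the earlier permit-plugin test: its apiserver on 127.0.0.1:41169 has already been torn down, so event POSTs fail and are retried. "connect: connection refused" is simply what a TCP dial to a closed local port returns, as the sketch below reproduces (port number reused from the log purely for illustration):

    // Dialing a local port with no listener yields the same error string as
    // the event_broadcaster line above.
    conn, err := net.DialTimeout("tcp", "127.0.0.1:41169", time.Second)
    if err != nil {
        fmt.Println(err) // dial tcp 127.0.0.1:41169: connect: connection refused
    }
    _ = conn
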
I0814 08:41:00.510866  110468 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/pods/preemptor-pod: (1.40597ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52162]
I0814 08:41:00.610738  110468 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/pods/preemptor-pod: (1.281863ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52162]
I0814 08:41:00.710971  110468 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/pods/preemptor-pod: (1.471032ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52162]
I0814 08:41:00.812418  110468 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/pods/preemptor-pod: (2.883938ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52162]
I0814 08:41:00.912159  110468 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/pods/preemptor-pod: (2.040123ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52162]
I0814 08:41:01.011174  110468 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/pods/preemptor-pod: (1.631403ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52162]
I0814 08:41:01.111309  110468 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/pods/preemptor-pod: (1.758434ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52162]
I0814 08:41:01.211278  110468 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/pods/preemptor-pod: (1.659548ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52162]
I0814 08:41:01.310808  110468 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/pods/preemptor-pod: (1.377733ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52162]
I0814 08:41:01.410425  110468 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 08:41:01.410454  110468 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 08:41:01.411475  110468 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/pods/preemptor-pod: (1.562353ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52162]
I0814 08:41:01.412327  110468 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 08:41:01.412501  110468 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/preemptor-pod
I0814 08:41:01.412539  110468 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/preemptor-pod
I0814 08:41:01.412662  110468 factory.go:550] Unable to schedule preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0814 08:41:01.412724  110468 factory.go:624] Updating pod condition for preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0814 08:41:01.414121  110468 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 08:41:01.414166  110468 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 08:41:01.414417  110468 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/pods/preemptor-pod: (1.490486ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52162]
I0814 08:41:01.415267  110468 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/pods/preemptor-pod: (2.171723ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52160]
I0814 08:41:01.415418  110468 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 08:41:01.415539  110468 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 08:41:01.511141  110468 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/pods/preemptor-pod: (1.683805ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52160]
I0814 08:41:01.611869  110468 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/pods/preemptor-pod: (1.594248ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52160]
I0814 08:41:01.717591  110468 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/pods/preemptor-pod: (5.441706ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52160]
I0814 08:41:01.810813  110468 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/pods/preemptor-pod: (1.334832ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52160]
I0814 08:41:01.911346  110468 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/pods/preemptor-pod: (1.703098ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52160]
I0814 08:41:02.011179  110468 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/pods/preemptor-pod: (1.641126ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52160]
I0814 08:41:02.111098  110468 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/pods/preemptor-pod: (1.536161ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52160]
I0814 08:41:02.211184  110468 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/pods/preemptor-pod: (1.699467ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52160]
I0814 08:41:02.310925  110468 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/pods/preemptor-pod: (1.460081ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52160]
I0814 08:41:02.410584  110468 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 08:41:02.411139  110468 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/pods/preemptor-pod: (1.676669ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52160]
I0814 08:41:02.411291  110468 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 08:41:02.412504  110468 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 08:41:02.412649  110468 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/preemptor-pod
I0814 08:41:02.412673  110468 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/preemptor-pod
I0814 08:41:02.412811  110468 factory.go:550] Unable to schedule preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0814 08:41:02.412868  110468 factory.go:624] Updating pod condition for preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0814 08:41:02.414430  110468 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/pods/preemptor-pod: (1.299102ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52160]
I0814 08:41:02.414513  110468 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/pods/preemptor-pod: (1.305037ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52162]
I0814 08:41:02.414539  110468 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 08:41:02.414548  110468 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 08:41:02.415532  110468 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 08:41:02.415682  110468 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 08:41:02.510979  110468 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/pods/preemptor-pod: (1.508146ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52162]
I0814 08:41:02.613038  110468 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/pods/preemptor-pod: (3.336419ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52162]
I0814 08:41:02.711542  110468 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/pods/preemptor-pod: (1.994412ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52162]
I0814 08:41:02.811004  110468 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/pods/preemptor-pod: (1.379474ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52162]
I0814 08:41:02.912492  110468 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/pods/preemptor-pod: (1.933816ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52162]
I0814 08:41:03.011219  110468 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/pods/preemptor-pod: (1.71698ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52162]
I0814 08:41:03.110949  110468 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/pods/preemptor-pod: (1.500767ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52162]
I0814 08:41:03.211290  110468 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/pods/preemptor-pod: (1.779217ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52162]
I0814 08:41:03.304779  110468 httplog.go:90] GET /api/v1/namespaces/default: (1.241363ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52162]
I0814 08:41:03.306297  110468 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (1.149334ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52162]
I0814 08:41:03.307641  110468 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.012227ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52162]
I0814 08:41:03.310269  110468 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/pods/preemptor-pod: (980.176µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52162]
I0814 08:41:03.410712  110468 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 08:41:03.411167  110468 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/pods/preemptor-pod: (1.621409ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52162]
I0814 08:41:03.411564  110468 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 08:41:03.412654  110468 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 08:41:03.412763  110468 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/preemptor-pod
I0814 08:41:03.412773  110468 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/preemptor-pod
I0814 08:41:03.412874  110468 factory.go:550] Unable to schedule preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0814 08:41:03.412904  110468 factory.go:624] Updating pod condition for preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0814 08:41:03.414675  110468 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 08:41:03.414697  110468 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 08:41:03.415190  110468 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/pods/preemptor-pod: (1.209492ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52160]
I0814 08:41:03.415190  110468 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/pods/preemptor-pod: (1.067536ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52162]
I0814 08:41:03.415685  110468 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 08:41:03.415843  110468 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 08:41:03.511093  110468 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/pods/preemptor-pod: (1.405864ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52160]
I0814 08:41:03.611746  110468 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/pods/preemptor-pod: (2.278139ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52160]
I0814 08:41:03.711145  110468 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/pods/preemptor-pod: (1.621333ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52160]
I0814 08:41:03.811193  110468 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/pods/preemptor-pod: (1.71895ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52160]
I0814 08:41:03.911001  110468 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/pods/preemptor-pod: (1.432897ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52160]
I0814 08:41:04.010802  110468 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/pods/preemptor-pod: (1.355014ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52160]
I0814 08:41:04.110812  110468 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/pods/preemptor-pod: (1.302531ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52160]
I0814 08:41:04.210781  110468 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/pods/preemptor-pod: (1.367669ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52160]
I0814 08:41:04.310861  110468 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/pods/preemptor-pod: (1.335814ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52160]
I0814 08:41:04.411084  110468 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/pods/preemptor-pod: (1.511466ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52160]
I0814 08:41:04.411269  110468 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 08:41:04.411730  110468 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 08:41:04.412815  110468 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 08:41:04.412935  110468 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/preemptor-pod
I0814 08:41:04.412946  110468 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/preemptor-pod
I0814 08:41:04.413137  110468 factory.go:550] Unable to schedule preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0814 08:41:04.413180  110468 factory.go:624] Updating pod condition for preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0814 08:41:04.414616  110468 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/pods/preemptor-pod: (1.250981ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52160]
I0814 08:41:04.414626  110468 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/pods/preemptor-pod: (1.247903ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52162]
I0814 08:41:04.414741  110468 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 08:41:04.414857  110468 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 08:41:04.415847  110468 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 08:41:04.416030  110468 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 08:41:04.510998  110468 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/pods/preemptor-pod: (1.525328ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52160]
I0814 08:41:04.611507  110468 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/pods/preemptor-pod: (1.486135ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52160]
I0814 08:41:04.711685  110468 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/pods/preemptor-pod: (2.059095ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52160]
I0814 08:41:04.810927  110468 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/pods/preemptor-pod: (1.486721ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52160]
I0814 08:41:04.911310  110468 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/pods/preemptor-pod: (1.856062ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52160]
I0814 08:41:05.010982  110468 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/pods/preemptor-pod: (1.504355ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52160]
I0814 08:41:05.111406  110468 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/pods/preemptor-pod: (1.936175ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52160]
I0814 08:41:05.210887  110468 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/pods/preemptor-pod: (1.442897ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52160]
I0814 08:41:05.310768  110468 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/pods/preemptor-pod: (1.332589ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52160]
I0814 08:41:05.410906  110468 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/pods/preemptor-pod: (1.436934ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52160]
I0814 08:41:05.411387  110468 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 08:41:05.411912  110468 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 08:41:05.413009  110468 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 08:41:05.413124  110468 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/preemptor-pod
I0814 08:41:05.413140  110468 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/preemptor-pod
I0814 08:41:05.413278  110468 factory.go:550] Unable to schedule preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0814 08:41:05.413320  110468 factory.go:624] Updating pod condition for preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0814 08:41:05.414658  110468 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/pods/preemptor-pod: (1.024597ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52162]
I0814 08:41:05.414659  110468 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/pods/preemptor-pod: (1.145591ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52160]
I0814 08:41:05.414879  110468 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 08:41:05.414981  110468 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 08:41:05.415988  110468 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 08:41:05.416189  110468 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 08:41:05.510755  110468 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/pods/preemptor-pod: (1.311977ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52160]
I0814 08:41:05.611131  110468 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/pods/preemptor-pod: (1.63682ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52160]
E0814 08:41:05.619428  110468 factory.go:599] Error getting pod permit-plugindaf70ffb-42ba-4dde-b941-52915d4a901c/signalling-pod for retry: Get http://127.0.0.1:41169/api/v1/namespaces/permit-plugindaf70ffb-42ba-4dde-b941-52915d4a901c/pods/signalling-pod: dial tcp 127.0.0.1:41169: connect: connection refused; retrying...
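
[editor note] Same root cause as the event-broadcaster error above: the scheduler's error handler is retrying a GET for the earlier test's signalling-pod against an apiserver that no longer exists, hence "retrying...". A sketch of a bounded retry against a dead endpoint using apimachinery's backoff helper (the Backoff parameters are assumptions; the real error handler requeues the pod rather than giving up):

    backoff := wait.Backoff{Duration: 100 * time.Millisecond, Factor: 2.0, Steps: 5}
    err := wait.ExponentialBackoff(backoff, func() (bool, error) {
        _, err := cs.CoreV1().Pods(ns).Get("signalling-pod", metav1.GetOptions{})
        if err != nil {
            return false, nil // transient: retry until Steps are exhausted
        }
        return true, nil
    })
    // err is wait.ErrWaitTimeout once the attempts run out.
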
I0814 08:41:05.711666  110468 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/pods/preemptor-pod: (2.108517ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52160]
I0814 08:41:05.810997  110468 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/pods/preemptor-pod: (1.461228ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52160]
I0814 08:41:05.911425  110468 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/pods/preemptor-pod: (1.913081ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52160]
I0814 08:41:06.010927  110468 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/pods/preemptor-pod: (1.437371ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52160]
I0814 08:41:06.110848  110468 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/pods/preemptor-pod: (1.356164ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52160]
I0814 08:41:06.210881  110468 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/pods/preemptor-pod: (1.430317ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52160]
I0814 08:41:06.311256  110468 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/pods/preemptor-pod: (1.735082ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52160]
I0814 08:41:06.410732  110468 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/pods/preemptor-pod: (1.247556ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52160]
I0814 08:41:06.411508  110468 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 08:41:06.412088  110468 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 08:41:06.413117  110468 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 08:41:06.413224  110468 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/preemptor-pod
I0814 08:41:06.413233  110468 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/preemptor-pod
I0814 08:41:06.413337  110468 factory.go:550] Unable to schedule preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0814 08:41:06.413367  110468 factory.go:624] Updating pod condition for preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0814 08:41:06.415001  110468 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 08:41:06.415187  110468 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 08:41:06.415213  110468 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/pods/preemptor-pod: (1.487597ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52162]
I0814 08:41:06.415217  110468 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/pods/preemptor-pod: (1.638347ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52160]
I0814 08:41:06.416228  110468 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 08:41:06.416337  110468 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 08:41:06.510961  110468 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/pods/preemptor-pod: (1.489756ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52160]
I0814 08:41:06.610872  110468 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/pods/preemptor-pod: (1.479641ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52160]
I0814 08:41:06.711111  110468 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/pods/preemptor-pod: (1.58789ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52160]
I0814 08:41:06.811491  110468 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/pods/preemptor-pod: (1.944769ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52160]
I0814 08:41:06.919419  110468 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/pods/preemptor-pod: (6.639178ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52160]
I0814 08:41:07.012212  110468 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/pods/preemptor-pod: (2.701857ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52160]
I0814 08:41:07.114462  110468 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/pods/preemptor-pod: (2.397197ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52160]
I0814 08:41:07.211590  110468 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/pods/preemptor-pod: (1.664763ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52160]
I0814 08:41:07.311104  110468 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/pods/preemptor-pod: (1.575232ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52160]
I0814 08:41:07.410898  110468 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/pods/preemptor-pod: (1.400283ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52160]
I0814 08:41:07.412037  110468 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 08:41:07.412253  110468 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 08:41:07.413570  110468 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 08:41:07.413692  110468 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/preemptor-pod
I0814 08:41:07.413712  110468 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/preemptor-pod
I0814 08:41:07.413838  110468 factory.go:550] Unable to schedule preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0814 08:41:07.413878  110468 factory.go:624] Updating pod condition for preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0814 08:41:07.415580  110468 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 08:41:07.415658  110468 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 08:41:07.416133  110468 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/pods/preemptor-pod: (1.603729ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52162]
I0814 08:41:07.416448  110468 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/pods/preemptor-pod: (1.921646ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52160]
I0814 08:41:07.416735  110468 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 08:41:07.417222  110468 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 08:41:07.511487  110468 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/pods/preemptor-pod: (1.454577ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52162]
I0814 08:41:07.610882  110468 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/pods/preemptor-pod: (1.416588ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52162]
I0814 08:41:07.710821  110468 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/pods/preemptor-pod: (1.331457ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52162]
I0814 08:41:07.810884  110468 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/pods/preemptor-pod: (1.412236ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52162]
I0814 08:41:07.911051  110468 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/pods/preemptor-pod: (1.496442ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52162]
I0814 08:41:08.011219  110468 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/pods/preemptor-pod: (1.703148ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52162]
I0814 08:41:08.111190  110468 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/pods/preemptor-pod: (1.733098ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52162]
I0814 08:41:08.211753  110468 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/pods/preemptor-pod: (1.947467ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52162]
I0814 08:41:08.311349  110468 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/pods/preemptor-pod: (1.888767ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52162]
I0814 08:41:08.410791  110468 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/pods/preemptor-pod: (1.335759ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52162]
I0814 08:41:08.412242  110468 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 08:41:08.412368  110468 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 08:41:08.413724  110468 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 08:41:08.413846  110468 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/preemptor-pod
I0814 08:41:08.413858  110468 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/preemptor-pod
I0814 08:41:08.413975  110468 factory.go:550] Unable to schedule preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0814 08:41:08.414030  110468 factory.go:624] Updating pod condition for preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0814 08:41:08.416092  110468 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/pods/preemptor-pod: (1.173473ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52162]
I0814 08:41:08.416401  110468 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/pods/preemptor-pod: (1.977275ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52160]
I0814 08:41:08.416852  110468 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 08:41:08.416866  110468 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 08:41:08.416916  110468 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 08:41:08.417370  110468 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 08:41:08.510985  110468 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/pods/preemptor-pod: (1.512743ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52162]
I0814 08:41:08.611068  110468 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/pods/preemptor-pod: (1.677054ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52162]
I0814 08:41:08.711288  110468 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/pods/preemptor-pod: (1.70618ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52162]
I0814 08:41:08.811235  110468 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/pods/preemptor-pod: (1.620303ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52162]
I0814 08:41:08.911224  110468 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/pods/preemptor-pod: (1.78365ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52162]
I0814 08:41:09.011550  110468 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/pods/preemptor-pod: (1.893013ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52162]
I0814 08:41:09.110936  110468 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/pods/preemptor-pod: (1.47231ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52162]
I0814 08:41:09.213263  110468 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/pods/preemptor-pod: (3.676917ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52162]
I0814 08:41:09.310868  110468 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/pods/preemptor-pod: (1.389966ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52162]
I0814 08:41:09.410861  110468 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/pods/preemptor-pod: (1.435652ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52162]
I0814 08:41:09.412359  110468 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 08:41:09.412497  110468 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 08:41:09.414176  110468 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 08:41:09.414284  110468 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/preemptor-pod
I0814 08:41:09.414293  110468 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/preemptor-pod
I0814 08:41:09.414384  110468 factory.go:550] Unable to schedule preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0814 08:41:09.414413  110468 factory.go:624] Updating pod condition for preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0814 08:41:09.416156  110468 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/pods/preemptor-pod: (1.160024ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52162]
I0814 08:41:09.416162  110468 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/pods/preemptor-pod: (1.499429ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52160]
I0814 08:41:09.416949  110468 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 08:41:09.416959  110468 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 08:41:09.417084  110468 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 08:41:09.417459  110468 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 08:41:09.510838  110468 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/pods/preemptor-pod: (1.403757ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52162]
I0814 08:41:09.610892  110468 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/pods/preemptor-pod: (1.410723ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52162]
I0814 08:41:09.710999  110468 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/pods/preemptor-pod: (1.535384ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52162]
I0814 08:41:09.811750  110468 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/pods/preemptor-pod: (1.523883ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52162]
I0814 08:41:09.911686  110468 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/pods/preemptor-pod: (2.199608ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52162]
I0814 08:41:10.010855  110468 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/pods/preemptor-pod: (1.339705ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52162]
I0814 08:41:10.011482  110468 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.010464ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52160]
I0814 08:41:10.012716  110468 httplog.go:90] GET /api/v1/namespaces/kube-public: (1.003478ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52160]
I0814 08:41:10.013867  110468 httplog.go:90] GET /api/v1/namespaces/kube-node-lease: (880.725µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52160]
I0814 08:41:10.111073  110468 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/pods/preemptor-pod: (1.597576ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52160]
I0814 08:41:10.211188  110468 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/pods/preemptor-pod: (1.711522ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52160]
I0814 08:41:10.311473  110468 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/pods/preemptor-pod: (1.773191ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52160]
I0814 08:41:10.410930  110468 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/pods/preemptor-pod: (1.412435ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52160]
I0814 08:41:10.412518  110468 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 08:41:10.412640  110468 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 08:41:10.414351  110468 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 08:41:10.414498  110468 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/preemptor-pod
I0814 08:41:10.414540  110468 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/preemptor-pod
I0814 08:41:10.414694  110468 factory.go:550] Unable to schedule preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0814 08:41:10.414751  110468 factory.go:624] Updating pod condition for preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0814 08:41:10.416547  110468 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/pods/preemptor-pod: (1.245603ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52162]
I0814 08:41:10.416573  110468 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/pods/preemptor-pod: (1.122489ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52160]
I0814 08:41:10.417139  110468 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 08:41:10.417217  110468 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/preemptor-pod
I0814 08:41:10.417228  110468 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/preemptor-pod
I0814 08:41:10.417346  110468 factory.go:550] Unable to schedule preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0814 08:41:10.417387  110468 factory.go:624] Updating pod condition for preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0814 08:41:10.417444  110468 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 08:41:10.418184  110468 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 08:41:10.418209  110468 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 08:41:10.419613  110468 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/pods/preemptor-pod: (1.209195ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52162]
I0814 08:41:10.420132  110468 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/pods/preemptor-pod: (1.38661ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52160]
I0814 08:41:10.511185  110468 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/pods/preemptor-pod: (1.685683ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52160]
I0814 08:41:10.611082  110468 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/pods/preemptor-pod: (1.590528ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52160]
I0814 08:41:10.711152  110468 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/pods/preemptor-pod: (1.677942ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52160]
I0814 08:41:10.811237  110468 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/pods/preemptor-pod: (1.486017ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52160]
I0814 08:41:10.911316  110468 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/pods/preemptor-pod: (1.837335ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52160]
I0814 08:41:11.010899  110468 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/pods/preemptor-pod: (1.417001ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52160]
I0814 08:41:11.111426  110468 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/pods/preemptor-pod: (1.894768ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52160]
I0814 08:41:11.217167  110468 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/pods/preemptor-pod: (4.320986ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52160]
I0814 08:41:11.311651  110468 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/pods/preemptor-pod: (1.972177ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52160]
I0814 08:41:11.411786  110468 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/pods/preemptor-pod: (2.236246ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52160]
I0814 08:41:11.412691  110468 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 08:41:11.412772  110468 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 08:41:11.414561  110468 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 08:41:11.414697  110468 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/preemptor-pod
I0814 08:41:11.414711  110468 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/preemptor-pod
I0814 08:41:11.414826  110468 factory.go:550] Unable to schedule preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0814 08:41:11.414867  110468 factory.go:624] Updating pod condition for preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0814 08:41:11.417806  110468 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 08:41:11.418159  110468 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/pods/preemptor-pod: (2.41201ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52162]
I0814 08:41:11.418159  110468 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/pods/preemptor-pod: (2.495028ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52160]
I0814 08:41:11.419832  110468 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 08:41:11.419843  110468 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 08:41:11.420163  110468 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 08:41:11.514440  110468 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/pods/preemptor-pod: (4.866501ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52160]
I0814 08:41:11.612281  110468 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/pods/preemptor-pod: (2.63839ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52160]
I0814 08:41:11.711898  110468 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/pods/preemptor-pod: (1.755172ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52160]
I0814 08:41:11.813275  110468 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/pods/preemptor-pod: (2.967573ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52160]
I0814 08:41:11.911255  110468 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/pods/preemptor-pod: (1.539437ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52160]
I0814 08:41:12.011497  110468 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/pods/preemptor-pod: (1.950256ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52160]
I0814 08:41:12.111707  110468 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/pods/preemptor-pod: (2.161515ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52160]
I0814 08:41:12.211665  110468 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/pods/preemptor-pod: (2.103397ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52160]
I0814 08:41:12.311391  110468 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/pods/preemptor-pod: (1.857305ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52160]
E0814 08:41:12.324874  110468 event_broadcaster.go:242] Unable to write event: 'Post http://127.0.0.1:41169/apis/events.k8s.io/v1beta1/namespaces/permit-plugindaf70ffb-42ba-4dde-b941-52915d4a901c/events: dial tcp 127.0.0.1:41169: connect: connection refused' (may retry after sleeping)
I0814 08:41:12.405787  110468 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/preemptor-pod
I0814 08:41:12.405820  110468 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/preemptor-pod
I0814 08:41:12.405968  110468 factory.go:550] Unable to schedule preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0814 08:41:12.406003  110468 factory.go:624] Updating pod condition for preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0814 08:41:12.409088  110468 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/pods/preemptor-pod: (2.49164ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52162]
I0814 08:41:12.409378  110468 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/pods/preemptor-pod: (2.774665ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52160]
I0814 08:41:12.412058  110468 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/pods/preemptor-pod: (2.135917ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52160]
I0814 08:41:12.412813  110468 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 08:41:12.413278  110468 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 08:41:12.414977  110468 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 08:41:12.417953  110468 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 08:41:12.419950  110468 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 08:41:12.419956  110468 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 08:41:12.420251  110468 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 08:41:12.511596  110468 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/pods/preemptor-pod: (2.011518ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52160]
I0814 08:41:12.611519  110468 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/pods/preemptor-pod: (1.984311ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52160]
I0814 08:41:12.711566  110468 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/pods/preemptor-pod: (1.953184ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52160]
I0814 08:41:12.811755  110468 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/pods/preemptor-pod: (2.121026ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52160]
I0814 08:41:12.911411  110468 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/pods/preemptor-pod: (1.862218ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52160]
I0814 08:41:13.012159  110468 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/pods/preemptor-pod: (2.481159ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52160]
I0814 08:41:13.111854  110468 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/pods/preemptor-pod: (2.289199ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52160]
I0814 08:41:13.211875  110468 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/pods/preemptor-pod: (2.302668ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52160]
I0814 08:41:13.305543  110468 httplog.go:90] GET /api/v1/namespaces/default: (1.865186ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52160]
I0814 08:41:13.307790  110468 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (1.5071ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52160]
I0814 08:41:13.310090  110468 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.692007ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52160]
I0814 08:41:13.311528  110468 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/pods/preemptor-pod: (2.072084ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52162]
I0814 08:41:13.411591  110468 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/pods/preemptor-pod: (2.014093ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52162]
I0814 08:41:13.412978  110468 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 08:41:13.413432  110468 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 08:41:13.415287  110468 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 08:41:13.415461  110468 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/preemptor-pod
I0814 08:41:13.415486  110468 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/preemptor-pod
I0814 08:41:13.415654  110468 factory.go:550] Unable to schedule preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0814 08:41:13.415717  110468 factory.go:624] Updating pod condition for preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0814 08:41:13.417682  110468 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/pods/preemptor-pod: (1.452483ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52160]
I0814 08:41:13.417682  110468 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/pods/preemptor-pod: (1.703655ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52162]
I0814 08:41:13.418111  110468 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 08:41:13.420158  110468 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 08:41:13.420169  110468 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 08:41:13.420363  110468 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 08:41:13.511803  110468 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/pods/preemptor-pod: (2.200252ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52162]
I0814 08:41:13.612219  110468 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/pods/preemptor-pod: (2.60391ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52162]
I0814 08:41:13.711331  110468 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/pods/preemptor-pod: (1.718638ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52162]
I0814 08:41:13.811579  110468 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/pods/preemptor-pod: (2.001798ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52162]
I0814 08:41:13.911553  110468 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/pods/preemptor-pod: (1.936103ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52162]
I0814 08:41:14.019703  110468 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/pods/preemptor-pod: (7.404954ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52162]
I0814 08:41:14.111353  110468 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/pods/preemptor-pod: (1.679668ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52162]
I0814 08:41:14.211850  110468 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/pods/preemptor-pod: (2.240982ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52162]
I0814 08:41:14.311923  110468 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/pods/preemptor-pod: (2.292704ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52162]
I0814 08:41:14.411813  110468 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/pods/preemptor-pod: (2.244659ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52162]
I0814 08:41:14.413169  110468 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 08:41:14.413582  110468 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 08:41:14.415489  110468 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 08:41:14.415648  110468 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/preemptor-pod
I0814 08:41:14.415662  110468 scheduler.go:477] Attempting to schedule pod: preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/preemptor-pod
I0814 08:41:14.415830  110468 factory.go:550] Unable to schedule preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.; waiting
I0814 08:41:14.415885  110468 factory.go:624] Updating pod condition for preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0814 08:41:14.417934  110468 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/pods/preemptor-pod: (1.686157ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52160]
I0814 08:41:14.418279  110468 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 08:41:14.418946  110468 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/pods/preemptor-pod: (2.702756ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52162]
I0814 08:41:14.420302  110468 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 08:41:14.420490  110468 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 08:41:14.420518  110468 reflector.go:243] k8s.io/client-go/informers/factory.go:133: forcing resync
I0814 08:41:14.512515  110468 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/pods/preemptor-pod: (2.894515ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52162]
I0814 08:41:14.612176  110468 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/pods/preemptor-pod: (2.569321ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52162]
I0814 08:41:14.614342  110468 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/pods/preemptor-pod: (1.600374ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52162]
I0814 08:41:14.616500  110468 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/pods/waiting-pod: (1.600917ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52162]
I0814 08:41:14.621576  110468 httplog.go:90] DELETE /api/v1/namespaces/preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/pods/waiting-pod: (4.58009ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52162]
I0814 08:41:14.625763  110468 scheduling_queue.go:830] About to try and schedule pod preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/preemptor-pod
I0814 08:41:14.625809  110468 scheduler.go:473] Skip schedule deleting pod: preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/preemptor-pod
I0814 08:41:14.627503  110468 httplog.go:90] DELETE /api/v1/namespaces/preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/pods/preemptor-pod: (5.508379ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52162]
I0814 08:41:14.628386  110468 httplog.go:90] POST /apis/events.k8s.io/v1beta1/namespaces/preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/events: (2.141653ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52160]
I0814 08:41:14.630649  110468 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/pods/waiting-pod: (1.48249ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52160]
I0814 08:41:14.633376  110468 httplog.go:90] GET /api/v1/namespaces/preempt-with-permit-plugin1a64c135-c498-4986-b34f-b6b8c2c359ff/pods/preemptor-pod: (1.06512ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52160]
E0814 08:41:14.633921  110468 scheduling_queue.go:833] Error while retrieving next pod from scheduling queue: scheduling queue is closed
I0814 08:41:14.634355  110468 httplog.go:90] GET /apis/apps/v1/statefulsets?allowWatchBookmarks=true&resourceVersion=28905&timeout=9m44s&timeoutSeconds=584&watch=true: (1m1.230161798s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48540]
I0814 08:41:14.634372  110468 httplog.go:90] GET /apis/apps/v1/replicasets?allowWatchBookmarks=true&resourceVersion=28905&timeout=7m24s&timeoutSeconds=444&watch=true: (1m1.230197794s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48310]
I0814 08:41:14.634395  110468 httplog.go:90] GET /api/v1/nodes?allowWatchBookmarks=true&resourceVersion=28903&timeout=5m55s&timeoutSeconds=355&watch=true: (1m1.230632799s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48530]
I0814 08:41:14.634460  110468 httplog.go:90] GET /apis/policy/v1beta1/poddisruptionbudgets?allowWatchBookmarks=true&resourceVersion=28905&timeout=8m43s&timeoutSeconds=523&watch=true: (1m1.229731569s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48534]
I0814 08:41:14.634556  110468 httplog.go:90] GET /api/v1/services?allowWatchBookmarks=true&resourceVersion=29348&timeout=6m57s&timeoutSeconds=417&watch=true: (1m1.229669843s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48490]
I0814 08:41:14.634611  110468 httplog.go:90] GET /api/v1/pods?allowWatchBookmarks=true&resourceVersion=28903&timeout=6m22s&timeoutSeconds=382&watch=true: (1m1.231631283s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48528]
I0814 08:41:14.634613  110468 httplog.go:90] GET /api/v1/persistentvolumeclaims?allowWatchBookmarks=true&resourceVersion=28903&timeout=8m16s&timeoutSeconds=496&watch=true: (1m1.230259946s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48532]
I0814 08:41:14.634557  110468 httplog.go:90] GET /apis/storage.k8s.io/v1beta1/csinodes?allowWatchBookmarks=true&resourceVersion=28905&timeout=6m12s&timeoutSeconds=372&watch=true: (1m1.233154471s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48538]
I0814 08:41:14.634705  110468 httplog.go:90] GET /api/v1/persistentvolumes?allowWatchBookmarks=true&resourceVersion=28903&timeout=8m43s&timeoutSeconds=523&watch=true: (1m1.228272771s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48544]
I0814 08:41:14.634780  110468 httplog.go:90] GET /api/v1/replicationcontrollers?allowWatchBookmarks=true&resourceVersion=28903&timeout=5m45s&timeoutSeconds=345&watch=true: (1m1.227988159s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48542]
I0814 08:41:14.634828  110468 httplog.go:90] GET /apis/storage.k8s.io/v1/storageclasses?allowWatchBookmarks=true&resourceVersion=28905&timeout=5m50s&timeoutSeconds=350&watch=true: (1m1.233027324s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48536]
I0814 08:41:14.639656  110468 httplog.go:90] DELETE /api/v1/nodes: (4.924139ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52160]
I0814 08:41:14.639876  110468 controller.go:176] Shutting down kubernetes service endpoint reconciler
I0814 08:41:14.641508  110468 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.265999ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52160]
I0814 08:41:14.643623  110468 httplog.go:90] PUT /api/v1/namespaces/default/endpoints/kubernetes: (1.74976ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52160]
--- FAIL: TestPreemptWithPermitPlugin (64.79s)
    framework_test.go:1618: Expected the preemptor pod to be scheduled. error: timed out waiting for the condition
    framework_test.go:1622: Expected the waiting pod to get preempted and deleted

				from junit_eb089aee80105aff5db0557ae4449d31f19359f2_20190814-083326.xml

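The failure mode here is a poll timeout: the test fetches the preemptor pod roughly every 100ms (the long run of GET .../pods/preemptor-pod lines above), the scheduler keeps answering "no fit: 0/1 nodes are available: 1 Insufficient cpu, 1 Insufficient memory.", and after about a minute wait.Poll from k8s.io/apimachinery gives up with its standard "timed out waiting for the condition" error. A minimal sketch of such a poll follows — the helper name, timeout, and nodeName condition are illustrative assumptions, not the test's literal code:

package integration

import (
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForPodScheduled polls until the scheduler has bound the pod to a node.
// On timeout, wait.Poll returns wait.ErrWaitTimeout, whose message is the
// "timed out waiting for the condition" text in the failure above.
// (Hypothetical helper; signatures match the pre-context client-go vendored here.)
func waitForPodScheduled(cs kubernetes.Interface, ns, name string) error {
	return wait.Poll(100*time.Millisecond, time.Minute, func() (bool, error) {
		pod, err := cs.CoreV1().Pods(ns).Get(name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		// Until the scheduler sets spec.nodeName, the pod stays in the
		// PodScheduled==False, Reason=Unschedulable state logged above.
		return pod.Spec.NodeName != "", nil
	})
}

The companion assertion at framework_test.go:1622 points at the cause: the waiting pod was never preempted, so the node's cpu and memory stayed occupied and the condition could never turn true; the DELETE calls at 08:41:14 appear to be test cleanup rather than a preemption.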


k8s.io/kubernetes/vendor/k8s.io/apiextensions-apiserver/test/integration TestScaleSubresource 1.64s

go test -v k8s.io/kubernetes/vendor/k8s.io/apiextensions-apiserver/test/integration -run TestScaleSubresource$
=== RUN   TestScaleSubresource
I0814 08:42:13.340556  112778 secure_serving.go:160] Stopped listening on 127.0.0.1:33617
I0814 08:42:13.340603  112778 nonstructuralschema_controller.go:203] Shutting down NonStructuralSchemaConditionController
I0814 08:42:13.340642  112778 establishing_controller.go:84] Shutting down EstablishingController
I0814 08:42:13.340691  112778 crd_finalizer.go:267] Shutting down CRDFinalizer
I0814 08:42:13.340729  112778 customresource_discovery_controller.go:219] Shutting down DiscoveryController
I0814 08:42:13.340754  112778 apiapproval_controller.go:197] Shutting down KubernetesAPIApprovalPolicyConformantConditionController
I0814 08:42:13.340675  112778 naming_controller.go:299] Shutting down NamingConditionController
I0814 08:42:13.341374  112778 serving.go:312] Generated self-signed cert (/tmp/apiextensions-apiserver444357889/apiserver.crt, /tmp/apiextensions-apiserver444357889/apiserver.key)
W0814 08:42:14.069771  112778 reflector.go:305] k8s.io/apiextensions-apiserver/pkg/client/informers/internalversion/factory.go:117: watch of *apiextensions.CustomResourceDefinition ended with: very short watch: k8s.io/apiextensions-apiserver/pkg/client/informers/internalversion/factory.go:117: Unexpected watch close - watch lasted less than a second and no items received
W0814 08:42:14.146761  112778 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
I0814 08:42:14.148427  112778 client.go:354] parsed scheme: ""
I0814 08:42:14.148452  112778 client.go:354] scheme "" not registered, fallback to default scheme
I0814 08:42:14.148517  112778 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 08:42:14.148578  112778 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:42:14.150267  112778 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0814 08:42:14.150757  112778 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
I0814 08:42:14.154054  112778 secure_serving.go:116] Serving securely on 127.0.0.1:44769
I0814 08:42:14.154148  112778 crd_finalizer.go:255] Starting CRDFinalizer
I0814 08:42:14.154288  112778 naming_controller.go:288] Starting NamingConditionController
I0814 08:42:14.154311  112778 establishing_controller.go:73] Starting EstablishingController
I0814 08:42:14.154328  112778 nonstructuralschema_controller.go:191] Starting NonStructuralSchemaConditionController
I0814 08:42:14.154345  112778 apiapproval_controller.go:185] Starting KubernetesAPIApprovalPolicyConformantConditionController
I0814 08:42:14.154965  112778 customresource_discovery_controller.go:208] Starting DiscoveryController
E0814 08:42:14.155275  112778 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Service: Get http://127.1.2.3:12345/api/v1/services?limit=500&resourceVersion=0: dial tcp 127.1.2.3:12345: connect: connection refused
I0814 08:42:14.341957  112778 client.go:354] parsed scheme: ""
I0814 08:42:14.341997  112778 client.go:354] scheme "" not registered, fallback to default scheme
I0814 08:42:14.342145  112778 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 08:42:14.342223  112778 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:42:14.342714  112778 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:42:14.866576  112778 client.go:354] parsed scheme: ""
I0814 08:42:14.866606  112778 client.go:354] scheme "" not registered, fallback to default scheme
I0814 08:42:14.866646  112778 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 08:42:14.866728  112778 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:42:14.867293  112778 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:42:14.868165  112778 client.go:354] parsed scheme: ""
I0814 08:42:14.868200  112778 client.go:354] scheme "" not registered, fallback to default scheme
I0814 08:42:14.868263  112778 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 08:42:14.868355  112778 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:42:14.869114  112778 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0814 08:42:14.976992  112778 cacher.go:154] Terminating all watchers from cacher *apiextensions.CustomResourceDefinition
--- FAIL: TestScaleSubresource (1.64s)
    testserver.go:141: runtime-config=map[api/all:true]
    testserver.go:142: Starting apiextensions-apiserver on port 44769...
    testserver.go:160: Waiting for /healthz to be ok...
    subresources_test.go:345: Scale.Metadata.SelfLink: expected: /apis/mygroup.example.com/v1beta1/namespaces/not-the-default/noxus/foo/scale, got: 

				from junit_eb089aee80105aff5db0557ae4449d31f19359f2_20190814-083326.xml

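The empty "got:" is consistent with the apiserver no longer populating metadata.selfLink in this run. A hedged reconstruction of the failing check — not the literal code in subresources_test.go; the function name and scaleClient parameter are illustrative:

package integration

import (
	"testing"

	"k8s.io/apimachinery/pkg/runtime/schema"
	scaleclient "k8s.io/client-go/scale"
)

// checkScaleSelfLink reads the /scale subresource of the "foo" custom
// resource and compares its selfLink against the request path, which is
// what the expected value in the failure message spells out.
func checkScaleSelfLink(t *testing.T, scaleClient scaleclient.ScalesGetter) {
	gr := schema.GroupResource{Group: "mygroup.example.com", Resource: "noxus"}
	scale, err := scaleClient.Scales("not-the-default").Get(gr, "foo")
	if err != nil {
		t.Fatal(err)
	}
	want := "/apis/mygroup.example.com/v1beta1/namespaces/not-the-default/noxus/foo/scale"
	// An apiserver that no longer fills in metadata.selfLink returns ""
	// here, reproducing the "got: " (empty) in the failure above.
	if got := scale.GetSelfLink(); got != want {
		t.Fatalf("Scale.Metadata.SelfLink: expected: %s, got: %s", want, got)
	}
}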


k8s.io/kubernetes/vendor/k8s.io/apiextensions-apiserver/test/integration TestSelfLink 2.14s

go test -v k8s.io/kubernetes/vendor/k8s.io/apiextensions-apiserver/test/integration -run TestSelfLink$
=== RUN   TestSelfLink
I0814 08:40:33.158737  112778 crd_finalizer.go:267] Shutting down CRDFinalizer
I0814 08:40:33.158756  112778 apiapproval_controller.go:197] Shutting down KubernetesAPIApprovalPolicyConformantConditionController
I0814 08:40:33.158776  112778 naming_controller.go:299] Shutting down NamingConditionController
I0814 08:40:33.158794  112778 customresource_discovery_controller.go:219] Shutting down DiscoveryController
I0814 08:40:33.158813  112778 establishing_controller.go:84] Shutting down EstablishingController
I0814 08:40:33.159738  112778 serving.go:312] Generated self-signed cert (/tmp/apiextensions-apiserver907409869/apiserver.crt, /tmp/apiextensions-apiserver907409869/apiserver.key)
W0814 08:40:33.355119  112778 reflector.go:305] k8s.io/apiextensions-apiserver/pkg/client/informers/internalversion/factory.go:117: watch of *apiextensions.CustomResourceDefinition ended with: very short watch: k8s.io/apiextensions-apiserver/pkg/client/informers/internalversion/factory.go:117: Unexpected watch close - watch lasted less than a second and no items received
W0814 08:40:33.832454  112778 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
I0814 08:40:33.834210  112778 client.go:354] parsed scheme: ""
I0814 08:40:33.834235  112778 client.go:354] scheme "" not registered, fallback to default scheme
I0814 08:40:33.834277  112778 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 08:40:33.834325  112778 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:40:33.834801  112778 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:40:33.834823  112778 client.go:354] parsed scheme: ""
I0814 08:40:33.834882  112778 client.go:354] scheme "" not registered, fallback to default scheme
I0814 08:40:33.834959  112778 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 08:40:33.835047  112778 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:40:33.835508  112778 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0814 08:40:33.837394  112778 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
I0814 08:40:33.839706  112778 secure_serving.go:116] Serving securely on 127.0.0.1:41187
I0814 08:40:33.840592  112778 crd_finalizer.go:255] Starting CRDFinalizer
I0814 08:40:33.840638  112778 customresource_discovery_controller.go:208] Starting DiscoveryController
I0814 08:40:33.840646  112778 naming_controller.go:288] Starting NamingConditionController
I0814 08:40:33.840653  112778 establishing_controller.go:73] Starting EstablishingController
I0814 08:40:33.840661  112778 nonstructuralschema_controller.go:191] Starting NonStructuralSchemaConditionController
I0814 08:40:33.840673  112778 apiapproval_controller.go:185] Starting KubernetesAPIApprovalPolicyConformantConditionController
E0814 08:40:33.841937  112778 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Service: Get http://127.1.2.3:12345/api/v1/services?limit=500&resourceVersion=0: dial tcp 127.1.2.3:12345: connect: connection refused
I0814 08:40:34.161487  112778 client.go:354] parsed scheme: ""
I0814 08:40:34.161513  112778 client.go:354] scheme "" not registered, fallback to default scheme
I0814 08:40:34.161549  112778 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 08:40:34.161627  112778 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:40:34.162164  112778 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0814 08:40:34.490251  112778 reflector.go:305] k8s.io/apiextensions-apiserver/pkg/client/informers/internalversion/factory.go:117: watch of *apiextensions.CustomResourceDefinition ended with: very short watch: k8s.io/apiextensions-apiserver/pkg/client/informers/internalversion/factory.go:117: Unexpected watch close - watch lasted less than a second and no items received
I0814 08:40:34.745730  112778 client.go:354] parsed scheme: ""
I0814 08:40:34.745761  112778 client.go:354] scheme "" not registered, fallback to default scheme
I0814 08:40:34.745800  112778 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 08:40:34.745862  112778 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:40:34.746379  112778 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
E0814 08:40:34.842519  112778 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Service: Get http://127.1.2.3:12345/api/v1/services?limit=500&resourceVersion=0: dial tcp 127.1.2.3:12345: connect: connection refused
I0814 08:40:35.259469  112778 client.go:354] parsed scheme: ""
I0814 08:40:35.259497  112778 client.go:354] scheme "" not registered, fallback to default scheme
I0814 08:40:35.259553  112778 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0814 08:40:35.259605  112778 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0814 08:40:35.260173  112778 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0814 08:40:35.299568  112778 cacher.go:154] Terminating all watchers from cacher *apiextensions.CustomResourceDefinition
--- FAIL: TestSelfLink (2.14s)
    testserver.go:141: runtime-config=map[api/all:true]
    testserver.go:142: Starting apiextensions-apiserver on port 41187...
    testserver.go:160: Waiting for /healthz to be ok...
    basic_test.go:649: expected /apis/mygroup.example.com/v1beta1/namespaces/not-the-default/noxus/foo, got 
    basic_test.go:668: expected /apis/mygroup.example.com/v1beta1/curlets/foo, got 

				from junit_eb089aee80105aff5db0557ae4449d31f19359f2_20190814-083326.xml

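Same symptom on plain custom resources: both the namespaced "noxus" object and the cluster-scoped "curlets" object came back with an empty selfLink. A sketch of the two assertions from basic_test.go lines 649 and 668, assuming a dynamic client; variable names are illustrative:

package integration

import (
	"testing"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/client-go/dynamic"
)

// checkSelfLinks covers both shapes of URL from the failure message:
// a namespaced resource path and a cluster-scoped resource path.
func checkSelfLinks(t *testing.T, client dynamic.Interface) {
	gv := schema.GroupVersion{Group: "mygroup.example.com", Version: "v1beta1"}

	// Namespaced custom resource.
	obj, err := client.Resource(gv.WithResource("noxus")).
		Namespace("not-the-default").Get("foo", metav1.GetOptions{})
	if err != nil {
		t.Fatal(err)
	}
	if want := "/apis/mygroup.example.com/v1beta1/namespaces/not-the-default/noxus/foo"; obj.GetSelfLink() != want {
		t.Errorf("expected %s, got %s", want, obj.GetSelfLink())
	}

	// Cluster-scoped custom resource.
	obj, err = client.Resource(gv.WithResource("curlets")).Get("foo", metav1.GetOptions{})
	if err != nil {
		t.Fatal(err)
	}
	if want := "/apis/mygroup.example.com/v1beta1/curlets/foo"; obj.GetSelfLink() != want {
		t.Errorf("expected %s, got %s", want, obj.GetSelfLink())
	}
}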

